Four wrong paths in the development of physics and the framework of the evolutionary platform
Note: The English version of this paper was machine-translated from the Chinese by Google Translate and has not been edited; apologies for any inaccuracies.
Pingbo Zhao
My lifelong dream is that one day every child graduating from kindergarten will know the formula for the magic numbers of the nucleus: 82 = 2+4+6+8+12+20+30.
Keywords: nuclear magic numbers, indistinguishable state, synergetic state, ordered entropy, two types of entropy maximization, global warming, geomagnetic reversal model, electronic structure steady state, primordial angular momentum hypothesis, super-energy threshold, stellar nuclear fusion stability, galaxy rotation curve, dark matter, dark energy, biological vitality, scaling mechanism, life evolution, swing mechanism, spatial degeneracy, topological degeneracy, biological chirality, cyclic path, evolution parameter, irreversibility, bifurcation path, self-organized criticality, seesaw model, topological degenerate state, complex network, power-law distribution, cooperative tunneling state, quantum entangled state, uncertainty, paradigm shift, evolutionary platform, bipolarity, diversity, criticality
This article attempts to explain, in the form of a popular-science report rather than an academic paper, that existing physical theory has gone astray in four directions, and proposes specific corrections, culminating in the framework of an evolutionary platform. The analysis of the various fields of physics, including the complex networks of molecular biology and economics, follows the logic of correcting these deviations and building the evolutionary platform, rather than the conventional research-report paradigm. Such leaps of ideas and content do not meet the requirements of regular scientific journals, and I have no intention of deleting or revising my own views in order to be published; rather, I strongly hope the article will be recognized and shared by readers online. Its value lies in pointing out specifically which questions deserve further thought and exploration. Below I briefly describe the four wrong paths in physics that need correction:
The first wrong path is taking things for granted, which shows up above all in the understanding of elementary particles and cosmology. Physicists assume that for leptons and quarks one can only analyze interactions, never structure. In this article I give an expression for the magic numbers of the atomic nucleus, 82 = 2+4+6+8+12+20+30, and introduce the concept of full homogeneity to argue for the steady-state nature of electronic structure. The difficulties posed by the electron's lack of spatial scale and by the divergences of renormalization stem from this steady-state structure. It is likewise taken for granted that the universe began the Big Bang in a high-temperature, thermally disordered state. The primordial angular momentum hypothesis holds instead that the early universe was in an ordered state of quantum gyroscopes whose vortex directions were disordered. This leads spontaneously to the flatness of the universe without an inflationary model, and the galaxy rotation curve can be explained by the path dependence of the entropy force without dark matter. The hypothesis can also account for the origin of the three types of galaxies, spiral, elliptical, and irregular, and for the stability of stellar nuclear fusion. The dark energy problem then does not arise.
The second wrong path is that too much of today's physics research springs from imagination rather than from seeking physical laws in the questions posed by the objects of study. The "one gene, one enzyme" hypothesis, the central dogma, and the operon concept are called the three cornerstones of molecular biology, yet physicists have never tried to extract new physical laws from them. Early on I came to believe that the central dogma DNA→RNA→protein, which embodies irreversibility, can be understood as a cyclic path, physically manifested as the combination of two types of phase transition, gas-liquid and liquid-solid, with the two types of entropy maximization playing a role in gene repair. I also proposed long ago the concept of bifurcated paths for the other two cornerstones, but it was only in 2013, when I learned of Tang Chao's seesaw model, that I felt I could combine it with Wen Xiaogang's concept of topological order to describe an evolutionary platform based on self-organized criticality. For the complex-network platform on which economic development evolves, the concepts of energy, entropy, and temperature can likewise be introduced to derive the power-law distribution of the economic system.
The third wrong path is that physics research ignores entropy-force analysis at the level of the system. Landau's theory of superfluid helium holds that the entropy of the superfluid state is zero. How, then, can it explain the Rollin film? After the λ transition, liquid helium in an open container climbs over the rim of the bowl and drips out, work done from a single heat source in apparent violation of the second law of thermodynamics. This paper proposes an ordered-entropy explanation for liquid helium. The Cooper pairs of BCS theory are electron pairs with opposite momenta: how can their merely passing each other lower the energy of the system? And if the energies of all Cooper pairs are not strictly equal, how can the never-decaying superconducting current be explained? I believe superconductivity must come from all Cooper pairs having identical energy, and that the quantum tunneling of the topological degenerate state confined to the two-dimensional surface of a superconductor must likewise be driven by ordered entropy, so that the magnetic field is expelled from the superconductor to form the Meissner effect. Ordered entropy is reflected in three types of self-organized macroscopic quantum effect: the cooperative tunneling state, which comes from the geomagnetic reversal model used to explain global warming; the quantum entangled state, a special manifestation of identical states; and the topological degenerate state, an extension of the concept of topological order.
The fourth wrong path is that today's science starts only from individual associations or interactions and has never built the concept of a platform. People first try to give deterministic associations between concepts, such as F = ma and E = mc², which are called laws. When a deterministic association cannot be given, a probabilistic relationship is constructed, such as the Schrödinger equation or a statistical distribution. When neither works, the relation is declared uncertain, as in the uncertainty relation. But physicists have not yet developed platform thinking about system evolution: computer science has advanced from the 0-and-1 machine-language platform opened up by von Neumann, to the operating-system platform, and on to today's Internet and AI platforms. If physical laws are to be built on an evolutionary platform, we must first ask whether the existing system of scientific thought needs a paradigm shift. Describing the complex networks of cosmic expansion, life evolution, and economic development on an evolutionary platform reveals their common characteristics of bipolarity, diversity, and criticality.
Introduction: The causes of the three problems in physics and the consequences of forming an evolutionary platform
After the college entrance examination in 1978, a popular-science book, One, Two, Three... Infinity (translated into Chinese under the title "From One to Infinity"), changed my life choice. I had originally wanted to study mathematics in college, because Xu Chi's reportage "Goldbach's Conjecture" had shocked the whole of China that year. The conjecture is considered a "pearl" in the crown of mathematics: the Chinese mathematician Chen Jingrun had proved "1+2", but the final "1+1" remained "one step away". That year many candidates wanted to pick this pearl, and I was one of them. But this popular-science book by Gamow, newly translated and published that year, made me feel that the difficulties of physics are far more meaningful than Goldbach's conjecture. After reading it I summarized the difficulties of physics into three "surprise and explosion" problems, involving the smallest elementary particles, the largest celestial bodies, the stars, and the vitality and evolution of life. I therefore chose to apply to the Department of Modern Physics of the University of Science and Technology of China.
Let me start with the first explosion problem. In One, Two, Three... Infinity, Gamow wrote that the structure of matter bottoms out in only "three entities" that are no longer divisible: nucleons, electrons, and neutrinos; this is "getting to the bottom of the matter". A translator's note adds that when Gamow wrote the book he did not yet know of the quark model. Yet I felt at the time that even if there are more basic quarks, whether matter is infinitely divisible does not matter: the book says that mass and energy can be converted into each other, so no matter how great the energy, it may not be able to "split" a most basic particle in two, but can only blast out a new whole particle. The more fundamental problem is that at the level of basic particles such as electrons and protons, the constituents carry like charges, and like charges repel each other, a basic law of matter in nature. Why does this basic law fail once we "get to the bottom of the matter", so that mutually repelling things gather together without exploding?
Another difficulty: the book says stars explode in their "twilight years", which puzzled me, for surely a thing should explode when its fuel is plentiful, not in old age. Look up at the sky: the nuclear fusion of the Sun and stars shines steadily and does not explode. The book also says that the early solar system was unstable and may have suffered a great collision of interstellar matter, so that nucleosynthesis material was thrown out at the start of the Sun's fusion, from which the planetary system, including our Earth, formed. This surprised me even more: it seems to say the Sun exploded once early on, yet did not explode completely, and so our Earth was formed. I felt that once the Sun's fusion ignited, it should explode in a chain reaction, like an atomic bomb detonating a hydrogen bomb. Why did the Sun not explode completely at the beginning, and why does it not explode now, when fuel is most plentiful, but instead wait until its "twilight years" for the final explosion?
The "surprise and explosion" of life phenomena came from an earlier confusion of mine. China had reported its scientists' synthesis of bovine insulin as "artificially synthesized life". My teacher's lecture surprised me: if even one amino acid of synthetic bovine insulin is "docked" incorrectly, there is no biological vitality, and injection into mice produces no reaction; only when every "docking" is accurate do the mice convulse after injection. The chapter "The Mystery of Life" in One, Two, Three... Infinity describes the third explosion. Discussing the isomers of biological molecules, Gamow first gives the example of the three configurations α, β, and γ of the explosive TNT. This made me think that, since biological molecules contain N⁻ ions as TNT does, the vitality of life might be related to the explosive power of TNT, and also to the convulsions of the laboratory mice. My analysis of the physical mechanism of life phenomena later in this text has its source in the "surprise and explosion" that those mice and TNT brought me.
In the 1990s I envisioned using a geomagnetic reversal model to explain global warming, but my doctoral supervisor, Academician Pu Fuke, did not approve, which became a lifelong pain point. Later, the work of two Chinese academicians, Tang Chao of the Chinese Academy of Sciences, who proposed the seesaw model, and Wen Xiaogang of the US National Academy of Sciences, who proposed the concept of topological order, made me feel that the geomagnetic reversal model could be revived. So in 2013 I took up physics research again. This article traces my thinking from those three childhood difficulties to today. The breakthrough came from a magic-number expression I found as an undergraduate studying nuclear physics, 82 = 2+4+6+8+12+20+30, discussed in Section 1 below. It brings with it the concepts of the indistinguishable state and the synergetic state, distinct from the equilibrium state. My alternative line of physical thinking has been based on these three types of state, and finally formed the idea of the evolutionary platform, which also comes from combining the seesaw and topological-order concepts above.
That is how the content of this article came about. By way of introduction, I will briefly describe the consequences of the evolutionary platform idea so that readers can follow the article more smoothly. A piece of scientific news in 2013 changed the trajectory of my life once again: to "understand the mutual inhibition and mutual balance between mesoderm genes and ectoderm genes during reprogramming", research groups at Peking University, including that of Professor Tang Chao, proposed a seesaw model that may determine the maintenance and change of cell fate. The work was published in the top biology journal Cell (Shu et al., Cell 153 (2013) 963), and the cover of that issue is a schematic of the seesaw model.
The cover photo of a Chinese boy and girl on a seesaw interprets the mouse stem-cell work beneath it, which shows that cells may occupy three states: the ME state and the ECT state are the consequences of the dominance of one of two inductive forces, while the pluripotent state is a higher-energy state in which the two inductive forces inhibit each other. Why did these two pictures shock me so much? For two reasons: first, the model seems to share a physical cause with the geomagnetic reversal model I had proposed years before; second, I saw that the seesaw, combined with the concept of topological order, could be used to construct a new idea of an evolutionary platform.
Before discussing the geomagnetic reversal model in detail, I should explain that Tang Chao also co-proposed the famous BTW model in 1987 (the T stands for Tang). I will analyze this model later; here I only explain the meaning of self-organized criticality within it. In the evolution of non-equilibrium systems, a system driven very slowly is often carried to a critical state with spontaneous scaling in time and space, unlike anything in physics' previous understanding, in which the scaling of renormalization was imposed by hand rather than spontaneous. This is why I had wondered in my early years why the N and S poles of the Earth's magnetic field reverse once every tens or hundreds of thousands of years. The physical mechanism is still unclear, but might it, too, reflect the self-organized time scaling of a slowly driven system?
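The BTW sandpile is simple enough to sketch in a few lines. The grid size, grain count, and random seed below are arbitrary illustrative choices, not parameters from the original 1987 paper; this is only a minimal sketch of the toppling rule:

```python
import random

def btw_sandpile(size=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any site
    holding 4 or more grains topples, giving one grain to each of its 4
    neighbours (grains fall off the open boundary). Returns the avalanche
    size (number of topplings) triggered by each drop."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        topplings = 0
        unstable = [(x, y) for x in range(size) for y in range(size)
                    if grid[x][y] >= 4]
        while unstable:
            x, y = unstable.pop()
            while grid[x][y] >= 4:          # topple until locally stable
                grid[x][y] -= 4
                topplings += 1
                for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                    if 0 <= nx < size and 0 <= ny < size:
                        grid[nx][ny] += 1
                        if grid[nx][ny] >= 4:
                            unstable.append((nx, ny))
        avalanches.append(topplings)
    return avalanches

# Under slow driving the avalanche sizes become broadly distributed
# (a power law in the large-system limit): self-organized criticality.
sizes = btw_sandpile()
```

The point of the sketch is the separation of time scales: driving is one grain at a time, yet responses range from nothing to system-spanning avalanches.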
Geomagnetic reversal has another feature, which can be found on Wikipedia: 183 reversals have occurred in the past 83 million years, one every 450,000 years on average, yet a reversal transition itself lasts only 2,000 to 12,000 years, and most of the time the Earth's field remains stably in one polarity or the other. I think this rules out a classical motion: if geomagnetic reversal came from some slow classical rotation of a special charge distribution in the core, the geomagnetic intensity would show continuous periodic fluctuation. On a large time scale, geomagnetic reversal clearly looks like a discontinuous macroscopic quantum effect. I conjecture that the Earth itself constitutes a quantum potential barrier, and that the field tunnels between the north and south poles much as the nitrogen atom in ammonia (NH3) tunnels back and forth through the plane of the three hydrogen atoms. This forms a quantum steady state, discussed in detail later.
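The numbers quoted above can be checked with one line of arithmetic; they also show how small a fraction of its history the field spends mid-reversal:

```python
# Statistics quoted in the text: 183 reversals in the past 83 million years,
# with an individual transition lasting roughly 2,000-12,000 years.
reversals = 183
span_years = 83e6
mean_interval = span_years / reversals            # ~454,000 years on average
max_transition_fraction = 12_000 / mean_interval  # field mid-reversal under 3% of the time
print(round(mean_interval), f"{max_transition_fraction:.1%}")
```

This is what "the field remains stable most of the time" means quantitatively: even at the longest quoted transition time, the field is in transition less than 3% of the time.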
Furthermore, the cycle of global warming is not synchronized with the geomagnetic reversal cycle; the former is much longer. My understanding is that the continual reversal of the Earth's magnetic poles itself constitutes a steady state, which may also be called a topological steady state. (Wen Xiaogang, too, once called topological order a "topological steady state", though the term was later dropped from the Wikipedia entry on topological order.) This means that as long as the reversal process is maintained and does not destabilize, the Earth's temperature will not rise. Here the analogy with the seesaw model can be drawn: is the steady state of the poles reversing back and forth comparable to the higher-energy pluripotent state of that model? Once the steady state destabilizes, the seesaw is tipped by some induced force; in geomagnetic terms, the steady state collapses and the Earth warms. The geomagnetic reversal model thus combines the self-organized criticality and the seesaw model proposed by Tang Chao with the topological-order steady state proposed by Wen Xiaogang.
The core of this view is that the periodic reversal of the Earth's magnetic poles is itself a macroscopic quantum steady state, analogous to the seesaw model, and such a steady state should likewise be maintained by two "evenly matched" forces. When I shared the idea with my mentor and fellow students at the Institute of Physics, Chinese Academy of Sciences, their first reaction was that treating the entire Earth as a macroscopic quantum effect was unreliable. But no one questions the macroscopic quantum behavior of superconductors, and given the comparable seesaw behavior at the cellular level, is the idea really so outrageous? In fact, the field's behavior involves only the reorientation of individual quantum magnetic moments; I regard the reversal as their cooperative tunneling behavior, similar to the self-organized synchronization of photon frequencies in a laser. The macroscopic evolution of geomagnetic reversal also shows self-organized criticality: after destabilization, the energy is converted to heat, warming the Earth, and the subsequent evolution is the slow re-driving of the Earth's field and the re-formation of the reversal cycle.
However, since this geomagnetic reversal model has no supporting microscopic physical picture, I cannot stake everything on it; what I have formed instead is the idea of the evolutionary platform. The evolution of the universe, of life, and of economic systems all display platform characteristics, much as computer science evolved from the machine-language platform through the operating-system platform to today's AI platform: under the norms of a platform framework, a system's evolution grows ever more complex. In this article I also construct a series of concepts, evolution parameters, cyclic paths, bifurcated paths, spatial degeneracy and topological degeneracy, and bipolarity, to describe such a platform. An idea this far ahead of its time cannot, and is not intended to, be written up as a standard journal submission; I only wish to offer ideas to young scholars in the form of an online article. As an introduction, let me first explain the basis of the evolutionary platform concept, the system view, through the following two examples.
The first example: while teaching in college I had to give an open class on the hydrogen atom model, and I read widely in the history of physics. I learned that in 1913 Bohr first derived the Rydberg constant of the hydrogen atom, R = 109737.315 cm⁻¹, which differed from the experimental value of 109677.58 cm⁻¹ by about 0.05%. In 1914 Bohr therefore revised the model into a center-of-mass description, replacing the electron mass with the reduced mass of the nucleus and electron, which removed the discrepancy. The example taught me that any measurement reflects the behavior of the measured object relative to its center-of-mass system: the light emitted by the moving electron reflects not its motion as an individual relative to our instrument, but the overall behavior of its center-of-mass system. This simple example led me to a system-based concept of path dependence, which today's physics lacks.
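Bohr's 1914 correction can be reproduced numerically. The mass ratio below is the modern CODATA value; the two Rydberg numbers are those quoted above:

```python
# Bohr's reduced-mass correction to the Rydberg constant.
# Replacing the electron mass m_e by the reduced mass m_e*m_p/(m_e+m_p)
# divides R by (1 + m_e/m_p).
R_inf = 109737.315                  # cm^-1, Bohr's 1913 value (infinite nuclear mass)
m_e_over_m_p = 1 / 1836.15267343    # electron/proton mass ratio (CODATA)
R_H = R_inf / (1 + m_e_over_m_p)
print(f"{R_H:.2f}")                 # ~109677.58 cm^-1, matching experiment
```

The ~0.05% discrepancy is exactly the size of m_e/m_p, which is why attributing the motion to the center-of-mass system removes it.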
Applying this system view to a larger system brings a new understanding, the concept of path dependence. For example, dark matter is usually inferred only in spiral galaxies and does not seem to exist in elliptical ones. This suggests to me a connection with the system-level entropy force during galaxy formation: when a Fermi system composed mainly of protons collapses into a galaxy, its early Fermi energy generates an entropy force that counteracts gravity. The uniform stellar speeds revealed by the galaxy rotation curve appear only in spiral galaxies formed from independently collapsing interstellar clusters, which reflects the path dependence of the entropy force. Elliptical galaxies, I believe, form instead by gravitational capture of stellar material thrown off by spiral galaxies at an early stage, which is why most of their stars are old (I analyze this later); there the entropy force acts only in the inner core, and the old stars of the outer elliptical galaxy feel only gravity.
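For context, the standard form of the rotation-curve puzzle: if the visible mass were concentrated in the core, Newtonian orbits would give v = sqrt(GM/r), falling with radius, whereas measured curves stay roughly flat out to large radii. A sketch with illustrative numbers (the central mass below is an assumption chosen only to give galaxy-like speeds, not a measured value):

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
M = 1e41         # kg, illustrative central mass (~5e10 solar masses)
kpc = 3.086e19   # metres per kiloparsec

# Keplerian prediction v = sqrt(GM/r): speed falls as 1/sqrt(r)...
for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M / (r_kpc * kpc)) / 1000  # km/s
    print(f"r = {r_kpc:2d} kpc  v_kepler = {v:5.1f} km/s")
# ...whereas observed spiral-galaxy curves stay near-flat at large radii.
# That gap is the anomaly conventionally attributed to dark matter
# (or, in the author's view, to a path-dependent entropy force).
```

Doubling the radius should cut the speed by sqrt(2); observations show it barely changes, which is the whole puzzle.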
The second example: I switched to condensed matter physics in the 1990s, and in reading articles on the quantum Hall effect I found that they were all phrased in the path-integral language of quantum mechanics. Although I had studied quantum field theory, the field-theory courses of the 1980s did not teach path integrals, so I had to catch up. The path-integral description comes from a Wick rotation of the Schrödinger equation and is then equivalent to the partition function of statistical physics; under the Wick rotation, the imaginary time of quantum mechanics becomes temperature. But there is a special mathematical requirement: the energy spectrum of the Schrödinger equation must be positive definite and the quantum ground state non-degenerate, that is, the spectrum must satisfy 0 < E₀ < E₁ ≤ E₂ ≤ …. For this reason, I feel that the quantum Hall effect does not meet the conditions for a path-integral description. Many physics papers apply the mathematics blindly, without regard for this limitation of the path integral.
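The correspondence the paragraph invokes can be stated compactly. This is the standard textbook relation, not the author's notation:

```latex
% Wick rotation t \to -i\tau turns the quantum evolution operator into the
% Boltzmann weight of statistical physics, with \beta = 1/k_B T playing the
% role of imaginary time:
e^{-iHt/\hbar} \;\longrightarrow\; e^{-\beta H},
\qquad
Z = \operatorname{Tr}\, e^{-\beta H} = \sum_{n} e^{-\beta E_n}.
% For Z to converge and for the ground state to dominate as \beta \to \infty,
% the spectrum must be bounded below with a unique lowest level:
% 0 < E_0 < E_1 \le E_2 \le \cdots,
% which is the condition the text refers to; a degenerate ground state
% (as in topologically ordered phases) violates it.
```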
These two mathematical limitations led me to a further physical understanding based on system evolution. First, since any measurement reflects the behavior of the measured object relative to its center-of-mass system, the sign of the object's ground-state energy E₀ has a special meaning and cannot be made positive by a mere shift of the energy origin. For this reason, the evolution parameter and the entropy-energy coefficient proposed later in this article must be based on the absolute value of the ground-state energy. Second, understanding Wen Xiaogang's concept of topological order from the perspective of evolution shows that a system may evolve toward either a disordered-entropy or an ordered-entropy state: as it evolves, it may spontaneously adjust into a system with ground-state degeneracy and an energy gap, or it may instead lift the quantum degeneracy. A system may therefore also evolve in the mode of ordered-entropy maximization.
This further yields the concept of ordered entropy under topological degeneracy, a generalization of Wen Xiaogang's concept of topological order. Its physical meaning, briefly: ordinary spatial degeneracy reflects symmetry formed by negative-energy binding between individual systems, as gravity leads to time-reversal symmetry and chemical bonds between atoms lead to the spatial translation symmetry of crystals. Topological degeneracy, by contrast, reflects the symmetry of identical system energies in momentum space after a Fourier transform of space. The system may evolve to the aforementioned thermodynamic disordered-entropy state whose spectrum satisfies 0 < E₀ < E₁ ≤ E₂ ≤ …, or it may evolve to a state in which all individual energies are identical, or in which an energy gap sits above a degenerate ground state with tunneling within it. The latter is a state of larger Shannon information entropy, and I call it the topological degenerate state.
The evolution of a system therefore presents two coexisting types of entropy maximization: the maximization of thermodynamic disordered entropy through negative-energy binding between individuals, and the maximization of ordered entropy by positive-energy individuals maintaining full homogeneity through mutual quantum tunneling. The geomagnetic reversal model above is hampered by its unclear microscopic mechanism, but applied to condensed matter physics the same picture seems able to describe the superfluid and superconducting states: topologically degenerate quantum tunneling brings the system's ordered entropy to a maximum, as I describe in detail in the article. This further extends the seesaw model into the concept of the evolutionary platform: DNA, the genetic material of life, has both the negative-energy disordered-entropy-maximizing character of chemical bonding and the positive-energy ordered-entropy-maximizing character of quantum tunneling in some topological degenerate state (possibly involving N⁻ ions). In this way, the physical evolutionary-platform description of the living system underlies the cornerstones of "one gene, one enzyme" and the operon concept, and also supplies the physical picture of the pluripotent state.
This article explains in full the formation, since the 1980s, of the evolutionary platform idea for the complex network systems of the universe, life, and economics. I will argue that today's physics, built on the interaction of space-time and matter, should shift its paradigm to an analysis framework of evolutionary criteria. To this end, a series of concrete cases below illustrates that physics should move from the existing observational paradigm, summarizing laws from experimental observation, to state-cause thinking: every physical observation corresponds to some state in the evolution of matter, and physical law should express the reasons that state forms, as embodied in my entropy-energy criteria I and II. The evolutionary platform presents three types of regular pattern: deterministic equations where energy dominates, probabilistic patterns where entropy forces dominate, and cellular automata or other diverse pattern evolution when energy and entropy are "evenly matched".
1. The “Two Explosions” Problem: The Three States of Matter and Their Implications for the Understanding of Elementary Particles and the Evolution of the Universe
Let me start with the concept of nuclear magic numbers. Everyone knows the Mendeleev periodic table of the chemical elements, which is understood through the number of electrons outside the nucleus; but that number equals the number of protons inside the nucleus, so every atomic number also corresponds to the same count of protons within. It has been found that nuclei whose proton or neutron number is 2, 8, 20, 28, 50, 82, or 126 are especially stable; these seven numbers are called magic numbers. Since the largest atomic number found so far is 118, the magic number 126 applies only to neutrons. Nuclei in which both nucleon numbers are magic are extremely stable: oxygen has 18 known isotopes, but 16O, with 8 protons and 8 neutrons, is extremely stable, accounting for 99.8% of natural oxygen; lead has more than 40 isotopes, but 208Pb, with 82 protons and 126 neutrons, is the most stable, accounting for more than half.
There are two main existing nuclear models. The shell model is based on independent-particle motion in a mean field; its magic-number formula is k(k+1)(k+2)/3, but of the resulting sequence 2, 8, 20, 40, 70, 112, … only the first three match reality. More importantly, the formula cannot explain why the magic numbers have an upper limit, which reflects the most important saturation property of the nuclear force. In the summer after I finished the nuclear physics course that year, an idea suddenly struck me: could the physical essence of the magic numbers come from the spherical symmetry of a fully homogeneous spherical state under quantum tunneling?
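The cumulative formula quoted above, and its disagreement with experiment beyond 20, can be checked directly (this sketch only tabulates the formula; the shell assignment is the standard oscillator counting, where shell k holds k(k+1) nucleons including spin):

```python
# Cumulative filling of 3D isotropic-oscillator shells: shell k holds
# k(k+1) nucleons (with spin), so the running total is k(k+1)(k+2)/3.
def oscillator_magic(n):
    return [k * (k + 1) * (k + 2) // 3 for k in range(1, n + 1)]

print(oscillator_magic(6))      # [2, 8, 20, 40, 70, 112]
# Experimental magic numbers:     2, 8, 20, 28, 50, 82, 126
# -> only the first three agree, as the text notes; in the standard shell
#    model the higher numbers are recovered by adding spin-orbit coupling.
```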
This fantasy involves several technical physics terms whose meanings I discuss later. Here, let us consider only the simplest geometric meaning, the fully symmetric, uniformly distributed points on a sphere. The picture below shows a children's set of regular polyhedra. Geometry allows only five regular (Platonic) polyhedra: the tetrahedron, hexahedron (cube), octahedron, dodecahedron, and icosahedron. But the concept of fully symmetric equidistant points on a sphere requires two more numbers. First, the two endpoints of a diameter, like the two points at the Earth's north and south poles, are completely symmetric and should be added. Second, the midpoints of the 30 edges of the regular dodecahedron, equivalently of the regular icosahedron, should be added; in the picture these are the edge midpoints of the top regular-pentagon solid and the bottom regular-triangle solid, 30 in each case, which are also equidistant points (I realized this later). Are there any other such equipartition points? Fullerene, the famous C60, is not one, because its sphere mixes regular pentagons and hexagons.
[Figure: the five Platonic solids, from a children's polyhedron toy]
Therefore, the effective equipartition numbers on the three-dimensional sphere are only 2, 4, 6, 8, 12, 20, and 30. If these seven numbers are treated as filling layers 1 through 7, in analogy with the principal quantum number of electron filling in ordinary atoms, the magic-number filling rules are: 8 = 2+6, 20 = 2+6+12, 28 = 2+6+20, 50 = 2+6+12+30, 82 = 2+4+6+8+12+20+30. This conclusion excited me greatly, and I think it may be the correct explanation of the nuclear magic numbers. The key point is that energy-level filling inside the nucleus should be exactly opposite to electron filling outside it. Electrons and the nuclear charge attract each other, so the outer electron layers have higher energy and escape more easily; inside the nucleus there is only Coulomb repulsion, so the innermost layer has the higher energy, and the way out is quantum tunneling from the center of the innermost layer. That innermost layer is the magic number 2, and 2 protons + 2 neutrons constitute an alpha particle. Why does alpha decay usually require a nucleon number greater than 100? After constructing this magic-number model, I suddenly understood.
In addition, the above filling rules further reflect what I see as the essence of quantum homogeneity: quantum tunneling among nucleons within the same spherical layer. When the number of nucleons is small, filling usually spans more than one layer, so that same-layer tunneling is not disturbed by cross-layer Coulomb repulsion. But when the nucleon number grows large enough to form full shells, a nucleon-squeezing effect appears and every layer must be filled. The last decomposition, 82=2+4+6+8+12+20+30, reflects complete and full filling of all layers, which is surely no coincidence. One question remains: why can the largest magic number, 126, not be constructed this way? After thinking about this repeatedly, I concluded that 126 is valid only for neutrons, not protons; moreover 126−82=44, and 44/30≈30/20, which suggests that although 44 is not an effective equipartition value, it still corresponds to a peak of the collective motion of nucleons in equidistant layers, consistent with the concept of the synergetic state I propose below.
Furthermore, as early as my undergraduate years I thought that if the above reasoning were confined to nuclear filling alone, treated merely as an analogue of electron filling in the Mendeleev periodic table, its physical meaning would be underestimated. In fact, N nucleons constitute a completely symmetric Indistinguishable state, which means there are N! corresponding quantum states: relative to the count of classical individuals, this surge in the number of quantum states means an increase in the system's entropy, and an entropy force that constrains the system into a steady state, even though the corresponding system energy may not increase (a question physics has not yet studied clearly). The ordinary thermal equilibrium state formed by thermal contact is not such a steady state: heat diffuses from high temperature to low, reflecting photon absorption and emission. But at the microscopic level, the nucleons inside a nucleus, as well as stable elementary particles and even some atoms or molecules, have the homogeneous steady-state character described above. For example, α particles do not absorb photons to form excited states the way ordinary atoms do; they should belong to Indistinguishable states in the steady-state sense.
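Taking the text's premise at face value (N permutations contributing N! quantum states, hence an entropy of ln N! in units of k_B), the numbers can be sketched as follows; the Stirling comparison is my own illustrative addition, not part of the original argument:

```python
import math

def ln_factorial(n: int) -> float:
    """Exact ln(n!) via the log-gamma function."""
    return math.lgamma(n + 1)

def stirling(n: int) -> float:
    """Leading-order Stirling approximation: ln(n!) ~ n ln n - n."""
    return n * math.log(n) - n

# An alpha particle has N = 4 nucleons: ln(4!) = ln 24 ~ 3.18 (in units of k_B)
print(ln_factorial(4))

# For a heavy nucleus such as 208Pb (N = 208 nucleons), the permutation
# entropy under this premise is already ~900 k_B, and Stirling is close:
print(ln_factorial(208), stirling(208))
```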
To this end, I divide material systems into three states: the general equilibrium state, the Indistinguishable state (also called the homogeneous state), and the synergetic state (also called the cooperative state). The synergetic state lies between the equilibrium and Indistinguishable states. These three states will be the basis for the analysis of the three physics problems above. As an undergraduate I never imagined the magic-number analysis above could become a research paper, and later, after my thinking had accumulated, I still did not submit it to professional journals, for various reasons. Today I would rather "cast a brick to attract jade": express my subsequent thinking in full as popular science, deliberately pointing out which problems are valuable and which I have not yet thought through, so that young scholars can combine them with their own research and publish papers in their respective fields; that may be more meaningful. In this section I first try to analyze the first two problems, elementary particles and the stellar universe, and explain the physical concepts of renormalization, the super-energy threshold, and primordial angular momentum.
The first wrong path is taking things for granted, which is especially reflected in the understanding of elementary particles and cosmology. Physicists take it for granted that for leptons and quarks one can only analyze interactions and cannot do structural analysis. In this article I will give the expression for the nuclear magic number, 82=2+4+6+8+12+20+30, and introduce the concept of full homogeneity to argue for the steady-state nature of electronic structure. The difficulties of the electron's spatial scalelessness and its renormalization divergence come from its steady-state structure. It is likewise taken for granted that the universe began the Big Bang in a high-temperature, thermally disordered state. The primordial angular momentum hypothesis holds instead that the universe began in an ordered state of quantum gyroscopes whose vortex directions were disordered. This spontaneously leads to the flatness of the universe without an inflationary model, and galaxy rotation curves can be explained by the path dependence of entropy force without dark matter. It can also account for the origins of the three galaxy types, spiral, elliptical, and irregular, and explain the stability of stellar nuclear fusion. The dark energy problem then does not arise.
The second wrong path is that too much of today's physics research proceeds from imagination rather than from seeking physical laws in the questions raised by the research object itself. The "one gene, one enzyme" hypothesis, the central dogma, and the operon concept are called the three cornerstones of molecular biology, yet physicists have never tried to extract new physical laws from them. Early on I came to believe that the central dogma, DNA→RNA→protein, which embodies irreversibility, can be understood as a cyclic path, physically manifested as the combination of two types of phase transitions, gas-liquid and liquid-solid, with the two types of entropy maximization playing a role in gene repair. Long ago I also proposed the concept of bifurcated paths for the other two cornerstones, but it was not until I learned of Tang Chao's seesaw model in 2013 that I felt I could combine it with Wen Xiaogang's concept of topological order to describe an evolutionary platform based on self-organized criticality. For the complex network platform of economic evolution, the concepts of energy, entropy, and temperature can likewise be introduced to derive the power-law distributions of economic systems.
The third wrong path is that physics research ignores entropy-force analysis at the level of the system. Landau's theory of superfluid liquid helium holds that the entropy of the superfluid component is 0; how then can it explain the Rollin film? After the λ transition, liquid helium in an open container climbs over the rim of the bowl and drips out, which amounts to doing work from a single heat source in violation of the second law of thermodynamics. This paper proposes an ordered-entropy explanation for liquid helium. The Cooper pairs of BCS theory are electron pairs with opposite momenta: how can their merely passing each other lower the system's energy? And if the energies of all Cooper pairs are not strictly equal, how can the never-decaying superconducting current be explained? I believe superconductivity must come from all Cooper pairs having identical energy, and that the quantum tunneling of the topological degenerate state confined to the superconductor's two-dimensional surface must likewise be driven by ordered entropy, so that the magnetic field is expelled from the superconductor to form the Meissner effect. Ordered entropy is reflected in three types of self-organized macroscopic quantum effects: the cooperative tunneling state, which comes from the geomagnetic reversal model proposed to explain global warming; the quantum entangled state, a special manifestation of identical states; and the topological degenerate state, an extension of the concept of topological order.
The fourth wrong path is reflected in the fact that today's science starts only from individual associations or interactions and has not built a platform concept. People first try to give deterministic associations between concepts, such as F=ma and E=mc², which are called laws. If a deterministic association cannot be given, a probabilistic relationship is constructed, such as the Schrödinger equation or a statistical distribution. If none of these works, the relationship is deemed uncertain, as in the uncertainty relation. But physicists have not yet developed a system-evolution platform way of thinking: computer science has evolved from the 0-and-1 machine-language platform opened up by von Neumann, to operating-system platforms, and on to today's Internet and AI platforms. If physical laws are to be built on an evolutionary platform, we must first ask whether the existing scientific thinking system needs a paradigm shift. Describing the complex networks of cosmic expansion, life evolution, and economic development with an evolutionary platform reveals their common characteristics of bipolarity, diversity, and criticality.
Introduction: The origins of the three problems in physics and the consequences of forming an evolutionary platform
After the college entrance examination in 1978, a popular science book called From One to Infinity (the Chinese translation of Gamow's One Two Three... Infinity) changed my life choice. I had originally wanted to study mathematics in college, because Xu Chi's reportage "Goldbach's Conjecture" had shocked all of China that year. The conjecture is considered a "pearl" in the crown of mathematics: the Chinese mathematician Chen Jingrun had proved "1+2", but the final "1+1" was still "one step away". That year many candidates wanted to pick this "pearl", and I was one of them. But this book by Gamow, newly translated and published that year, made me feel that the difficulties of physics were far more meaningful than Goldbach's conjecture. After reading it, I summarized the difficulties of physics as "one surprise and three explosions", involving the smallest elementary particles, the largest celestial bodies, the stars, and the vitality and evolution of life. I therefore chose to apply to the Department of Modern Physics of the University of Science and Technology of China.
Let me talk about the first explosion problem. In From One to Infinity, Gamow says that the structure of matter bottoms out in only "three entities" that are no longer divisible: nucleons, electrons, and neutrinos. This is "getting to the bottom of the matter". A translator's note adds that when Gamow wrote the book he did not yet know of the later quark model. But I felt at the time that even if there are more basic quarks, whether matter is infinitely divisible does not really matter: since, as the book says, mass and energy can be converted into each other, then no matter how great the energy, it may never "split" a most basic particle in two, but only blast out new whole particles. The more fundamental problem is that basic particles such as the electron and proton are composed of like charges, and like charges repel: that is a basic law of nature. Why does this law fail once we "get to the bottom of the matter", so that mutually repelling parts can hold together without exploding?
Another difficulty: the book says stars explode in their "twilight years", which puzzled me. Surely an object should explode when its fuel is plentiful, not in its twilight years. Looking up at the sky, the nuclear fusion of the Sun and the stars shines steadily and does not explode. The book also says that the early solar system was not stable and may have suffered a large collision with interstellar matter, so that nucleosynthesis material was thrown out at the onset of solar fusion, later forming the planetary system including our Earth. This surprised me even more: it seems to say that the Sun exploded once in its early days but did not explode completely, and thus our Earth was formed. I felt that once the Sun's fusion was switched on, it should explode in a chain reaction, like an atomic bomb detonating a hydrogen bomb. Why did the early Sun not explode completely, and why does it not explode now, when its fuel is most plentiful, but must wait for its "twilight years" for a final explosion?
The "shock and explosion" of life phenomena came from an earlier puzzlement of mine: China reported its scientists' synthesis of bovine insulin as "artificially synthesized life". A teacher's lecture surprised me: if even one amino acid of synthetic bovine insulin is "docked" incorrectly, there is no biological vitality and no reaction when it is injected into mice; only when every docking is accurate do the mice convulse after injection. The "Mystery of Life" chapter in From One to Infinity describes the third explosion. Discussing isomers of biological molecules, Gamow first gives the example of the α, β, and γ configurations of TNT explosive. This made me wonder whether all biological molecules contain N⁻ ions as TNT does, so that the vitality of life might be related to the explosive power of TNT, and to the convulsions of the experimental mice. My analysis of the physical mechanism of life later in this article stems from the "shock and explosion" that these mice and TNT brought me.
In the 1990s I envisioned using a geomagnetic reversal model to explain global warming, but could not win the approval of my doctoral supervisor, Academician Pu Fuke, which became a pain point of my life. Later, two Chinese academicians, Tang Chao of the Chinese Academy of Sciences, who proposed the seesaw model, and Wen Xiaogang, a member of the US National Academy of Sciences, who proposed the concept of topological order, made me feel that the geomagnetic reversal model could be revived. So in 2013 I resumed physics research. This article traces my thinking from the three childhood difficulties above to today. The breakthrough came from the magic-number expression I found while studying nuclear physics as an undergraduate, 82=2+4+6+8+12+20+30, discussed in the first section below. It brings in the concepts of the Indistinguishable state and the synergetic state, as distinct from the equilibrium state. My alternative physics thinking has since been based on these three types of states and finally formed the idea of the evolutionary platform, which also comes from combining the seesaw and topological-order concepts above.
The above is how the content of this article came about. As an introduction, I will briefly describe the consequences of the evolutionary-platform idea so that readers can follow the article more smoothly. A piece of scientific news in 2013 once again changed the trajectory of my life: to "understand the mutual inhibition and mutual balance between mesoderm genes and ectoderm genes during reprogramming", Professor Tang Chao and two biology research groups at Peking University proposed a seesaw model that may determine the maintenance and change of cell fate. The work was published in the top biology journal Cell (Shu et al., Cell 153 (2013) 963), and the cover of that issue is a schematic of the seesaw model.
Before discussing the geomagnetic reversal model in detail, I should explain that Tang Chao also co-proposed the very famous BTW model in 1987 (the T stands for Tang Chao, the B and W for Bak and Wiesenfeld). I will analyze this model later; here I only explain the meaning of its concept of self-organized criticality. It is reflected in the evolution of non-equilibrium systems, which are often driven very slowly to a critical state exhibiting spontaneous scaling in time and space, unlike physics' earlier understanding: the renormalization scaling people had previously constructed was artificial rather than spontaneous. This is why I had wondered in my early years why the N and S poles of the Earth's magnetic field reverse once every tens or hundreds of thousands of years. The physical mechanism is still unclear, but might it also reflect the temporal scaling of self-organization under slow driving?
Moreover, geomagnetic reversal has another feature, which can be found on Wikipedia: 183 reversals have occurred in the past 83 million years, an average of one every 450,000 years, yet each reversal transition itself lasts only 2,000 to 12,000 years; most of the time the Earth's field stays stably in one polarity. I think this rules out a classical motion. If geomagnetic reversal came from some slow classical rotation under a special charge distribution in the core, the geomagnetic intensity would show continuous periodic fluctuation. Geomagnetic reversal clearly has the character of a discontinuous macroscopic quantum effect on a large time scale: I conjecture that the Earth itself constitutes a quantum potential barrier, and the magnetic field tunnels between the north and south poles, just as in ammonia (NH3) the N⁻ ion tunnels back and forth through the plane of the three H⁺ ions. This forms a quantum steady state, discussed in detail later.
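The figures quoted above are internally consistent, as a one-line check shows (the numbers are exactly those cited from Wikipedia in the text):

```python
# Figures quoted from Wikipedia in the text
span_years = 83_000_000   # time span covered by the reversal record
reversals = 183           # number of recorded reversals

avg_interval = span_years / reversals
print(f"average interval: {avg_interval:,.0f} years")  # ~453,552, i.e. ~450,000

# Each transition itself takes only 2,000-12,000 years, at most a few percent
# of the average interval -- the field is indeed stable most of the time.
assert 12_000 / avg_interval < 0.03
```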
Furthermore, the cycle of global warming is not synchronized with the geomagnetic reversal cycle; the former is much longer. My understanding is that the continual reversal of the Earth's magnetic poles itself constitutes a steady state, which may also be called a topological steady state. In his early years Wen Xiaogang also called the concept of topological order a "topological steady state", though the term has been removed from the current Wikipedia entry on topological order. This means that as long as the geomagnetic reversal process is maintained and does not destabilize, the Earth's temperature will not rise. Here we can draw an analogy with the seesaw model: is the steady state of the magnetic poles reversing back and forth comparable to the high-energy state among the model's multiple potential states? Once that steady state destabilizes, the seesaw is tipped by some induced force; in geomagnetic terms, the steady state collapses and the Earth warms. The geomagnetic reversal model thus combines the self-organized criticality and seesaw model of Tang Chao with Wen Xiaogang's concept of a topological-order steady state.
The core of this view is that the periodic reversal of the Earth's magnetic poles is itself a macroscopic quantum steady state analogous to the seesaw model; such a steady state should likewise be maintained by two "evenly matched" forces. When I shared the idea with my mentor and fellow students at the Institute of Physics, Chinese Academy of Sciences, their first reaction was that treating the entire Earth as a macroscopic quantum effect was unreliable. But why does no one question the macroscopic quantum behavior of superconductors? And given the comparable seesaw model at the cellular level, is the idea really so outrageous? In fact, the field behavior involves only the reorientation of individual quantum magnetic moments; I regard it as their cooperative tunneling behavior, similar to the self-organized synchronization of photon frequencies in a laser. The macroscopic evolution of geomagnetic reversal also reflects self-organized criticality: after destabilization the energy converts into heat, warming the Earth, and the subsequent evolution is the slow re-driving of the Earth's field and the re-formation of the reversal cycle.
However, since the above geomagnetic reversal model has no supporting microscopic physical picture, I cannot "hold my ground" with this model alone; instead it led me to the idea of the evolutionary platform. The evolution of the universe, of life, and of economic systems in fact all have platform characteristics, much as computer science evolved from the machine-language platform through the operating-system platform to today's AI platform: under the norms of a platform framework, a system's evolution becomes ever more complex. In this article I also construct a series of concepts, evolution parameters, cyclic paths, bifurcated paths, spatial and topological degeneracy, and bipolarity, to illustrate such an evolutionary platform. This admittedly premature idea cannot and will not be written up as a standard journal submission; I only want to offer ideas to young scholars in the form of an online article. As an introduction, let me first explain the basis of the evolutionary-platform concept, the system view, through the following two examples.
The first example: when I was teaching at university I had to give an open class on the hydrogen atom model and looked up much material on the history of physics. I learned that in 1913 Bohr's hydrogen atom model first gave the Rydberg constant as R = 109737.315 cm⁻¹, about 0.05% off the experimental value of 109677.58 cm⁻¹. In 1914 Bohr therefore revised the model to the center-of-mass frame, replacing the electron mass with the reduced mass of the electron-proton system, which eliminated the error. This example taught me that any measurement reflects the behavior of the measured object relative to its center-of-mass system: the light emitted by the moving electron reflects not its motion as an individual relative to our instrument but the overall behavior of its center-of-mass system. This simple example led me to the system-based concept of path dependence, which today's physics lacks.
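Bohr's 1914 correction can be reproduced numerically: replacing the electron mass m_e by the reduced mass m_e/(1 + m_e/m_p) rescales the Rydberg constant by the same factor. The mass ratio 1836.15 below is the modern value, used here purely for illustration:

```python
R_inf = 109737.315    # cm^-1, Bohr's 1913 value (infinitely heavy nucleus)
R_exp = 109677.58     # cm^-1, the experimental value quoted in the text
mass_ratio = 1836.15  # proton mass / electron mass (modern value)

# Reduced-mass correction: mu = m_e / (1 + m_e/m_p), so R_H = R_inf * mu/m_e
R_H = R_inf / (1 + 1 / mass_ratio)
print(f"R_H = {R_H:.2f} cm^-1")   # ~109677.6, matching experiment

assert abs(R_H - R_exp) < 0.5
```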
Applying this system view to larger systems brings a new understanding, the concept of path dependence. For example, dark matter is usually inferred only in spiral galaxies and seems absent from elliptical galaxies. This makes me think it may be related to entropy force, in the system sense, during galaxy formation: when the Fermi system composed mainly of protons forms a galaxy, its early Fermi energy generates an entropy force that counteracts gravity. The roughly constant stellar orbital speeds revealed by galaxy rotation curves exist only in spiral galaxies formed independently from interstellar clusters, reflecting the path dependence of the entropy force. Elliptical galaxies, I believe, form instead by the attraction of groups of stellar matter thrown off by spiral galaxies in their early stages, which is why most stars in elliptical galaxies are old (I will analyze this later), and why the entropy force acts only in the inner core while the old stars in the outskirts of an elliptical galaxy feel only gravity.
The second example: I switched to condensed matter physics in the 1990s and found that the papers on the quantum Hall effect I was reading were all described with the path integral of quantum mechanics. Although I had studied quantum field theory, the field theory courses of the 1980s did not teach path integrals, so I had to catch up. The path integral description comes from a Wick rotation of the Schrödinger equation, which makes it equivalent to the partition function of statistical physics: imaginary time in quantum mechanics becomes temperature after the Wick rotation. But there is a special mathematical requirement: the energy spectrum of the Schrödinger equation must be positive definite and the quantum ground state non-degenerate, that is, the spectrum must satisfy 0 < E₀ < E₁ ≤ E₂ ≤ …. For this reason, I felt that the quantum Hall effect does not seem to meet these conditions for a path integral description. Many physics papers use the mathematics blindly, without regard for these limitations of the path integral.
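The correspondence referred to above is standard and can be written compactly (this is a textbook sketch of the Wick rotation, not the text's own derivation):

```latex
e^{-iHt/\hbar} \;\xrightarrow{\;t \,\to\, -i\tau\;}\; e^{-H\tau/\hbar},
\qquad \tau = \hbar\beta = \frac{\hbar}{k_B T},
\qquad Z = \operatorname{Tr} e^{-\beta H} = \sum_n e^{-\beta E_n}.
% For the Euclidean path integral to single out a unique ground state as
% \beta \to \infty, the spectrum must satisfy 0 < E_0 < E_1 \le E_2 \le \cdots
```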
These two mathematical limitations led me further to a physical understanding based on system evolution. First, since any measurement must be reflected as the behavior of the measured object relative to its center-of-mass system, the sign of the ground-state energy E₀ of the measured object has special meaning and cannot be made positive by a mere energy shift. For this reason, the evolution parameter and the entropy-energy coefficient proposed later in this article must be based on the absolute value of the ground-state energy. Second, understanding Wen Xiaogang's concept of topological order from the perspective of evolution actually reflects that a system may evolve toward either a disordered-entropy or an ordered-entropy state: as it evolves, it may spontaneously adjust into a system with ground-state degeneracy and an energy gap, or it may instead split off the quantum degenerate system. This reflects that a system may also evolve in the mode of ordered-entropy maximization.
This further yields the concept of ordered entropy under topological degeneracy, a generalization of Wen Xiaogang's concept of topological order. Its physical meaning, briefly: ordinary spatial degeneracy reflects symmetries formed by negative-energy binding between individual subsystems, as gravity leads to time-reversal symmetry and chemical bonds between atoms lead to the spatial translation symmetry of crystals. Topological degeneracy, by contrast, reflects the symmetry of identical system energies in momentum space after a Fourier transform of space: the system may evolve to the aforementioned thermodynamic disordered-entropy state whose spectrum satisfies 0 < E₀ < E₁ ≤ E₂ ≤ …, or to a state in which all individual energies are identical, or in which there is an energy gap above a degenerate, tunneling ground state. The latter is a state of larger Shannon information entropy, and I call it the topological degenerate state.
The evolution of a system thus presents the coexistence of two types of entropy maximization: the maximization of thermodynamic disordered entropy brought about by negative-energy binding between individuals, and the maximization of ordered entropy by positive-energy individuals maintaining full homogeneity through mutual quantum tunneling. The geomagnetic reversal model above is hampered by its unclear microscopic mechanism, but applied to condensed matter physics this picture seems able to describe the superfluid and superconducting states: topologically degenerate quantum tunneling maximizes the system's ordered entropy, as I describe in detail in the article. This further extends the seesaw model into the concept of the evolutionary platform: DNA, the genetic material of life, has both the character of negative-energy disordered-entropy maximization, linked by chemical bonds, and the character of positive-energy ordered-entropy maximization, composed of quantum tunneling of some topological degenerate state (possibly arising from N⁻ ions). In this way the physical evolutionary-platform description of living systems rests on the cornerstones of "one gene, one enzyme" and the operon concept, and also reflects the physical picture of the pluripotent state. This article comprehensively explains how, since the 1980s, I have formed the evolutionary-platform idea for the complex network systems of the universe, life, and economics. I will argue that today's physics, based on the interaction of space-time and matter, should shift its paradigm to an evolutionary-criterion analysis framework.
To this end, I will use a series of specific cases below to show that physics should shift from its existing observational paradigm, summarizing laws from experimental observation, to state-reason thinking: every physical observation corresponds to some state of a material evolution process, and physical laws should be the reasons those states form, as expressed in my entropy-energy criteria I and II. The evolutionary platform presents three types of regularity: deterministic equations dominated by energy, probabilistic patterns dominated by entropy force, and cellular automata or other diverse pattern evolutions when energy and entropy are "evenly matched".

1. The "Two Explosions" Problem: The Three States of Matter and Their Implications for Understanding Elementary Particles and the Evolution of the Universe

Let me start with the concept of nuclear magic numbers. Everyone knows the Mendeleev periodic table of the chemical elements, understood through the number of electrons outside the nucleus; that number equals the number of protons inside the nucleus, so every atomic number also corresponds to the same number of protons. It has been found that if the number of protons or neutrons in a nucleus is 2, 8, 20, 28, 50, 82, or 126, the nucleus is relatively stable; these seven numbers are called magic numbers. The largest atomic number found so far is 118, so 126 as a magic number applies only to neutrons. Nuclei in which both nucleon numbers are magic are extremely stable: oxygen has 18 isotopes, but 16O, with 8 protons and 8 neutrons, is extremely stable, accounting for 99.8% of natural oxygen; lead also has more than 40 isotopes, but 208Pb, with 82 protons and 126 neutrons, is the most stable, accounting for more than half.
There are two main existing nuclear models. The shell model is based on independent particles moving in a mean field; its magic-number formula is k(k+1)(k+2)/3, but of the predicted sequence 2, 8, 20, 40, 70, 112... only the first three agree with reality. More importantly, this formula cannot explain why the magic numbers have an upper limit, which reflects the most important property of the nuclear force, its saturation. In the summer after I finished the nuclear physics course that year, an idea suddenly struck me: could the physical essence of the magic numbers come from the spherical symmetry of a fully indistinguishable spherical state under quantum tunneling? This fantasy involves several technical physics terms whose meanings I will discuss later. Here let us consider only its simplest geometric meaning: points distributed on a sphere with full symmetry. The picture below is from a regular-polyhedron toy in a kindergarten. In geometry there are only five regular polyhedra, the Platonic solids: the tetrahedron, hexahedron (cube), octahedron, dodecahedron and icosahedron. But the notion of fully symmetric, equidistant points on a sphere requires two more numbers. First, the two endpoints of a diameter of the sphere, such as the north and south poles of the Earth, possess complete symmetry and should be added. Second, the midpoints of the 30 edges of the regular dodecahedron, or equally of the regular icosahedron, should be added: the midpoints of the edges of the top regular pentagons and the bottom regular triangles in the picture below. Both counts are 30, and these are also equidistant points (I realized this later). Are there any other such fully symmetric point sets? The famous fullerene molecule, C60, is not one, because its sphere contains both regular pentagons and hexagons.
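As a quick numerical check of the statements above, a short sketch: the oscillator formula k(k+1)(k+2)/3 reproduces only the first three observed magic numbers, and the counts of the fully symmetric point sets on a sphere (2 poles, the 4/6/8/12/20 Platonic vertices, 30 edge midpoints) indeed sum to 82.

```python
# Shell-model (harmonic oscillator) magic-number formula vs. observed magic numbers,
# plus the decomposition 82 = 2 + 4 + 6 + 8 + 12 + 20 + 30 discussed in the text.

def oscillator_magic(k_max):
    return [k * (k + 1) * (k + 2) // 3 for k in range(1, k_max + 1)]

observed = [2, 8, 20, 28, 50, 82, 126]
predicted = oscillator_magic(6)
print(predicted)                                 # [2, 8, 20, 40, 70, 112]
print([p for p in predicted if p in observed])   # [2, 8, 20] -- only the first three agree

# Counts of fully symmetric point sets on a sphere: poles, Platonic vertices, edge midpoints.
symmetric_point_counts = [2, 4, 6, 8, 12, 20, 30]
print(sum(symmetric_point_counts))               # 82
```

This only verifies the arithmetic of the two formulas; whether the geometric decomposition has physical meaning is, of course, the author's conjecture.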
1.1 Understanding the Indistinguishable state of elementary particle stability and the synergetic state of stellar nuclear fusion
Let us first discuss the problem of elementary particles not exploding. Why doesn't the electron, as an elementary particle, explode under its own Coulomb repulsion? Could the reason be similar to the indistinguishability analysis of the nucleus above, arising from the exchange symmetry and indistinguishability of the "particles" that make up the system? The usual answer of quantum field theory is that electrons should be described by quantum fields, and below I will discuss its very successful calculation of the Landé factor. However, there is still no good physical understanding of renormalization, the subtraction of two infinities, which is still regarded as a mathematical technique. Having formed the concept of indistinguishability under the magic-number idea above, I will first use it to interpret the renormalization problem. Imagine that the charge e of an electron is composed of N charged-gas "particles", each carrying charge e/N. Could it be that the indistinguishability entropy force of the "particles" in the limit N → ∞ constrains the electron system, so that no explosion occurs?
For readers unfamiliar with quantum field theory, a brief background. In 1928 the Dirac equation was proposed to describe particles with spin 1/2. It was later found to be suitable only for electrons; it cannot describe protons, which have internal structure. In the 1940s quantum field theory was developed, describing electrons by Dirac fields, so that the electron appears as a quantum field without spatial structure. For electrons, Gamow's earlier statement is correct: they are truly elementary, indivisible particles, whereas nucleons, both protons and neutrons, are composed of more basic quarks. The most celebrated result of quantum field theory is the calculation of the Landé factor, whose theoretical value, 2.002319304402, agrees with the experimental value, 2.002319304376, to 10 significant figures. For this reason the theory is recognized as correct. But two questions remain. First, although the electron's lack of spatial structure has been verified experimentally, it still needs a physical understanding. Second, the renormalization calculation of quantum field theory involves subtracting two infinities, which also requires a physical explanation.
On the first question, experiment proves that electrons have no spatial structure. This work was first carried out by the Chinese-American physicist Samuel Ting in the 1960s. As mentioned above, if the electron is regarded as composed of classical charged-gas "particles", then by setting the energy of the electron's rest mass equal to its Coulomb self-repulsion energy, the classical electron radius can be calculated to be 2.82 × 10^-15 m. If quantum field theory correctly describes the electron, the electron must have no spatial scale, and its radius must at least be smaller than this classical radius. Ting's experiment gave a rigorous measurement: the electron radius does not exceed 10^-16 m. Experiments in the 1980s tightened this upper limit to 10^-22 m. This shows that quantum field theory is entirely correct in treating the electron as a field without spatial structure; but if it is not a classical "particle", what is it physically? There has been no physical explanation so far.
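The classical radius quoted above follows from equating the Coulomb self-energy e²/(4πε₀r) to the rest energy mₑc² and solving for r; a minimal check with CODATA constants:

```python
# Classical electron radius: set the electrostatic self-energy e^2/(4*pi*eps0*r)
# equal to the rest energy m_e * c^2 and solve for r.
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s

r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"{r_classical:.3e} m")   # 2.818e-15 m, matching the value in the text
```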
The second problem, renormalization, has had an even greater impact on today's physics and has hardened into the demand that any physical theory must be renormalizable. Because, mathematically, increasing the dimension of physical space can make a theory renormalizable, superstring theory has to be built in 10- or 11-dimensional space. My early idea was different from that of today's masters. I tried to use the principle that all indistinguishable states form a steady state to explain, at the same time, both problems: why the electron has no spatial structure and why renormalization works. Today's physics starts from the view that every force must be mediated by the exchange of virtual particles; I will not popularize that viewpoint here. My perspective is that the motion of matter must settle into a steady state under any interaction force. Why the Earth orbits the Sun and why the electron does not explode seem to be two unrelated questions, but both reflect that motion tends toward equilibrium or a steady state, and I will explain the two electron questions by analogy with the first.
Let us then consider the Earth's revolution around the Sun. As a star, why did the Sun eject matter to form the Earth and the other planets? It could have chosen not to eject it and instead spin faster: the Sun's rotation period is about one month, leaving plenty of room to speed up. It could also have ejected more matter and formed a binary star, as roughly half the stars in the universe have done. Yet the solar system formed an eight-planet system, which is rare. Whether this was caused by collisions with other interstellar matter in the Sun's early days remains conjecture. But the formation of the solar system can be understood as a negative-energy system, that is, through the principle of least action with negative interaction energy. Let T be the total kinetic energy of the solar system, including the Sun's rotation and the kinetic energy of all the planets, and let V be the total gravitational potential energy of the system, which is negative. Then T − V is the Lagrangian of the solar system, and the principle of least action says its time integral, the action, should be kept to a minimum during the evolution.
The reason the Sun did not evolve toward faster rotation or a binary star is the minimization of this action, which produced the special planetary structure of the solar system. The gravitational energy between any two bodies is negative. If evolution pushes them farther apart, the gravitational energy V rises toward zero, and the total kinetic energy T of the relative motion falls. The relationship between the spatial scale and the total kinetic energy of a gravitating system is therefore a trade-off: when the spatial scale grows, the total kinetic energy falls and gravity pulls the system back to a smaller scale; conversely, when the spatial scale shrinks, the increased kinetic energy overcomes the gravitational energy and re-expands the system. This is a process tending toward a steady state, exactly analogous to the thermal expansion and contraction of ordinary matter: the molecules of ordinary matter also attract one another, so heating increases molecular kinetic energy and causes expansion, while cooling lets the negative intermolecular attraction shrink the object.
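The steady state described above is, for a circular orbit, exactly the virial balance: with v² = GM/r, the kinetic energy is precisely half the magnitude of the (negative) potential energy, so 2T + V = 0. A two-line check in illustrative units (GM = m = 1; the relation is scale-independent):

```python
# Virial balance of a circular gravitational orbit: 2T + V = 0.
def orbit_energies(GM, m, r):
    v2 = GM / r              # circular-orbit speed squared
    T = 0.5 * m * v2         # kinetic energy
    V = -GM * m / r          # gravitational potential energy (negative)
    return T, V

T, V = orbit_energies(GM=1.0, m=1.0, r=3.7)   # r is arbitrary
print(2 * T + V)                              # 0.0 -- the virial relation
```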
For the "particle" system of the electron's charge gas, however, the situation resembles the nuclear-force system: both have positive mutual repulsion energy V. For a classical system, the more the spatial scale is compressed, the more the total kinetic energy T of the "particles" increases, but the total Coulomb repulsion energy V increases faster, which inevitably disintegrates the system; no steady state can form. Even if such a repulsive classical system were "forced" together by an external agent, it would exhibit the explosion problem described above. But suppose the electron is composed of charge "particles" with quantum identity in momentum space. They then possess exchange symmetry and must form N! identical quantum states. Would this not greatly increase the total quantum kinetic energy T of the system, so that the total kinetic energy becomes comparable to the Coulomb repulsion energy V? In my early years I took the quantum field theory course in the five-year undergraduate program at USTC. While studying it I was often distracted by the thought that the electron's steady state likely comes from the convergence of these two energies, T and V, through the Lagrangian principle of least action.
Specifically, I assumed the electron to be made of charged-gas "particles" forming a special spatial structure, a two-dimensional ring-shaped quantum indistinguishable state. Such a structure could also account for electron spin; in today's language it is equivalent to a topological "doughnut", though I did not have that concept at the time. I reasoned that since the principle of least action minimizes T − V, the charged-gas "particles" should shrink toward a spatial singularity as far as possible, so that the larger V becomes, the more T − V is minimized. This would not only resolve the puzzle of why the electron does not explode, but also answer the two questions above: the physical reason for the electron's lack of spatial scale, and for renormalization, is that T and V of the electron system tend to infinity together. Mathematical analysis suggests that both diverge logarithmically: renormalization may be related to the divergence of ln N! ≈ N ln N, and the Coulomb repulsion energy in two-dimensional space is likewise logarithmic.
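The claimed link to the growth of ln N! can be looked at numerically: by Stirling's formula ln N! ≈ N ln N − N, so the ratio ln N! / (N ln N) creeps toward 1 as N grows. A short check using the log-gamma function:

```python
# Stirling's approximation: ln N! ~ N ln N to leading order.
import math

for N in (10, 1_000, 100_000):
    ln_fact = math.lgamma(N + 1)    # ln N! via the log-gamma function
    leading = N * math.log(N)       # leading Stirling term N ln N
    print(N, ln_fact / leading)     # ratio rises toward 1 as N grows
```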
The above is what I attempted as an undergraduate. I believe the calculation has the potential to yield the relationship between the electron's charge, spin and mass. Furthermore, this physical picture carries two additional meanings. First, the structure of the electron should be understood in momentum space: the electron is ordered in momentum space but disordered in the coordinate space that humans observe, just as an electron gas is disordered in coordinate space yet forms an ordered Fermi sphere in momentum space. Second, the infinitely divergent background energy of quantum field theory is not a problem. The electron's T − V is more fundamental than its total energy T + V; physical measurement should rest on the action, which is Lorentz invariant, rather than on the total energy of the system. This has a meaning similar to the string-net condensation theory proposed by Xiao-Gang Wen, which I learned of many years later: all fundamental particles in the universe are "excited states" above a divergent "ground state".
On further thought, however, I found it too difficult. To obtain a result comparable to quantum-field-theoretic renormalization from this analysis, and to prove that the Landé factor reflects the difference between some divergent "excited state" and the "ground state", there were far too many mathematical problems to solve, well beyond my ability. So I had to shelve this budding idea, and more than 40 years have now passed since then. I hope that writing it down today can still inspire someone. The more important reason I did not continue, though, is that I was never very interested in the renormalization of elementary particles: important as it is, it is ultimately a purely mathematical problem. My interest quickly shifted to the problem of stellar explosion, which has stronger physical meaning. But before continuing with the stellar problem, a few more words.
I am not optimistic about the prospect of a theory unifying the four fundamental forces. The strong nuclear force and gravity may both be entropy forces of two kinds: the former arising from the quantum indistinguishability of Coulomb repulsion, the latter proposed by Verlinde around 2010, though his version still differs somewhat from my understanding above. I personally believe that after the electroweak unification there is no need to keep pursuing a grand unified theory; instead we must give the entropy force a more precise physical understanding and mathematical description. Furthermore, I agree with Wheeler's view that the laws of physics originate in the birth of the universe. The starting point of a unified physics is therefore the beginning of the universe: "God" set the initial state for cosmic evolution, which must embody an evolutionary platform based on time and space, and the laws of physics are embodied so as to make the subsequent evolution of the universe as rich as possible. The structure of the electron and the evolution of the universe are both the conceptual basis of the evolution parameters I will propose later.
Now to the second explosion problem, that of stars. When I was at USTC we visited the Institute of Plasma Physics on Science Island in the western suburbs of Hefei and saw the Tokamak device of its nuclear fusion experiment. At the time we cared only about the ignition temperature, which was then two orders of magnitude short of 100 million degrees. But the researcher introducing the device said that reaching ignition was not the hard part; the hardest thing was controlling the energy output after ignition. This made me think that artificial fusion is essentially different from solar fusion: artificial fusion is so difficult to control, yet the stars in the sky, including the Sun, have maintained stable fusion for billions of years. The difference is not in the nuclear reaction process itself. The raw material for artificial fusion is deuterium, while in the Sun, whose matter is about 70% hydrogen, energy is produced by the cascade fusion of 4 protons. The most essential difference may be that solar protons resemble the non-magic-number nucleons in collective motion inside a nucleus, forming a synergetic state. To pursue this, I will now complete the meaning of the synergetic state left unexplained above and introduce the concept of the super-energy threshold.
First, the synergetic-state character of protons in the Sun. As noted above, the synergetic state lies between the thermal equilibrium state and the indistinguishable state. The thermal equilibrium state arises from disordered collisions between individuals, forming a classical or quantum statistical distribution in which individuals are spread over different energies. I believe the statistical distribution of proton motion in the Sun is a synergetic state. It too arises from collisions between protons, but the strong mutual repulsion between them differs from the collisions of ordinary molecular gases that produce a classical statistical distribution. The result is a normal distribution: the energies of all the protons fluctuate around a certain mean, the fluctuations also fed by collisions with other particles such as electrons and He nuclei in the Sun. The protons in a synergetic state therefore have a temperature character, unlike the indistinguishable state under quantum tunneling, for which temperature has no meaning. I also regard the non-magic-number protons in a nucleus as a synergetic state, but its physical picture differs from that of the many-body synergetic state of protons in the Sun, so I will not elaborate here.
What is the physical essence of the synergetic state? It lies in the fact that the statistical distribution of a many-particle system maximizes information entropy under two types of constraint: if only the mean energy is constrained, the distribution is an exponential decay in energy; if a constraint on the fluctuation about the mean is added, it is a normal distribution. I will give the mathematical description in the later section on the entropy-energy criteria. With this picture, I can state the concept of the super-energy threshold: the mean energy of the solar protons, normally distributed in their synergetic state, stays at a value exceeding the threshold of the fusion reaction, and this explains the stellar problem. If the nuclear reactions inside the Sun run too hot, the protons' kinetic energy grows, pushing the mean energy further above the fusion threshold, which reduces the intensity of fusion. Reduced fusion cools the star, lowering the protons' average kinetic energy, which in turn strengthens fusion. This negative feedback lets a star hold a steady state for billions of years, while at the end of its life the overall mean reaches the fusion threshold, resulting in a great explosion.
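The negative-feedback loop just described can be sketched as a toy relaxation model. The functional forms below (heating assumed to fall as the mean energy rises past the threshold, losses assumed to grow linearly) are my own illustrative choices, not the author's equations; the point is only that such feedback drives the mean energy to a steady state instead of running away.

```python
# Toy negative-feedback model in the spirit of the super-energy-threshold argument.
def step(E, dt=0.01, threshold=1.0, heat0=1.0, loss=1.0):
    heating = heat0 / (1.0 + (E - threshold))  # assumed: heating falls as E exceeds the threshold
    cooling = loss * E                         # assumed: losses grow with mean energy
    return E + dt * (heating - cooling)

E = 1.5                     # start above the threshold
for _ in range(10_000):
    E = step(E)
print(round(E, 4))          # 1.0 -- relaxes to a steady state at the threshold
```

With these parameters the fixed point of dE/dt = 1/E − E is E = 1, and the feedback is stable; a Tokamak driven from low energy upward has no such restoring term, which is the contrast drawn in the next paragraph.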
This super-energy-threshold understanding of stellar physics also explains why artificial fusion is hard to stabilize: the Tokamak's ignition is driven from low energy up to high energy, so the stabilizing mechanism above does not exist. But is this physical understanding correct? Two hurdles remained, raised by my supervisor, Professor Wu Dandi, when I was doing my undergraduate thesis. Before the specific doubts, a few words of background. When I graduated from college I applied for a postgraduate position in solar physics at the Department of Astronomy of Nanjing University, motivated by exactly this question of why solar fusion is so stable while artificial fusion is so hard to control. Later, when I was assigned to the Institute of High Energy Physics of the Chinese Academy of Sciences for my undergraduate thesis, Professor Wu asked what research direction interested me, and I said stellar physics; he said I could choose it for my thesis. The idea of the super-energy threshold took shape during the physical thinking of that thesis work.
Professor Wu raised two objections to the super-energy threshold: it seems inconsistent with the energy of the electrons emitting light at the Sun's surface, and inconsistent with the course of cosmic evolution. First, the surface temperature of the Sun is about 5,000 degrees, meaning the energy-level differences of the emitting electrons are of that order, a few eV. By the usual understanding of statistical equilibrium, the protons and electrons at the solar surface are in thermal equilibrium; how then could the proton energy at the Sun's center be so much greater than at the surface? The surface protons should have roughly the electrons' temperature, and the proton energy should rise gradually inward, so that at some radius it would exactly equal the threshold energy of fusion into He, causing the Sun to explode. Second, galaxies and stars formed after matter and radiation decoupled in the Big Bang, at a decoupling temperature of about 3000 K. As matter contracted under gravity to form galaxies and stars, its temperature rose gradually, again crossing the fusion threshold energy. Both objections imply that stars would explode in the early stage of fusion and could never be stable.
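The energy scales behind the first objection are easy to check. Using standard constants, kT at the solar photosphere is about half an eV and the Wien-peak photon energy a couple of eV, consistent with the "a few eV" figure above. (The 5778 K value used below is the standard effective surface temperature, slightly above the text's round "5,000 degrees".)

```python
# Thermal energy kT and Wien-peak photon energy at the solar surface, in eV.
k_B = 8.617333262e-5      # Boltzmann constant, eV/K
h_c = 1239.84193          # h*c, eV*nm
wien_b = 2.897771955e6    # Wien displacement constant, nm*K

T_surface = 5778.0        # effective solar surface temperature, K
kT = k_B * T_surface              # ~0.50 eV thermal energy
lam_peak = wien_b / T_surface     # ~501 nm, peak of the blackbody spectrum
E_peak = h_c / lam_peak           # ~2.47 eV photon energy at the peak
print(round(kT, 2), round(E_peak, 2))
```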
So the super-energy threshold did not make it into my undergraduate thesis. But after graduating I kept thinking about the problem. If we assume that at the beginning of the universe the kinetic energies of electrons and protons were far above thermal equilibrium values, the doubts can be resolved. The intensity of electromagnetic radiation is inversely proportional to the fourth power of the particle's mass, so over cosmic evolution the much lighter electrons radiate far more than protons. The electron energy in a plasma star could then be much lower than the proton energy, forming two thermodynamic systems within the star. I consulted plasma textbooks and found that in rarefied plasma, electrons and protons do maintain two temperature systems. The interior of a star is dense plasma, unlike a rarefied gas, but stellar formation and evolution are accompanied by fusion reactions, a dynamic process continuously generating energy: could two temperature systems form there as well?
The further question then concerns the evolution of the early universe. Is it possible that the proton energy had to be higher than the electron energy, so that the matter was in the steady state of a synergetic state rather than in thermal equilibrium? If so, the super-energy threshold idea may be correct, but the existing Big Bang theory would have to be completely rewritten. It has always been assumed that matter and radiation formed a thermal equilibrium in the initial state of the universe: since the cosmic background radiation has cooled to 2.7 K, the earlier the universe, the hotter its radiation must have been, and the mean energy of the baryons, the protons and neutrons, must have matched the radiation energy of the same epoch, reflected in a single equilibrium temperature before matter and radiation decoupled. Big Bang cosmology was constructed along this line of thought. But this may have limited our imagination of the initial state of the universe, and it leads to the dark energy problem.
1.2 The hypothesis of the primordial angular momentum at the beginning of the universe, the three types of galaxy structures, and the problem of dark matter and dark energy
What if the cosmic radiation background originated mainly from the electromagnetic radiation of electrons, rather than from a thermal equilibrium formed spontaneously by material collisions? This mind-opening idea can lead to a completely different picture of the initial state of the universe: the energies of electrons and protons at the beginning would be much higher than the Big Bang theory assumes. But what is its physical basis, and what consequences follow? To rebuild a cosmological model different from the Big Bang, it is not enough to argue the plausibility of the super-energy threshold; the model must also explain existing cosmological observations more powerfully. This led me to the hypothesis of primordial angular momentum at the beginning of the universe, which can explain the flatness of cosmic structure and the origin of the two-level structure of galaxies and stars. In the early 1980s, two developments fed this alternative imagination: Guth's inflationary cosmological model, and the galaxy rotation curves discovered by Rubin.
Consider Guth's inflationary model first. I find it hard to accept its physical picture of "a beginning but no end", an inflation that cannot terminate. Since the principle of least action describes elementary particles, it should also be able to describe the beginning of the universe. In describing elementary particles, minimizing the Lagrangian T − V is the subtraction of two positive quantities, and space is compressed toward a singularity. But the gravitational energy V between cosmic matter is negative, so T − V must express itself as expansion at the beginning of the universe: extremely high-energy indistinguishable particles must convert their energy into gravitational energy, driving the universe to expand. I imagine that all particles at the beginning of the universe were in quantum indistinguishable states. The expansion of the universe then does not start from the thermal equilibrium of the Big Bang, but is driven by least-action evolution out of the quantum indistinguishable state; after primordial nucleosynthesis, the indistinguishable state transforms into a synergetic state and forms galactic structure.
This picture of cosmic evolution from indistinguishable state to synergetic state to thermal equilibrium suggests that, if the two cosmic nucleosynthesis events are compared to two non-equilibrium phase transitions, the evolution of the universe is comparable to the non-equilibrium phase transition of a laser. At low energy a laser medium emits like an ordinary thermal source, in thermal equilibrium; as the energy rises, continuous laser light emerges from the resonator, what Haken in synergetics calls cooperative photon emission. That is why I use the term synergetic state. As discussed below, the speeds of the various stars in galaxies before stellar nucleosynthesis are similar, comparable to the photon synergetic state of a continuous laser. Furthermore, the beginning of the universe should not be the disordered thermal equilibrium of the Big Bang but an ordered state of quantum indistinguishability, similar to a pulsed laser: only at extremely high energy density does temporal indistinguishability arise, and only then does a pulsed laser with quantum identity form.
Now consider the galaxy rotation curves discovered by Rubin: the orbital speed of stars around a galactic core is nearly uniform over a large range of radii. My understanding is that this is a consequence of the synergetic state of the galaxy-formation process. Today, however, people compare it with the motion of the planets in the solar system, which must satisfy the virial theorem. For a gravitational system the virial theorem relates the average kinetic energy T and the gravitational energy V by 2<T> + <V> = 0: the average kinetic energy is only half the magnitude of the gravitational energy. This obviously holds for the planetary system of the solar system, where gravity keeps the planets from escaping. But do the stars in a galaxy satisfy the virial theorem? Set aside Rubin's measurements and simply look at photographs of spiral galaxies. The kinetic energy of the stars is close to the gravitational energy yet slightly exceeds it, presenting a marginal state at the super-energy threshold, that is, T slightly greater than |V|. When analyzing evolution parameters later, I will also argue that this is a marginal state.
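The contrast with the Keplerian expectation can be made concrete: for a central point mass the circular speed falls as r^(-1/2), so doubling the radius should cut the speed by a factor of √2, whereas Rubin's measured curves stay roughly flat. A minimal sketch in illustrative units (GM = 1):

```python
# Keplerian circular speed v(r) = sqrt(GM/r) for a central point mass.
import math

def keplerian_v(GM, r):
    return math.sqrt(GM / r)

v1 = keplerian_v(GM=1.0, r=1.0)
v2 = keplerian_v(GM=1.0, r=2.0)
print(round(v1 / v2, 4))   # 1.4142 -- the sqrt(2) fall-off the flat observed curves violate
```

A flat curve instead requires the enclosed mass M(r) to grow in proportion to r, which is the gap conventionally filled by dark matter and, in this article's proposal, by the synergetic-state entropy force.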
Thus the super-energy threshold appears not only in stellar fusion but also in the galaxy rotation curve. The spiral structure of a galaxy implies collective motion of its stars, which must carry enormous rotational angular momentum. A spiral galaxy is extremely flat, as is evident from the picture of the Pinwheel Galaxy below and from the measured data of our own Milky Way: its diameter is between 100,000 and 180,000 light-years, but its thickness is only about 2,000 light-years, a ratio of 50 to 90. This is far more exaggerated than a flying saucer; it is more like a sheet of paper. If the Big Bang imparted only translational momentum, its evolution would be like throwing objects from the Earth into the sky: throw too hard and the object escapes Earth's gravity, too softly and it is pulled back. That cannot explain the vortex character of galactic structure. The existing Jeans clustering theory describes galaxy formation only through thermal perturbations; it can give the approximate scale of galactic structure but cannot describe the vortex character.
The picture above is the Pinwheel Galaxy photograph from Wikipedia. All the luminous stars of the galaxy lie on the spiral arms, on the verge of being thrown off, reflecting a critical state of "on and off".
How, then, to explain these two features of cosmic matter: the super-energy threshold of the kinetic energy of galactic and stellar matter, and the vortex structure? I thought of playing with spinning tops as a child. Back then we children made our own tops, and when we played together we would whip them hard on an open patch of ground; after the call to stop, the contest was whose top could spin longest. I lost more than I won for one reason only: I am left-handed, so my top spun opposite to the other children's. Whenever my top touched any other, the two tops bounced far apart, a large part of their rotational kinetic energy instantly converted into linear kinetic energy, and both stopped spinning quickly. Tops spinning in the same direction, by contrast, were not ejected far when they collided and were little affected. I call this the ejection effect: the closer the rotation directions, the more the tops gather; the more opposed, the more they separate. It made me think that a primordial-angular-momentum hypothesis for the beginning of the universe, based on this ejection effect, could replace the Big Bang hypothesis.
The primordial angular momentum hypothesis regards the beginning of the universe as quantum vortex particles of equal energy, compressed from protons and electrons. These particles rotate at high speed like gyroscopes; I call such electron-proton pairs, with equal energy, high-speed rotation, and completely disordered, homogeneous spin directions, the primordial angular momentum. This reflects the characteristics of the early particles of the universe: the individual vortex energy must be extremely high and the mutual gravitational energy extremely low, so that fluctuations in the early universe are very small, satisfying the uniform and isotropic cosmological principle. But why not call them high-energy hydrogen atoms rather than primordial angular momentum? Because the hypothesis assumes only electrical neutrality and vortex character; their material composition cannot be guessed, and it is not important, so long as it reflects that in future evolution these particles will certainly separate into protons and electrons. Each particle, however, must have the following two characteristics: a homogeneous, high-speed rotational state, and a completely random spin direction.
The primordial angular momentum leads to the flatness of the universe's expansion, which is reflected in evolution by least action: the kinetic energy of quantum homogeneity must be converted into gravitational energy as the spatial scale expands. There is much room for imagination in the specific physical description of this process. For example, the generation of particles may come from the flattening of curved space, similar to Hawking evaporation, which would also avoid the difficulty of the cosmic singularity; such ideas could generate many papers. But the more important physical picture, I think, is not the initial flatness of the universe that the primordial angular momentum can provide, but the picture of the evolutionary platform of the universe's expansion, which can explain the following three points: first, protons will always remain in a super-energy threshold state until stellar nuclear fusion; second, the formation mechanism of the three types of galaxies, vortex, elliptical, and irregular, can be explained from the gyroscopic ejection effect of the primordial angular momentum; third, this also leads to an explanation of the galaxy rotation curve by the entropy force of the synergetic state, not by dark matter.
As for the evolution and formation mechanism of galaxies, I will mention only two points, due to space limitations. First, the primordial angular momentum is assumed to separate gyroscopes with different vortex directions, while gyroscopes with close vortex directions are gravitationally "adsorbed" together to form galaxies. This process leads to a power-law distribution similar to the scale-free network effect, a complex-network concept that took shape after the Internet. But as early as the beginning of the 1980s, I believed that gravitational systems might have such "adsorption" characteristics of matter clusters. Second, the stellar population effect further leads to the formation of three types of galaxies. Astrophysics calls the young stars like our sun, which currently emit higher energy and tend toward blue, Population I; the old stars with lower energy that tend toward yellow are Population II; and the supergiant stars that existed in the early universe but are now dead, judged from their explosion fragments, are Population III. I will explain below that the power-law distribution based on the gyroscope ejection analysis and its stellar population effect are the reasons for the formation of the three types of galaxies.
The concept of the scale-free network comes from a complex network model proposed by Albert and Barabási in 1999, which reflects that the connectivity of newly added nodes in a network is often distributed as a power law, and that the power-law exponent of a completely random network is 3. I believe that if, in the formation of galaxies during cosmic evolution, the relationship between the overall kinetic energy T and the gravitational energy V sits just at the marginal state of the super-energy threshold, that is, T just slightly greater than V, then the matter clustering after primordial nucleosynthesis will also be distributed as a power law. From the smallest to the largest cluster, the number of stars k that can evolve should present a power-law distribution P(k) ∝ k^(-3). Of course, in the early universe there was only a distribution of mass clusters; galaxies and stars evolved later. From actual cosmic observation, 95% is small particles of non-luminous interstellar matter, and galaxy structures range from dwarf galaxies with 10^8 stars to giant galaxies with 10^14 stars. The difference from a power-law distribution lies mainly in the middle part: galaxy structures between free stars and dwarf galaxies are missing. In addition, the distribution from dwarf galaxies to giant galaxies is also much flatter than a power law.
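The power-law claim above can be illustrated numerically. The sketch below is my own illustration (all function names are mine): it grows a network by the standard Albert-Barabási preferential attachment rule and estimates the degree exponent from the tail of the degree distribution; the theory predicts an exponent of about 3, matching P(k) ∝ k^(-3).

```python
import math
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Grow a scale-free network: each new node attaches m edges to
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    degree = Counter()
    targets = []                      # node i appears degree[i] times
    for i in range(m + 1):            # start from a small complete core
        for j in range(i):
            degree[i] += 1; degree[j] += 1
            targets += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:        # preferential attachment by degree
            chosen.add(rng.choice(targets))
        for t in chosen:
            degree[new] += 1; degree[t] += 1
            targets += [new, t]
    return degree

deg = barabasi_albert(20000, 3)
n = len(deg)
ccdf = lambda k: sum(1 for d in deg.values() if d >= k) / n
# the CCDF of a k^(-gamma) law falls as k^(1-gamma);
# fit gamma between k=5 and k=50
gamma = 1 + math.log(ccdf(5) / ccdf(50)) / math.log(50 / 5)
print(f"estimated degree exponent: {gamma:.2f}")
```

The estimate fluctuates with the random seed and network size, but stays close to the theoretical value of 3.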
This is where the revision of the stellar population effect comes into play: the gyroscopic ejection effect causes the smaller the clusters formed by "adsorption", the greater their translational momentum, and the smaller their overall vortex angular momentum, which will lead to the formation of stars first. The individuals of Population III formed in this way are also very large, three orders of magnitude larger than ordinary stars and have already exploded. The remnants of the explosion will be absorbed by the gravity of other galaxies, thereby flattening the power-law distribution of the original matter clusters. Furthermore, the more massive the galaxy formed by the gyroscopic "adsorption", the more its overall vortex angular momentum will be distributed in the periphery, the more the matter clusters in the periphery will be in an escaping state, and the slower the process of star formation will be. The matter clusters with the lowest angular momentum in the center will form stars first. Therefore, the earliest formed stars, as "heavy objects", will be ejected from the center first to form Population II, that is, the old stars observed now. Such ejection in spiral galaxies is also reflected in the spiral arms that exist in almost any spiral galaxy.
The physical picture above shows that the lack of galaxy structures between free stars and dwarf galaxies is mainly due to the explosion of Population III. Furthermore, spiral galaxies with spiral arms should be the mainstream galaxies. Why, then, do such mainstream spiral galaxies bifurcate away from the power-law distribution into elliptical and irregular galaxies? Because medium and small spiral galaxies move faster as their mass decreases, a capture effect forms: galaxies with intermediate translational momentum and overall vorticity capture large numbers of old Population II stars and form elliptical galaxies. This was mentioned in the introduction above, and it explains why elliptical galaxies have a smaller entropy-force component and do not show dark matter characteristics. Smaller galaxies, moving faster, are more likely to merge gravitationally into irregular galaxies. The evolutionary platform of the universe's galaxies given above is of course too simple and crude, but it reflects that the three types of galaxy structure must be explained by the vortex motion mechanism based on the primordial angular momentum gyroscope.
Furthermore, the more reasonable primordial angular momentum hypothesis above denies the existence of dark matter: at least the uniform motion of stars inside galaxies comes from the synergetic state, where the entropy-force drive of the marginal state of the super-energy threshold exceeds the local gravitational drive. I will analyze the effect of information entropy maximization later. As for the dark energy problem, I think it does not exist: the hypothesis of the primordial angular momentum of the universe means that its value is adjustable, and physical law comes from the spontaneous evolution toward complexity of the evolutionary platform at the beginning of the universe. The physical picture above shows the expansion of the universe going from the fully indistinguishable state to the synergetic state of galaxies, then to the entropy-force drive, and finally to the stellar nucleosynthesis of the super-energy threshold of nuclear fusion. This picture should be more reasonable than the hot, disordered Big Bang universe. At the same time, one adjustment is needed: primordial nucleosynthesis and stellar nucleosynthesis should reflect the same fusion reaction of 4 protons, and the physical cause of the explosion of primordial nucleosynthesis is the same as that of the explosion of stellar nuclear fusion.
Regarding the physical pictures of galaxy formation above, I am most interested in the formation of the solar system and planetary system described by Gamow in One, Two, Three... Infinity: it may have come from the beginning of solar nuclear fusion coinciding with the collision of another huge piece of interstellar matter with the sun. The big impact theory was later denied by other astrophysicists, and there is still no conclusion. But I think a solar big impact may be necessary: the deviation of the moon from the ecliptic plane, the existence of irregular satellites in the solar system, and so on all point to it. Furthermore, the timing of the big impact should be extremely special. The explosion of the Population III stars described above, absorbed by other galaxies; the spiral arms formed by the ejection of Population II stars from the galaxy; and the planets formed by the ejection of matter from the inside outward during the solar big impact: these three physical pictures have something in common. Does this reflect that the universe has the characteristics of a common evolutionary platform? This question is left for readers interested in cosmology to think about.
By the mid-1980s, I decided I could not keep on with such wild thoughts, because I believed the first two major physics problems had been basically solved. My early wild assumption of the fully homomorphic explanation of the elementary particle problem had led to my failure in the postgraduate entrance examination the year I graduated from university. I had to take the examination again, with solving the "surprise" problem of life evolution as my goal. This also follows from the above thinking about the evolution of the solar system, which is related to the formation of the earth's matter and to the formation of the concept of the evolutionary platform. The problem of the expanding universe is still relatively simple; more important is the mechanism of the formation of life on earth. Feeling that the problem of life was the real challenge, I chose the major of non-equilibrium statistical physics at Beijing Normal University to focus on life phenomena from the perspective of non-equilibrium self-organization. I could not fail again this time.
So, around 1985, I decided to stop thinking about cosmology, and submitted the above idea of primordial angular momentum, which I thought was the most creative, to the magazine "Potential Science" (now discontinued). However, such a rich idea was compressed into half a page, because it costs 50 yuan to publish a full page. When I submitted the manuscript, I attached a note asking if it was okay to publish only half a page? If a 25 yuan layout fee is required, I can consider paying it. At that time, I was an assistant professor at Hunan University and my monthly income was only 50 yuan. The editor later published it and did not write to ask for any fees: it seems that my idea of primordial angular momentum was at least recognized by the editorial department of the magazine at that time.
2. The "surprise" problem: the scaling mechanism of biological vitality and the swing mechanism of life evolution
As mentioned above, my idea was to explore the origin of life starting from the elementary particles of the atomic nucleus, through the evolution of stars and the universe, to the formation of the solar system and the earth. This is why I had to look for the answer to my childhood "surprise" question in non-equilibrium statistical physics. Why, then, did I not choose biophysics as my major, but non-equilibrium statistical physics? It needs to be explained here that studying biological vitality and life evolution from the perspective of physics is related to molecular biology or biophysics, but belongs to a different level of analysis. Just as in computer software the basic platform and the application are problems at two different levels: Facebook and WeChat are social media applications, while the computer principle proposed by von Neumann, including the stored program and program control, is the most basic platform. Exploring biological vitality and life evolution from physics is like asking what the most basic "stored program" and "program control" are in the phenomena of life.
It is also necessary to explain that the 1980s had a great influence on my thinking: at that time, there were the "old three theories" in the scientific and technological circles: systems theory, cybernetics, information theory, and the "new three theories" of mutation theory, dissipative structure theory, and synergetics. As a college student at that time, I first taught myself the "old three theories". This knowledge reserve and the choice of the "new three theories" in the postgraduate entrance examination had a huge impact on the formation of my thoughts throughout my life. For this reason, during my master's degree, I made up some biological knowledge and consulted some biological literature, trying to study life phenomena from the perspective of the "new three theories". It was during this exploration process that I found that the application of basic principles of physics to the study of the mechanism of material evolution and the mechanism of life was very slow - the mechanism proposed by molecular biology has no physical explanation at all. This is the research direction I set for myself, to explore the physical mechanism of life vitality and life evolution. The formation of the concept of evolutionary platform later came from my physics thinking during this period.
I will point out that we need to understand the evolution of life from the process of the earth's evolution and the generation of matter. This reflects the swinging process driven by the maximization of two types of entropy, ordered entropy and disordered entropy. I will analyze the concepts of path dependence and far from equilibrium in detail in the next two sections. In this section, I first put forward two important concepts: spatial degeneracy and topological degeneracy (later I learned that Professor Wen Xiaogang also proposed a similar concept of topological order in the late 1980s, and I will also put forward the topological degenerate state with different meanings in the following text). This is related to biological chirality. Spatial degeneracy must be minimized to facilitate the formation and decomposition of all biological molecules, such as the folding and degradation of proteins. However, the energy contained in topological degeneracy is necessary for the translation and synthesis of DNA and RNA into proteins. The above life processes involve two basic physical mechanisms, which I call the scaling mechanism and the swinging mechanism. In this section, I will try to describe these two mechanisms in popular science language.
2.1 Protein scaling mechanisms that reflect biological activity
Let's talk about the scaling mechanism first; its purpose is to understand the nature of biological vitality. The previous article mentioned that the bovine insulin synthesized by Chinese biologists could cause convulsions when injected into experimental mice. Before that, West Germany and the United States had also announced synthesized insulin, but it could not cause convulsions in mice, which shows that the molecular structure of their synthesis may have been wrong, resulting in insufficient biological vitality. So what is the nature of biological vitality? I believe that biological vitality is not a manifestation of static biomolecular structure, but of dynamic folding, as in protein or insulin, whose physical picture leads to the energy characteristics of continuous contraction and blooming. This is different from the disordered motion of energy in the thermal equilibrium of ordinary matter. Continuous scaling reflects a steady state at a specific energy; although this cannot be understood as the steady state of the homomorphic or synergetic state based on the nuclear magic numbers discussed above, it still carries the meaning of maximizing information entropy and requires energy to maintain. This is very similar to the steady state of computer storage elements: anyone familiar with basic electronic circuits knows that the bistable state of transistors requires energy to maintain.
The above understanding of the steady state first came from my surprise at the thermometer. Einstein spoke of the creation of scientific theories as starting from a person's "sense of surprise" and forming theories that remove the surprise. Einstein's surprise at the age of 5 came from the compass; mine at the age of 5 came from measurement with a thermometer: the temperature of every object in the room is the same, and the temperature of different people is also the same. People differ greatly in physical indicators such as height, weight, blood pressure, and pulse, yet why is normal human body temperature always between 36-37°C? I have never heard of a person whose normal body temperature is one degree higher, unless it is a pig: the normal body temperature of a pig is 38°C. Given that absolute temperature is the above Celsius value plus 273.15 K, human body temperature is steady to an error of only about 0.3% (the temperature range of biological activity in plants is larger). For this reason, biological activity may come from protein folding forming a steady state in a specific temperature range. I will explain this physical understanding further below.
Different people have different blood types and skin colors, but their protein structures are the same. Regarding protein folding, there are two doctrines in biology today: the Levinthal paradox and the Anfinsen creed. The Levinthal paradox reflects the puzzle of protein folding patterns such as the α-helix and the β-sheet. A simple analogy: there are 361 points on a Go board, and each point can be occupied in three ways, by a white stone, a black stone, or left empty. There are 3^361 possible board configurations in total, far more than the number of nucleons in the universe. Peptide and protein chains contain dozens or more residues, and the number of units involved in folding is far greater than the number of Go points, so there are astronomically many folding patterns. How does a protein find its correct folding pattern? The Anfinsen creed holds that a protein will always fold to the state of minimum physical free energy. But the problem is that besides the minimum, the free energy of a protein system also has countless local minima: as if there were infinitely many "pits" in the ground and a ball might fall into any of them, how can it always fall into the deepest "pit"?
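The combinatorics behind the Levinthal paradox is easy to check directly. The short sketch below is my own illustration; the commonly quoted figures of roughly 10^80 nucleons in the observable universe and roughly 10^13 conformational trials per second are assumed inputs, not claims from the text. It computes 3^361 exactly and repeats the classic Levinthal-style estimate for a 100-residue chain with three backbone states per residue.

```python
import math

go_states = 3 ** 361                  # Go-board configurations
digits = len(str(go_states))
print(f"3^361 ≈ 10^{digits - 1}, versus ~10^80 nucleons in the universe")

# Levinthal-style estimate: 100 residues, 3 states each,
# sampled at an assumed 10^13 conformations per second
conformations = 3 ** 100
seconds = conformations / 1e13
years = seconds / 3.15e7
print(f"exhaustive search would take ~10^{int(math.log10(years))} years")
```

Even for a modest 100-residue chain, exhaustively sampling conformations would take vastly longer than the age of the universe, which is exactly the gap the paradox points at.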
How, then, to understand the physical nature of protein folding? The Levinthal paradox and the Anfinsen creed above are not understood from the perspective of biological vitality. All types of proteins in humans and other animals are roughly the same, differing by only a few amino acids; for example, human and pig insulin differ by just one amino acid residue. It is therefore reasonable to assume that protein folding in every organism must be maximized in its specific temperature range, that is, biological vitality must form a steady state at a specific body temperature. I think its meaning can be described simply by the scaling mechanism: the fully folded state is the contracted state, like a clenched fist, and the fully unfolded state is the blooming state, like the five fingers spread open. Biological vitality means that protein molecules must frequently run through the above cycles of contraction and blooming. Although insulin has only two peptide chains, it should also share this vitality characteristic of proteins: at room temperature it is of course in thermal equilibrium, with no special endothermic or exothermic behavior, but at a special temperature it will trigger biological vitality and present a dynamic steady state.
The steady-state meaning of the scaling mechanism above actually reflects the meaning of energy and entropy for the system. Described in the language of physics, it presents two physical pictures: a macroscopic probability distribution and microscopic quantum tunneling.
First, this reflects the relationship between the macroscopic thermodynamic probability P(E) = Ω(E) e^(-βE) and the system energy E. This is a very basic concept in statistical physics; let me briefly explain it. Here Ω(E) is the total number of quantum states of the material system at energy E, which is usually distributed as a power law, Ω(E) ∝ E^M, while the probability decays exponentially with energy as e^(-βE), where β is the inverse temperature. For ordinary matter, such as gas molecules, the peak of P(E) appears where the power-law growth of Ω(E) meets the exponential decay e^(-βE), giving a probability extremum at a specific energy for a given temperature. In understanding protein folding, however, I feel that its Ω(E) should grow exponentially with E, that is, Ω(E) ∝ e^(αE), so that it can bloom fully. Then at the body-temperature point α = β, the P(E) of the protein system is equally probable for all possible energy values E. This reflects the meaning of protein folding being fully scalable: at this temperature, the information entropy S = -Σ_E P(E) ln P(E) is maximized.
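The contrast between the two densities of states can be made concrete with a small numerical sketch. This is my own illustration of the formulas above, in arbitrary units: a power-law Ω(E) ∝ E^M gives a peaked P(E), while the conjectured exponential Ω(E) ∝ e^(αE) at α = β gives a flat P(E) whose information entropy reaches the maximum ln N.

```python
import math

def distribution(omega, beta, energies):
    """Normalized thermodynamic probability P(E) ∝ Ω(E) exp(-βE)."""
    w = [omega(E) * math.exp(-beta * E) for E in energies]
    Z = sum(w)
    return [x / Z for x in w]

def entropy(P):
    """Shannon information entropy S = -Σ P ln P."""
    return -sum(p * math.log(p) for p in P if p > 0)

energies = [0.1 * n for n in range(1, 101)]   # 100 energy levels, E > 0
beta = 2.0

# ordinary matter: power-law density of states -> peaked P(E)
P_gas = distribution(lambda E: E ** 5, beta, energies)

# conjectured protein case: Ω(E) ∝ e^(αE) with α = β -> flat P(E)
P_protein = distribution(lambda E: math.exp(beta * E), beta, energies)

print(f"gas-like entropy     S = {entropy(P_gas):.3f}")
print(f"protein-like entropy S = {entropy(P_protein):.3f}")
print(f"maximum possible  ln N = {math.log(len(energies)):.3f}")
```

The exponents M = 5 and the level spacing are arbitrary choices; whatever values are used, the flat case saturates the entropy bound ln N while the peaked case falls below it.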
The above picture comes from the protein folding entry in Wikipedia: the leftmost end is the fully relaxed state, and the rightmost end is the fully folded state, which is reflected in multiple folding pathways.
Second, if protein folding is equally probable at every energy point E, this means all quantum energy states can be traversed "without damping". The concept of quantum tunneling was introduced above when discussing the magic numbers of atomic nuclei. For protein folding, this means chemical bonds will frequently break and reconnect, rather than being "stuck" in some state, "unable to tunnel through". The Levinthal paradox and Anfinsen creed mentioned above reflect that people used to understand the motion of protein molecules only from classical motion or chemical bond energy, that is, from interaction forces such as the van der Waals force and the hydrogen bond. Such an understanding has its correct side, since proteins obviously must also be in classical thermal equilibrium, exchanging heat with their environment. But it ignores that, as a biological molecule, the "undamped" character of the microscopic folding cycle of a protein does not come entirely from classical motion; it must also show quantum tunneling characteristics that surmount potential barriers. This is why the protein system can reach its energy minimum every time it folds.
Furthermore, besides being controllable so that the life process is orderly, a protein also needs to be degraded into polypeptides and amino acids after its mission is completed, and then reabsorbed by the organism to form a cyclic biochemical process. Thus the scaling mechanism must reflect not only the folding of proteins but also their degradation; this may require the assistance of degradative enzymes, but degradative enzymes are themselves proteins. This process is very complicated in existing molecular biology, but below I will still explain it with a simple step concept, which should reflect that the protein system should try to avoid spatial degeneracy (a concept discussed in detail later), as this is necessary for both folding and degradation. If protein folding is regarded as a macroscopic quantum process, with the fully contracted state as the energy ground state E_0 and the fully open state as the maximum energy state E_N, then the folding process should pass through the energy steps {E_0, ..., E_n, ..., E_N} from 0 to N.
In order for the thermodynamic probability P(E) to be equal at each energy step, the system needs to avoid as far as possible any degeneracy of adjacent steps, E_n = E_(n+1). This carries two meanings. First, degenerate energy levels may make those two levels "easy" to cross by quantum tunneling while other levels remain "hard" to cross; eliminating degeneracy is most conducive to the overall folding, which also shows that the Levinthal paradox is superfluous. Second, the closer to the quantum ground state, the more degeneracy should be avoided, because quantum tunneling depends on the ratio of the barrier height to the energy. If each quantum energy level is regarded as a "pit" of chemical-bond binding, then the lower the energy of the "pit", the smaller the quantum barrier and the more favorable the tunneling. Otherwise the Anfinsen creed would not be satisfied and protein folding would be uncontrollable: the folded structure might tunnel out of one "pit" and be unable to contract into another, and protein folding would become disordered. It is therefore very necessary to avoid spatial degeneracy as much as possible.
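The claim that evenly spread, non-degenerate steps are best for traversal can be caricatured numerically. The sketch below is not a physical model of tunneling; it simply assumes (my own stand-in assumption) that crossing a gap ΔE costs a time of order exp(ΔE), and compares an evenly spaced ladder against random spacings with the same total span E_N - E_0. By Jensen's inequality the even ladder always wins: a near-degenerate pair (a cheap gap) is always paid for by a larger, more expensive gap elsewhere.

```python
import math
import random

def traversal_cost(gaps):
    """Toy cost of climbing a ladder of energy steps: each gap dE is
    crossed at a rate ~exp(-dE), so the expected time ~ sum of exp(dE)."""
    return sum(math.exp(d) for d in gaps)

N, span = 20, 10.0
even = [span / N] * N                 # no degeneracy: equal spacing

rng = random.Random(1)
random_costs = []
for _ in range(1000):
    w = [rng.random() for _ in range(N)]
    s = sum(w)
    # random gap pattern rescaled to the same total span
    random_costs.append(traversal_cost([span * x / s for x in w]))

print(f"even spacing cost  : {traversal_cost(even):.2f}")
print(f"best of 1000 random: {min(random_costs):.2f}")
```

The exponential cost function is only a qualitative stand-in for a tunneling rate, but the convexity argument behind the comparison does not depend on its exact form.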
Finally, regarding whether the above physical understanding of the protein folding scaling mechanism is accurate, I would like to raise two questions: one is the issue of experimental verification, and the other is the issue of physical understanding.
Let's talk about experimental testing first. If the scaling mechanism above holds, then protein activity should be temperature-dependent. The simplest example: pig insulin differs from human insulin by only one amino acid residue, so pig insulin can be injected into the human body to treat diabetes. But the body temperature of a pig is 38°C, one degree higher than human body temperature, and pig insulin should show maximum activity at that temperature. So if a diabetic has a cold with a low fever of 38°C, will injecting pig insulin work better than when he has no fever? I have not found any similar medical experiment on the Internet. Of course, the experiment imagined above may not be well posed; those who understand biomedicine could design better experiments to test the accuracy of the scaling mechanism.
Second, the above physical understanding of protein folding shows that the concept of entropy in physics should not be constructed on the thermodynamic limit N→∞ of the number of individuals in a system, but on information entropy in the sense of an independent system: even a single protein molecule has its own independent information entropy. An important model in the history of statistical physics is the Ising model. The Onsager solution of this model, including the later spontaneous magnetization solution and the Lee-Yang phase transition theorem, all seek to construct every physical phase transition on the limit N→∞. Although this is mathematically rigorous, its physical understanding is problematic. This understanding of the thermodynamic limit has led people to mistakenly accept the concept of the so-called continuous phase transition, without establishing a correct understanding of the gas-liquid and liquid-solid phase transitions, which will be described in detail below.
2.2 The swing mechanism of life evolution
Let me continue with the swing mechanism, which is a continuation of my thinking about the scaling mechanism. Since protein folding reflects entropy maximization with equal probability at all energies, this should also be reflected in the nucleic acid molecules, DNA and RNA, that generate proteins. In addition, inorganic chemical reactions are usually fast while organic chemical reactions are slow; this is knowledge we learned in middle school. Linking all of this to the physical understanding of entropy maximization, I have the following conjecture: entropy maximization may have two directions, disordered entropy and ordered entropy, and the evolution of organic matter and life must be reflected in a swing between these two types of entropy maximization. This is the physical understanding of the phenomena of life, and it is the origin of my proposal of the swing mechanism. It requires breaking through the existing physical understanding of the concept of entropy, so let me begin with the origin of the concept of thermodynamic entropy.
In the 1850s, Clausius first created the concept of entropy in macroscopic thermodynamics. Later, Boltzmann gave it a physical understanding as the total number of microscopic states; the formula S = k log W is engraved on his tombstone. This understanding of entropy, however, reflects only disorder, and implies the idea that an ordered state must be reflected in a reduction of entropy. Schrödinger, one of the founders of quantum mechanics, first created the concept of negative entropy in his 1944 book What is Life?, and Prigogine's dissipative structure theory further proposed that stability far from equilibrium can only be maintained by an entropy flow. Such an understanding of entropy limits the imagination about the evolution of life. The meaning system of the swing mechanism must therefore include the following two points in its understanding of entropy. First, there must be a concept of ordered entropy distinct from disordered entropy; otherwise the orderliness of life evolution cannot be explained. Second, the central dogma of molecular biology should show that the life process is both irreversible and cyclical, and system evolution should be reflected in a swing between the two types of entropy maximization.
In fact, people have tried to re-understand the meaning of entropy in the past few decades. For example, Jaynes gave the maximum entropy principle based on Shannon's information entropy. This principle attempts to prove the equivalence of thermodynamic entropy and information entropy, but it is subjective and has the color of prior probability. For example, in the absence of testable information, entropy maximization follows the universal "constraint" that the sum of probabilities is 1 and is uniformly distributed, which is of course reasonable. However, such a conclusion cannot be drawn from the Gibbs statistical ensemble. In addition, Wolfram developed von Neumann's early concept of simulating the self-replication of biological cells in the 1980s and proposed a cellular automaton with computation as the core. This reflects the use of entropy to describe the evolution of cellular automata, which can give the evolution a steady type, periodic type, chaotic type, etc. This inspired me that the evolution of life is driven by entropy. Based on their ideas, I proposed the entropy energy criterion:
Its mathematical derivation starts from the aforementioned energy set {E0, ..., En, ..., EN} and assumes a corresponding probability set {P0, ..., Pn, ..., PN}. The energy the system can use to do work is E = Σn Pn(En − E0), and the information entropy is S = −Σn Pn ln Pn. Maximizing the system's entropy means maximizing the information entropy S under the constraint of the energy E, which is a problem of finding an extremum under constraints. First define the entropy-energy coefficient X = S − βE. The evolution of the system then has two extremal solutions. One is X = ln Z, where Z = Σn exp[−β(En − E0)] is the statistical partition function; this corresponds to the maximization of disordered entropy, with the constraint parameter β related to the thermal-equilibrium temperature. However, as analyzed in the earlier discussion of gravity, the evolution of the system may have another outcome: all energy levels En converge, and all probabilities Pn converge as well. The system then achieves the maximization of ordered entropy, with equal energies and equal probabilities and no temperature: S = ln N.
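The two extremal solutions above can be checked numerically. Below is a minimal Python sketch of my own, not part of the original derivation, using an arbitrary toy energy set: it verifies the identity S − βE = ln Z for the Boltzmann-weighted solution, and S = ln N for the equal-probability solution.

```python
import math

# Toy check of the two extremal solutions of the entropy-energy coefficient
# X = S - beta*E. The energies and beta below are arbitrary illustrative values.
def boltzmann(energies, beta):
    e0 = min(energies)
    weights = [math.exp(-beta * (e - e0)) for e in energies]
    z = sum(weights)                      # partition function Z
    return [w / z for w in weights], z, e0

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

energies = [0.0, 1.0, 2.0, 3.0, 4.0]
beta = 1.5
p, z, e0 = boltzmann(energies, beta)
e_avg = sum(pi * (ei - e0) for pi, ei in zip(p, energies))
s = entropy(p)
print(s - beta * e_avg, math.log(z))      # disordered extremum: X = S - beta*E = ln Z

n = len(energies)
print(entropy([1.0 / n] * n), math.log(n))  # ordered extremum: S = ln N
```

The first printed pair agrees exactly because ln p_n = −β(En − E0) − ln Z for the Boltzmann distribution; the second pair shows the equal-probability case saturating at ln N.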
The simple mathematical argument above already shows that the evolution of any system may have two types of entropy maximization: disordered entropy maximization in thermal equilibrium, and ordered entropy maximization with equal probabilities and equal energies. As a macroscopic expression, however, the argument is too abstract, and it does not convey the physical picture of a system swinging between these two maximizations. Let me therefore explain the physical basis of my thinking. The microscopic picture of the swing mechanism of life's evolution actually comes from my reading of Anderson's famous article "More is Different", Science 177 (1972) 393. The mathematical argument above was not conjured out of thin air; it also comes from the following microscopic picture of quantum tunneling in the NH3 molecule. The reason this picture shocked me lies in the connection, mentioned above, between TNT explosives and N− ions.
The picture above is from "More is Different": the three H+ ions form an equilateral-triangle plane, through which the N− ion shuttles up and down by quantum tunneling at a frequency of up to 3×10^10 per second, so that NH3 on average loses its electric dipole moment. If the N atom is replaced by a P atom, the P− of the PH3 molecule also oscillates by quantum tunneling through the H+ plane, but at only a tenth of the NH3 frequency. If H is further replaced by F, the P in the PF3 molecule can no longer tunnel through the plane of the three F atoms; the symmetry is broken and the molecule has an electric dipole moment. Anderson used quantum tunneling to illustrate that symmetric structures have no electric dipole moment, and only molecular structures with broken symmetry can have one. To me, however, the quantum tunneling of the N− ion seems to bring about topological degeneracy, which is completely different in physical origin from the elimination of spatial degeneracy in the protein homeostasis discussed earlier, and thus presents two entirely different physical states.
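The tunneling doublet behind this picture can be illustrated numerically. The sketch below is my own toy model, not NH3 itself: it solves a one-dimensional double-well Schrödinger problem by finite differences, and the splitting between the two lowest levels is what sets the inversion (tunneling) frequency.

```python
import numpy as np

# Toy 1-D double well (hbar = m = 1): V(x) = 0.5*(x^2 - 4)^2 has minima at
# x = +/-2 and a barrier of height 8 at x = 0, loosely mimicking the
# "N above vs below the H3 plane" configurations.
x = np.linspace(-5, 5, 1000)
dx = x[1] - x[0]
V = 0.5 * (x**2 - 4.0)**2
# finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V on the grid
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(len(x) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)[:2]          # two lowest eigenvalues, ascending
splitting = E[1] - E[0]                # symmetric/antisymmetric doublet
print(E[0], E[1], splitting)
```

The ground-state pair lies well below the barrier, and the small splitting between the symmetric and antisymmetric combinations is exactly the quantity that shrinks when N is replaced by P (heavier, slower tunneling) and collapses when tunneling is blocked, as in PF3.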
Now let me explain the concept of spatial degeneracy mentioned above. It is simply the conventional notion of degeneracy in quantum mechanics, arising from the symmetry of spatial structure. The picture below, from Wikipedia, shows two chiral enantiomers. Why do biological molecules exhibit such chirality-based symmetry breaking? For example, almost all protein amino acids are L-type, while the ribose of RNA and DNA is D-type. I cannot derive the origin of chirality from physical analysis, but I can explain that if L-type and D-type molecules were mixed within the same type of molecule, the molecule's energy levels would become degenerate. As mentioned above, energy-level degeneracy would disturb the energy steps, which would be detrimental to the folding and degradation of proteins, and of course the same holds for RNA and DNA. Only by eliminating such spatial degeneracy can biochemical processes such as translation and folding proceed favorably.
Therefore, the meaning of spatial degeneracy lies in the symmetry of spatial structure, which in turn derives from the chemical bond energies of molecules or atoms within that structure. Chiral enantiomers have identical internal chemical bond energies but different spatial structures. If molecules of different chirality were mixed together in a protein peptide chain, energy-level degeneracy would of course follow. This is why I think that molecules of a given type in living organisms can contain only one chirality, even though I cannot explain the origin of chirality itself. Moreover, it is not only biological molecules that must avoid spatial degeneracy; so must the most common molecule, H2O, which is also the most abundant in any living organism. The two H+ ions flanking the O ion would be symmetric and degenerate, yet the molecule presents a bond angle of 104.45°, which means the quantum structure of H2O deliberately avoids spatial degeneracy. When such a degeneracy is lifted, the system lowers its ground-state energy: this is the Jahn-Teller effect that I will analyze later.
However, unlike the spatial degeneracy above, the quantum tunneling of the N− ion deliberately brings about topological degeneracy, and this reflects an essential difference. Topological degeneracy is spontaneously created by the system and increases the system's energy, and its purpose is to achieve degeneracy. It is not caused by the chemical bonds between neighboring atoms in the spatial structure, but by the kinetic energy of the N− ion's motion during quantum tunneling. Although the previous article mentioned that protein folding may also involve quantum tunneling, that is clearly different from the tunneling degeneracy of the N− ion: it is merely a tunneling process in which one chemical bond breaks and another forms, and such bond-breaking-and-reconnecting tunneling does not give equal quantum energies before and after the tunneling. The tunneling energy of the topologically degenerate N− ion, by contrast, is not only equal on both sides but also fixed. Imagine striking the system with a small hammer: the energy of the N− ion system may change momentarily, but after stabilizing it will still "return" to the steady state of its inherent energy.
Furthermore, the enormous energy release of TNT explosives makes me think that the above picture of the topological degeneracy of the N− ion may exist in every biological molecule. Most basic groups in organic matter contain nitrogen atoms. Ammonia, NH3, is mostly used in fertilizers and is required by all life. N− also exists in all protein and nucleic acid molecules: in the amino group −NH2 of protein amino acids, and in the nitrogen-containing bases that do the pairing in DNA and RNA. I therefore conjecture that such topologically degenerate quantum tunneling is the energy source of biological vitality, the basis of life. The above, moreover, is only a description at the level of individual biomolecules. From a systemic perspective, should the quantum tunneling of N− ions in different biomolecules also be coordinated with one another, thus constituting the essence of the vitality of the entire living system?
Next, I will explain in more professional language that the essential physical description of system evolution comes from the time-dependent Schrödinger equation, which is the basis of the evolution of any material system. Physics textbooks usually regard the Schrödinger equation as a non-relativistic approximation, which in my view is wrong: the description of relativistic particles in quantum field theory is likewise built on a Schrödinger-type equation, which should be understood as an equation of system evolution. Topological degeneracy thus enhances the degeneracy of a system, while spatial degeneracy is spontaneously eliminated, reflecting two different processes of system evolution. Why are there two such processes for two types of systems? This is the line of thought leading to the tunneling synergetic state and the bipolar evolutionary platform that I will develop later, so I will not elaborate here, and will only briefly describe the meaning of the swing mechanism formed by water-based and amino-based biomolecules.
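As a concrete illustration of "evolution under the time-dependent Schrödinger equation", here is a minimal two-state sketch of the NH3-style inversion doublet. This is my own simplification; the coupling strength delta is an arbitrary toy value. The state oscillates coherently between the two localized configurations.

```python
import numpy as np

# Two-state model: |L>, |R> = N below / above the H3 plane, coupled by a
# tunneling amplitude delta (hbar = 1, arbitrary toy units).
delta = 0.5
H = np.array([[0.0, -delta], [-delta, 0.0]])

def evolve(psi, H, t):
    # psi(t) = exp(-i H t) psi(0): the formal solution of the
    # time-dependent Schrödinger equation for a constant Hamiltonian
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi))

psi0 = np.array([1.0, 0.0], dtype=complex)   # start fully on one side
t_half = np.pi / (2 * delta)                 # half an inversion period
p_right = abs(evolve(psi0, H, t_half)[1])**2
print(p_right)   # -> 1.0: complete coherent transfer to the other side
```

Analytically the occupation of the other well is sin^2(delta*t), so the inversion frequency is set directly by the tunneling splitting 2*delta, which is the two-level counterpart of the double-well doublet discussed above.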
The topological degeneracy of the NH3 molecule reflects the character of one type of system, whose evolution tends toward the maximization of ordered entropy and for which temperature loses its meaning. The H2O molecule represents the other type, which eliminates spatial degeneracy and retains the direction of disordered-entropy maximization, with temperature meaningful. Water-based and amino-based biological molecules thus represent the two types of systems, whose evolution leads to disordered and ordered entropy maximization respectively. The swing mechanism means that the process of life's evolution must swing between these two maximizations. A precise description of the swing mechanism also rests on the concept of evolution parameters that I will propose in the next section. Here I will only note that the origin of life should be explored from the perspective of the earth's formation in the solar system. All inorganic matter on earth formed along a one-way path of phase transitions from high temperature to low temperature, whereas the organic matter of life's evolution formed along a cyclic path integrating gas-liquid-solid phase transitions. Water-based biochemical reactions absorb or release disordered heat, and the quantum tunneling of amino groups stores and releases ordered energy; together they constitute the two physical foundations of the phenomenon of life.
This completes the popular-science introduction to the three difficult problems in physics. Next, besides further thoughts on physical problems, I will also explain my belief that physics must undergo another paradigm shift. Current physics, I believe, can be summarized in two analytical frameworks: inertial spacetime and interactions. Both come from the observational paradigm: summarizing physical laws from experimental observation, or inferring new laws through theoretical analysis, and seeking recognition and consensus to form a paradigm. I believe instead that by constructing a physical description of the evolutionary platform, an analysis framework of system-evolution criteria must be formed, which leads to the state-reason idea: behind the physical explanation of any material state there must be a unified physical reason, namely the entropy-energy criterion that I will analyze later. Furthermore, the evolutionary platform reflects the objective attributes of system evolution, while the evolutionary criterion analysis framework is the subjective analytical method and expression of laws; the meanings of the two are consistent, so below I will use the two concepts interchangeably.
3. The evolutionary criterion analysis framework: system evolution parameters and the cyclic path and bifurcation path of life
After I received my Ph.D. in 1996, I left academic physics, and it was not until 2013, after seeing Tang Chao's seesaw model, that I started studying physics problems again. During the years away from academia, besides making a living, I kept studying and following economic issues. I found that economists and physicists analyze problems very differently. Physicists must focus on conclusions that can be verified by experiments; economists do not care about this at all, treating past events as sunk costs and focusing only on the future trend of economic evolution. This leads to analytical methods in economics very different from those in physics. For example, economists often analyze by shifting supply-and-demand equilibrium curves, and distinguish interior-point solutions from corner-point solutions. This struck me as strange: the supply-demand equilibrium of an economic system seems comparable to the thermal equilibrium of a material system, so why is there no such curve-shifting method of analysis in physics?
Another question: more than a decade ago, the economist Professor Zhang Weiying wrote an article titled "Rethinking Economics", which is still available online. Its general idea is that economics went astray after the marginal revolution of the 1870s. It is like painting a girl's portrait: it was not done well at the start, and later generations, thinking certain parts badly painted, made corrections, only for the portrait to get worse with every correction. After reading it, I felt that the deviation of economics did not begin after the marginal revolution; there may already have been problems when Adam Smith founded the field. I will return to this issue later. What I want to express here is that both economics and physics seem to have gone astray, but economics can be checked against observed economic phenomena, and a wrong picture can be corrected. Correcting physics is far more difficult: every step in the development of its theory is supported by experiments, so the development of physical theory is "kidnapped" by verified experiments, and correcting deviations in our understanding of physics is much harder.
With these two questions as an introduction, I will first compare economics and physics and construct the concept of evolution parameters based on a comparison of the system's total kinetic energy and interaction energy. The evolution parameters, together with the concepts of the equilibrium state, synergy and indistinguishability from the previous two sections and the entropy-energy criterion, will form the most basic concepts of the evolutionary criterion analysis framework of physics. This is also the starting point for analyzing physics problems with the state-reason idea: we must first survey the various states of the material world before we can give the reasons for their formation. This section therefore starts from an economic wealth model, then uses the concept of evolution parameters constructed from economics to analyze the problems of gravity and of the electromagnetic force, and finally establishes the concept of path dependence to describe the two physical mechanisms of the cyclic path and the bifurcation path of life.
3.1 Establishing the concept of evolution parameters: a comparison between material and economic systems
Let us first discuss why the thermal equilibrium state in physics is not comparable to economic equilibrium. Thermodynamics and statistical physics grew up with the technological progress of the steam-engine era and emphasize molecular collisions in gases and the work done on the surroundings. Thermal equilibrium in physics does not actually mean an equilibrium formed by the internal forces of an arbitrary material system; it is built on the collisions of all the molecules inside a thermodynamic system and their heat exchange with the outside, and mainly reflects the characteristics of gases. Although Gibbs's ensemble theory also has the microcanonical ensemble, with no heat exchange with the outside world, its analytical basis is still molecular collisions. Thermal equilibrium in physics is therefore not the same concept as economic equilibrium, which reflects the balance of supply and demand within an economic system. Next, I will try to construct the concept of evolution parameters, a concept shared by physics and economics.
The concept of evolution parameters comes from my physics-based comparison of two economic models. The wealth-distribution model published by Yakovenko et al. in Rev. Mod. Phys. 81 (2009) 1703 rests on a thermodynamic analysis of physical collisions, but it drew no positive response from the economics community because it could not produce the Pareto power-law distribution that an economic system should exhibit. The Black-Scholes option-pricing model based on Brownian motion, by contrast, is very successful. What is the problem here? That physical collisions admit only thermodynamic analysis and are therefore not equivalent to economic transactions may be just the superficial reason. My deeper feeling is that both economics and physics have gone astray because the thinking patterns of both disciplines are too subjective, whereas the basis for forming laws should be objective description: the wealth-distribution model is too subjective, while option pricing is grounded in objective data analysis and is therefore very successful.
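For readers unfamiliar with the second model mentioned above, here is the standard closed-form Black-Scholes price of a European call option. The formula itself is textbook material; the parameter values below are arbitrary illustrative numbers of my own choosing.

```python
import math

# Standard Black-Scholes price of a European call (no dividends).
def norm_cdf(x):
    # cumulative distribution function of the standard normal, via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# at-the-money call: spot 100, strike 100, r = 5%, volatility = 20%, one year
print(bs_call(100.0, 100.0, 0.05, 0.2, 1.0))   # about 10.45
```

The model's success, as noted above, rests on its objective footing: the Brownian-motion description of prices is calibrated directly against observed market data.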
Let us first see why economics can be misled by subjective factors. This comes from the questions economic research focuses on: the factors that affect economic development. Conventional factors such as the employment rate, interest rates and inflation are familiar to everyone, while proposing unusual factors is often "cheered" by economists. For a time, the most-cited paper by Chinese economists was a strange article by a then-unknown scholar: it argued that China's one-child policy made Chinese parents worry about their children's future marriages, which drove the investment-led development of China's economy. I went and read that article at the time; it did contain a great many mathematical formulas supporting its argument, and it truly felt like earnest nonsense. But it shows the value orientation of economists: unusual angles of thought matter greatly. For this reason, many examples of bizarre causal relationships have led economic theory astray, and superficial concepts such as the market for lemons and the prisoner's dilemma came and went quickly.
That physics research is also misled by subjective factors is reflected in the fact that explaining material phenomena is not considered enough: precision is also sought on the basis of theoretical beauty. The evolution of the real material world comes from many factors, and no precise conclusion is possible if too many of them are considered. The theories preserved through the development of physics therefore show a "survivorship bias": the simpler and clearer a theoretical analysis, and the more precisely its mathematical conclusions match experiments, the more easily it is accepted. The quantum-field-theory calculation of the electron's Landé factor mentioned earlier is an example. Popper's falsificationism has had an even greater influence, and it too stems from beauty and precision: Eddington's expedition verified the gravitational deflection of light predicted by the extremely beautiful general theory of relativity, and Popper concluded from this that the more falsifiable a theory is, the more it deserves recognition. But I will point out later that this view has actually misled the development of physical theory.
For this reason, I believe that the construction of both economic and physical theory should eliminate subjectivity and rest on objectivity of a more universal kind, reducing subjective value judgments as much as possible. This differs from the existing distinction in economics between positive and normative economics, and its meaning is not the same. Next, I will take the economic system as an example and construct a wealth model. In objective terms, the more developed an economy, the more economic connections there are between economic individuals. I therefore think the Yakovenko wealth model should be replaced by the Albert-Barabási model of complex networks mentioned above. Economic transactions are not spatially restricted; likening transactions to collisions implies that people's trades are constrained by structures such as spatial dimension, which is not an objective description. In a complex network, by contrast, transactions come from an unrestricted number of random choices by both supply and demand sides, which can be represented by the following directed graph.
The arrows in the figure represent the provision of goods or services in exchange for monetary payment; the edges can carry numerical weights representing the wealth growth brought by each transaction. This captures the essence of economic transactions, and such a complex network seems a more objective framework for a model of social wealth. Each node of the network is an individual of the system, and its wealth capacity is analogous to the kinetic energy of a material individual; the edges formed between supply and demand sides in the course of evolution represent transactions. Economic development means more wealth, reflected in more edges in the system and greater total energy. Moreover, the weighted directed graph itself reflects economic equilibrium: goods or services that fail to connect to any node become sunk costs over time; all realized edges are traded through currency, and all traded goods come to be presented in increasingly standardized specifications, which reflects a certain indistinguishability that can be represented by entropy. The energy and entropy characteristics of the material system can thus be represented by the weighted directed graph above.
A mathematical analysis based on complex networks gives the wealth distribution in the state of entropy maximization: without increasing returns to scale it is only the Poisson distribution of a random graph, while with increasing returns to scale it leads to a power-law distribution. Adding an exponential-growth weight for increasing returns, such as the exponential factor in the Solow model, to the random edges of the economic system, it is not hard to deduce that this factor should appear as the power-law factor of the Albert-Barabási model (interested readers could work out a deeper analysis and write it up). The above analysis also reflects the energy and entropy characteristics of complex networks. Complex-network analysis has been valued by physicists since the beginning of this century, yet there seems never to have been a clear physical picture based on energy and entropy, and existing analysis is basically confined to sparse networks. In fact, complexity only evolves once the economy develops into a dense network: if the entropy-energy criterion is built on network graph theory, it should be the basis of economic-system analysis. I will do some of this analysis later when proving the bipolar evolutionary platform.
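The contrast drawn above, random edges giving a Poisson-like degree distribution while preferential attachment ("increasing returns to scale") produces heavy-tailed hubs, can be sketched in a few lines. This is my own minimal simulation, not a full Albert-Barabási analysis:

```python
import random

random.seed(42)

def random_graph_degrees(n, n_edges):
    # edges drawn uniformly at random: degrees concentrate near the mean (Poisson-like)
    deg = [0] * n
    for _ in range(n_edges):
        a, b = random.randrange(n), random.randrange(n)
        deg[a] += 1
        deg[b] += 1
    return deg

def preferential_attachment_degrees(n, m):
    # Albert-Barabasi-style growth: each new node attaches to ~m existing
    # nodes chosen with probability proportional to their current degree
    deg = [0] * n
    repeated = []             # each node listed once per unit of degree
    targets = set(range(m))   # seed nodes
    for new in range(m, n):
        for t in targets:
            deg[new] += 1
            deg[t] += 1
            repeated.extend([new, t])
        targets = {random.choice(repeated) for _ in range(m)}
    return deg

n = 2000
er = random_graph_degrees(n, 2 * n)          # mean degree 4
ba = preferential_attachment_degrees(n, 2)   # mean degree about 4
print(max(er), max(ba))   # the preferential-attachment hub is far larger
```

With the same mean degree, the random graph's largest node stays close to the average, while preferential attachment grows hubs an order of magnitude larger, the qualitative signature of a power-law tail.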
This understanding of the economic system has in turn influenced my understanding of physical laws, prompting an analogy between economic development and cosmic evolution. The view of cosmic evolution mentioned above first came to me from Wheeler's 1981 lecture at the University of Science and Technology of China, later edited into the book "Physics and Simplicity", which greatly inspired my later physical thinking. On one of Wheeler's transparent slides, a stele carried on a turtle's back reads "There is no granite with physical laws pre-engraved; even the physical laws come from the Big Bang", which is also an illustration in the book. My ideas differ from Wheeler's, however, in that I believe cosmic evolution is not the simplicity he emphasized but, like economic evolution, must move toward complexity. This is also based on the view that the total kinetic energy T and gravitational energy V of the evolving cosmic material system should maintain the trend T ≈ |V|, analogous to the "supply-demand equilibrium" of an economic system.
Wheeler believed that the laws of physics themselves came from the Big Bang. I believe, in addition, that the evolution of the universe is like throwing objects from the earth into the sky: only when the total kinetic energy of all matter approximately balances the total gravitational energy will the universe tend toward complexity.
At this point, given the comparison between the evolution of economic and material systems, I can state the concept of the evolution parameter. Physical analysis should start from the system's total kinetic energy T and interaction energy V. Their sum T + V, the total energy of the system, is the Hamiltonian, a conserved quantity whose time-differential form gives the equations of motion of matter. Their difference T − V is the Lagrangian mentioned above, whose time integral, the action, expresses the principle of least action of material motion. On top of T and V I construct a new concept, the evolution parameter p = ln|T/V|. The regime p > 0 is called the positive state: the system's T is not constrained by V, which includes but is not limited to the gaseous state. The regime p < 0 is the negative state: V constrains T, including but not limited to the solid state. The state p ≈ 0 between the two is called the marginal state, a concept taken from economics that includes but is not limited to the liquid state.
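The definition p = ln|T/V| and its three regimes can be written out directly. The sketch below is my own illustration, with the tolerance around p = 0 an arbitrary choice; the virial circular orbit, for which T = |V|/2, serves as a worked example of the negative state.

```python
import math

# The evolution parameter p = ln|T/V| and its three regimes.
def evolution_parameter(t_kin, v_int):
    return math.log(abs(t_kin / v_int))

def classify(p, tol=0.1):
    # tol is an arbitrary cutoff for "approximately zero"
    if p > tol:
        return "positive (kinetic-dominated, gas-like)"
    if p < -tol:
        return "negative (interaction-bound, solid-like)"
    return "marginal (liquid-like, critical)"

# circular Kepler orbit: the virial theorem gives T = |V|/2, so p = ln(1/2) < 0
print(classify(evolution_parameter(0.5, -1.0)))
# hot dilute gas: T >> |V|, so p > 0
print(classify(evolution_parameter(10.0, -0.1)))
```

Note that the virial case p = ln(1/2) ≈ −0.69 sits close to, but below, the marginal state p ≈ 0, which is the "constant factor difference" discussed in the gravity analysis below.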
If the capital reserves of the demand side of an economic system are regarded as T, and the capacity of products or services offered by the supply side as V, then the directed graph above reflects that T ≈ V is the interior-point solution of normal equilibrium, corresponding to the evolution parameter p ≈ 0, and normal economic development remains in such a marginal state, while the imbalances T > V or T < V appear as corner-point solutions of a shortage economy or a surplus economy. Does the evolution of material systems have the same characteristics? The Lagrangian action of T − V must be minimized. The formation of ordinary matter on earth is an evolution from the positive state to the negative state, a one-way path that finally ends in thermal equilibrium. But my analysis below will point out that the gas-liquid-solid and ferromagnetic phase transitions of material systems also have marginal-state characteristics, and that the evolution of living matter follows a cyclic path and likewise reflects the marginal state. It will also be pointed out below that an economy that achieves equilibrium in its development is comparable to a dynamic living system.
3.2 A new understanding of gravity and phase transition issues based on evolution parameters
At this point, the concepts of the equilibrium state, the synergetic state and the indistinguishable state described in the previous two sections, the entropy-energy criterion with its two types of entropy maximization described in the previous section, and the positive, negative and marginal meanings of the evolution parameter p = ln|T/V| in this section together form the basic concepts of the evolutionary criterion analysis framework that I am trying to build, and thus constitute an analytical perspective different from that of today's physics. In this section I will analyze two very basic physics problems within this framework: the problem of gravity, and the problem of phase transitions based on the electromagnetic force. This will show that the evolutionary criterion framework analyzes a system through its evolution parameters, which is very different from existing physics, which is based entirely on individual interactions.
Let us talk about the gravity problem first. What I want to focus on is that the understanding that the motion of the planets in the solar system satisfies the Virial theorem has become the basis for the current belief in the existence of galactic dark matter, and this view is problematic. Understanding the orbital speeds of the planets in our solar system through the Virial theorem presupposes that all individuals spontaneously form a stable, multi-degree-of-freedom isolated system. As mentioned earlier, the conclusion of the Virial theorem seems close to the marginal state of the evolution parameter p = ln|T/V| ≈ 0, differing only by a constant factor. But then why do the planets of the solar system follow the relationship v ∝ r^(−1/2) between speed and distance, while in ordinary galaxies, especially spiral galaxies, the galaxy rotation curves show the orbital speeds of all stars converging to a common value? This shows that starting from the Virial theorem and starting from the evolution parameters lead to different understandings of the nature of gravity.
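The contrast between the Keplerian v ∝ r^(−1/2) falloff and the flat rotation curves observed in spiral galaxies can be made concrete with standard formulas. The sketch below uses the point-mass case (the solar system) and shows the mass profile a flat curve would imply, which is the conventional dark-matter inference; the numerical constants are standard values.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def v_circular(m_central, r):
    # circular speed around a dominant central mass: v = sqrt(G*M/r), i.e. v ~ r^(-1/2)
    return math.sqrt(G * m_central / r)

v_1au = v_circular(M_SUN, AU)        # Earth: about 29.8 km/s
v_4au = v_circular(M_SUN, 4 * AU)    # half of that, since sqrt(4) = 2
print(v_1au, v_4au)

def enclosed_mass_flat(v_flat, r):
    # a flat rotation curve v(r) = const instead requires M(r) = v^2 * r / G,
    # i.e. enclosed mass growing linearly with radius
    return v_flat**2 * r / G
```

Quadrupling the orbital radius halves the Keplerian speed, exactly what the solar planets show; a galaxy whose stars all orbit at the same speed requires the linearly growing M(r) of the second function, which is where the dark-matter hypothesis enters.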
From the perspective of the evolution parameters, we first need a different understanding of what the system means. As mentioned above, all stars in a galaxy are in a synergetic state. The earliest stars of a spiral galaxy form at its center and are then slowly ejected outward to form the spiral arms. In this picture the movement, or evolution, of the stars is driven by entropy rather than by gravitational energy. Each star is an individual of the system, and entropic forces draw these individuals into a synergetic state: the outermost stars of the galaxy are in a "wanting to leave yet unable to leave" state, which reflects that the outcome of the system's evolution is the marginal state p = 0+ of the evolution parameter, similar to the super-energy threshold state of protons in stars. This is also a slowly driven process, similar to the picture of self-organized criticality mentioned above. In the early stage of a spiral galaxy's formation, groups of stellar matter may leave the galaxy very quickly to form Population II, but later the process slows down, and the galaxy rotation curve reflects the marginal-state picture of this later stage.
However, we are more concerned with the planetary system of the solar system. As mentioned earlier, it originated at the onset of the sun's nuclear fusion, when other interstellar matter collided with the sun. Such an extremely accidental and rare collision is different from the critical state of a galaxy. First, the planetary system formed from the ejected matter is an open system, not a critical system in the marginal state. Gamow described in "One Two Three... Infinity" that more than 99% of the matter involved was ejected during the formation of the planetary system, and its remnants formed the eight planets. From the least-action analysis of the Lagrangian T − V mentioned earlier: if T − V had been larger, more matter would have been ejected from the sun and the system would have evolved into a binary star; if T − V had been smaller, it would have remained a single star. Why did the solar system evolve into the planetary system we see now? This must be very special.
What is special about the solar planetary system is that, since more than 99% of the matter left the solar system and carried away a great deal of kinetic energy, the total kinetic energy of the eight planets formed from the remnants dropped sharply, so the gravitational energy came to outweigh the kinetic energy, putting the system in the negative state of the evolution parameter, p < 0. Even though this seems to agree with the velocity description given by the virial theorem, it is not a consequence of the virial theorem: as noted earlier, the virial theorem applies to an isolated system that evolves spontaneously from its individuals, not to an open system. The formation of the solar planets, however, took place in an open system, or more precisely a dissipative system. In fact, Gamow also analyzed in detail in the book why the particles thrown off the Sun's periphery did not form a single large planet, and further, why the planets formed at certain separations, with each planet's orbital radius almost twice that of the previous one.

Gamow used a special term, the "bean necklace", to explain the formation of the solar planetary system: particles orbiting the Sun formed a necklace of bean-shaped clumps, as shown in the figure above. Planets such as Mercury, Venus and the Earth thus actually condensed out of the various "bean necklace" subsystems, condensed-matter systems formed by energy-driven contraction. If the mass of a given bean is too low, it can only form a structure similar to Saturn's rings and cannot condense into a single body. This picture is also what inspired my concept of evolution parameters: because so much matter was thrown away, the kinetic energy of each "bean necklace" subsystem fell below the gravitational energy binding the necklace circling the Sun, so the beans condensed into planets that are separated from each other and do not collide. This is what distinguishes them from the stars in a galaxy, which remain "close and distant".
In fact, from the indistinguishable state at the beginning of the universe to the stable synergetic state of galactic stellar evolution, the evolution parameter goes from the positive state p > 0 to the marginal state p = 0+ when the galaxy is at the super-energy threshold, reflecting the entropic driving of the early universe. But since most of the light elements such as H and He in the solar system were thrown away together with their kinetic energy, the remaining planetary system is in the negative state p < 0. The "bean necklace" picture described above clearly cannot be deduced from the virial theorem; one can only say that the planets formed by the contraction of these bean subsystems are the result of the joint driving of entropic force and energy. The system must have evolved so as to maximize the combined measure of entropy and energy, but the remaining planetary system ended in the negative state p < 0. This ensures that the planets no longer escape the Sun's gravity, leading to the planetary velocities v ∝ r^(−1/2), which reflect the evolutionary behavior of the relative motion of the different "bean necklace" subsystems and have nothing to do with the virial theorem.
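The scaling v ∝ r^(−1/2) for bodies bound to a dominant central mass can be checked directly against the planets. A minimal numerical sketch, using rounded textbook values for the mean orbital radii and speeds (the data table is my own illustration, not from the text):

```python
import math

# Mean orbital radius (AU) and mean orbital speed (km/s) of the eight planets
# (rounded textbook values; used here only to illustrate v ∝ r^(-1/2)).
planets = {
    "Mercury": (0.387, 47.4),
    "Venus":   (0.723, 35.0),
    "Earth":   (1.000, 29.8),
    "Mars":    (1.524, 24.1),
    "Jupiter": (5.203, 13.1),
    "Saturn":  (9.537,  9.7),
    "Uranus":  (19.19,  6.8),
    "Neptune": (30.07,  5.4),
}

# If v ∝ r^(-1/2), then v * sqrt(r) should be the same constant for every
# planet (it is about 29.8 km/s, the Earth's orbital speed at r = 1 AU).
for name, (r_au, v_kms) in planets.items():
    print(f"{name:8s} v*sqrt(r) = {v_kms * math.sqrt(r_au):.1f} km/s")
```

All eight products come out within about 1.5% of each other, so the rotation profile of the planetary system is indeed sharply different from the flat rotation curves observed for galaxies.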
Next, let me describe my physical understanding of gravitational theory from Newton to Einstein. Newton's discovery of the law of universal gravitation is regarded as a model of the zero-to-one leap in modern science, but such a theory built on absolute space-time is also held to be only an approximate result, with Einstein's general relativity, built on curved space-time, giving the more accurate description of gravity. This is also a source of Popper's falsificationism, and it reflects a subjective evaluation of the perfection of theories: the more accurately a falsifiable theory describes phenomena, the more readily it is accepted. But the question I want to raise here is: apart from the theory of gravity, is there a second case showing that science develops toward ever more accurate theoretical descriptions? There seems to be none. How, then, should we understand these two theories of gravity, based on different conceptions of space-time? As descriptions of gravity with different mathematical precision, do the two reflect an improvement in our cognition, or are they simply different states of matter presented by the evolution of the universe?
I think that if we follow Wheeler in holding that the laws of physics began with the Big Bang, then the essence of gravity should carry both entropy-driven and energy-driven meanings. This means that at the beginning of the universe, "God" in effect set the evolution parameter, or some more precise parameter, to a particular value so as to make the evolution of the universe more complicated. The very early universe thus reflects the indistinguishability of primordial angular momentum: fully homogeneous particles at every spatial point. As mentioned earlier, this is similar to a laser at its highest energy in the pulsed state, where photons that would otherwise be emitted over a span of time are condensed to a single instant and so become indistinguishable. The process of galaxy formation is then that, after the phase transition of primordial nucleosynthesis, the universe evolved into a spatially synergetic state, similar to the continuous-wave state a laser system enters after its energy is reduced, emitting light evenly over time as a synergetic state.
However, the Earth in the solar system, our home, entered the negative state of the evolution parameter p < 0 ahead of time because of the "bean necklace" effect of the solar collision described above. We should therefore understand Newton's law of universal gravitation this way: it is a physical law that can only appear after the evolution parameter p has stably entered the negative state, that is, a law that appears only under the stronger driving of energy. Furthermore, when the value of p decreases further, that is, when the gravitational energy |V| is much larger than the kinetic energy T, the gravitational system exhibits the corrections of Einstein's general relativity. This is the counterpart, on the energy-driven side, of the fully indistinguishable and synergetic states that entropic driving presents when p is positive. General relativity may therefore only reflect a correction to the gravitation between matter when the evolution parameter p is strongly negative.
The above physical understanding may completely overturn our understanding of the early universe: the initial state of the universe should have nothing to do with general relativity. I personally believe that general relativity may only be able to describe systems such as neutron stars or black holes, where the gravitational energy is much higher than the kinetic energy, and is not an effective description of the early universe, because p > 0 at the beginning of the universe means the kinetic energy was much higher than the gravitational energy. Furthermore, the evolution of the universe reflects the bifurcation of the material system, from a single fully homogeneous state to galaxies and then to stars, accompanied by the evolution of the Fermi energy. As mentioned in the previous article, the Fermi energy reflects the effect of entropic force, and this may be the reason dark matter is believed to exist. Of course, the above is just my personal understanding of gravity. The entropic theory of gravity proposed by Verlinde in 2010 also points in this direction, although that theoretical description is very different from my own understanding.
The above description of primordial nucleosynthesis and stellar nucleosynthesis in the evolution of the universe has the evolution parameter p change from positive to marginal and then to negative. It reminds us that the formation of the Earth likewise went through two phase transitions, gas-liquid and liquid-solid, again an evolution of p from positive to negative. This makes me want to re-examine the problem of physical phase transitions from the perspective of evolution parameters. Before analyzing phase transitions in detail, I would like to discuss two typical examples: first, re-understanding the liquid state through the physics of laminar and turbulent flow; second, the mismatch, already mentioned in the previous article, between statistical-physics analysis based on the Ising model and real ferromagnetic phase transitions, which requires me to re-analyze the ferromagnetic transition. The following are some of the early thoughts from which I constructed the concept of evolution parameters.
The first thought dates from just after I graduated from university, when I read an influential article by Academician Hao Bailin in Progress in Physics, "Bifurcation, Chaos, Strange Attractors, Turbulence and Others". The article mentioned that it is controversial whether the Navier-Stokes (NS) equations can describe turbulence. This aroused my great curiosity, so I searched a great deal of the literature on flow around a cylinder and found that numerical simulations based on the NS equations did not match the experimental values. What was the problem? By tracing the original literature on this question, I found an early quantum-mechanics paper by E. Madelung, Z. Physik 40 (1927) 322, whose title translates as "Quantum Theory in Hydrodynamic Form" (the mathematical derivation can be found at https://mp.weixin.qq.com/s/QjA24L0qwcfh4CbkuLnzpA). After reading it, I suddenly realized the reason: the NS equations are just the real and imaginary parts of the Schrödinger equation written out separately. An exact description of a liquid should therefore be the Schrödinger equation, in which the real and imaginary parts cannot be separated.
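Madelung's result can be stated compactly: writing the wave function in polar form and substituting it into the Schrödinger equation splits it into two coupled real equations, a continuity equation and a hydrodynamic (Hamilton-Jacobi) equation with an extra "quantum potential" term. A standard sketch of the 1927 derivation:

```latex
\psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
% imaginary part -> continuity equation for the "fluid" density rho:
\partial_t \rho + \nabla\cdot(\rho\,\mathbf{v}) = 0, \qquad \mathbf{v} = \frac{\nabla S}{m}
% real part -> Hamilton-Jacobi equation with a quantum potential Q:
\partial_t S + \frac{(\nabla S)^2}{2m} + V + Q = 0, \qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}
```

The quantum potential Q couples the two real equations, which is the precise sense in which the "real and imaginary parts cannot be separated": dropping Q reduces the pair to the classical equations of an inviscid fluid.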
For this reason, the NS equations underlying the Reynolds number are not simply classical equations. They are essentially quasi-quantum-mechanical, and are therefore valid only for fluid motion without laminar-turbulent structure, when the flow velocity is very low and the evolution parameter p < 0. If the temperature of the system is too high and the individual kinetic energy T is very large, the system belongs to the gaseous state with p > 0 and must be described by statistical physics. The complexity of the laminar and turbulent problems therefore reflects that the system is in a marginal state with evolution parameter p ≈ 0. Combined with my earlier analysis of the path-integral representation of quantum mechanics, the degenerate and non-degenerate Schrödinger equations of the ground state correspond respectively to entropy maximization under two different constraints. Starting from the concept of evolution parameters, the positive, negative and marginal states thus seem to correspond roughly to the classification of matter into gaseous, solid and liquid states. The liquid state should be a certain marginal state, accurately describable only by the Schrödinger equation with its real and imaginary parts unseparated, and laminar and turbulent flows may reflect the characteristics of quantum degenerate subsystems.
At this point, I would like to briefly mention my early attempts to revise the theory of gravity. I feel that general relativity may not be correct, because conservation of energy is no longer guaranteed in Riemannian space. The theory of gravity may also need a revision in the spirit of the Schrödinger equation rather than one based on curved space-time. The quantization of the gravitational field that I envision is therefore different: it should replace the Coulomb potential with the gravitational potential to obtain a mathematical form analogous to the Schrödinger equation. Its first-order approximation would be Newton's equation of universal gravitation, while higher-order approximations should account more accurately for strong-gravity problems such as the precession of Mercury. Unfortunately, my mathematics is not good enough to produce an ideal result. I find it difficult to accept that the early universe should be described by general relativity, because no observational data show the evolution of the universe as a transition from curved space-time to flat.
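To make the "replace the Coulomb potential with the gravitational potential" idea concrete: in the hydrogen-like solution of the Schrödinger equation, substituting the coupling e²/4πε₀ → GMm turns the Bohr radius a₀ = ħ²/(m · e²/4πε₀) into the gravitational analogue a_g = ħ²/(GMm²). A minimal sketch with standard constants (the two-neutron example is my own illustration, not from the text):

```python
# Gravitational analogue of the Bohr radius: in the hydrogen-like solution,
# replace the Coulomb coupling e^2/(4*pi*eps0) by G*M*m.  For a test mass m
# bound to a mass M this gives a_g = hbar^2 / (G * M * m^2).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

# Coulomb case (hydrogen): a_0 = hbar^2 / (m_e * e^2/(4*pi*eps0))
m_e  = 9.1093837e-31     # electron mass, kg
k_e2 = 2.30708e-28       # e^2/(4*pi*eps0), J*m
a_0  = hbar**2 / (m_e * k_e2)

# Gravitational case: one neutron "orbiting" another, bound only by gravity
m_n  = 1.67492750e-27    # neutron mass, kg
a_g  = hbar**2 / (G * m_n**3)   # M = m = m_n

print(f"Bohr radius a_0        = {a_0:.3e} m")
print(f"gravitational analogue = {a_g:.3e} m")
```

The gravitational "Bohr radius" for two neutrons comes out around 10^22 m, larger than a galaxy, which shows why any quantum effect of such a Schrödinger-type gravity would only matter where gravitational binding is enormously stronger, consistent with the text's focus on strong-gravity systems.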
The second thought dates from my doctoral examination in magnetism at the Institute of Physics of the Chinese Academy of Sciences in the early 1990s, where one option on the exam was to report one's research interests. My preparation grew out of a question about magnetism I had long held: any magnetic material shows magnetism only after being magnetized by an external field. Yet when statistical-physics textbooks discuss the ferromagnetic phase transition, they base it on the classical Ising model or the quantum Heisenberg model, both of which exhibit spontaneous magnetization. This is inconsistent with my childhood understanding of magnetism: if you put a magnet into a fire and then take it out, its magnetism disappears, and the material does not spontaneously re-magnetize after cooling down. I therefore consulted a large body of literature on early specific-heat measurements of magnetic materials and came to believe that natural magnetic materials should have one transition point where paramagnetism gives way to the formation of magnetic domains, equivalent to a gas-liquid transition, and, as the temperature drops and the domains grow, a second transition point where the domain scale saturates, equivalent to a liquid-solid transition. The ferromagnetic Curie point in the usual sense lies between the two and is just a manifestation of the hysteresis effect.
The materials I used to prepare for that examination no longer exist, but I recently found a Chinese paper online: Xu Shaoyan et al., Acta Physica Sinica 55 (2006) 2529, which holds that in the transition of the metals Fe, Co and Ni from the ferromagnetic to the paramagnetic state there exist, in addition to the usual Curie temperature, a paramagnetic Curie point and a ferromagnetic Curie point. Ferromagnetic materials really do have three Curie points, consistent with my analysis above. This also shows that the concepts of positive, negative and marginal states formed from the evolution parameter are of universal significance. There are thus actually two types of phase transitions in many-body material systems: from the positive state to the marginal state, and from the marginal state to the negative state. Current physics research on gas-liquid-solid transitions, however, is very rough, simply classifying them as first-order phase transitions with latent heat.
Furthermore, the latent heats in the above evolutionary process, that is, the heats of condensation, solidification or sublimation, should physically reflect a steady-state characteristic of the system: if a material system sits at such a phase-transition temperature, its temperature remains unchanged, within limits, when external heat is added to or removed from it. This should have the same physical cause as the need for living organisms to maintain a certain body temperature. I will therefore combine the evolution parameter with the concepts of equilibrium state, synergetic state and indistinguishable state established in the previous article, and give a physical description different from the first-order and second-order phase transitions, or Landau symmetry breaking, of existing textbooks: all physical phase transitions can be uniformly described by the following two types, positive-marginal phase transitions and marginal-negative phase transitions.
A simple example from iron smelting illustrates the positive-marginal phase transition. When I was in elementary school, there was an iron foundry next to the school where the scrap iron we students handed in was recycled into cast-iron products. We children would lie on the windowsill and watch the masters add carbon to the scrap iron, charge it into the furnace, and pour the molten iron into castings. After watching many times, I more than once heard an experienced master say: it's over, it didn't burn through this time; we need to add carbon and burn it again. I did not understand the reason as a child, but later I looked up books on iron smelting: ordinary iron easily oxidizes and rusts, meaning that Fe combined with O atoms has lower energy after oxidation. Iron smelting, however, is a reduction process, the reverse of oxidation: reducing rusty iron to iron crystals means the metallic bonds between iron atoms have higher binding energy. Why is this reverse process valid, solidifying the substance into a higher-energy state?
This reflects the effect of entropy in the iron-making process. The Fe atoms are linked by metallic bonds; although the energy of the system is higher, the energies of the metallic bonds converge, forming an iso-energetic synergetic state in momentum space in which the energies of all Fe atoms converge, so the information entropy of the system is larger. This is exactly the result of entropy dominance in a positive-state system with p > 0. For the Fe atoms to form iso-energetic ordered entropy means that in the high-temperature smelting process all Fe atoms must first be freed from their chemical bonds and become iso-energetic free ions before metallic bonds can form between them. If the temperature is not high enough, the O and Fe atoms of the iron-oxide molecules in the melt are not completely separated; this is why the master said it had not burned through and carbon had to be added for another firing. The first type, the positive-marginal phase transition, thus reflects that only under extremely high temperatures, after the individuals of the material system are fully separated, can the entropic force order them into a crystalline arrangement.
The second type is the marginal-negative phase transition. Its physical picture is actually closer to the current physical understanding of continuous phase transitions. It is the result not of entropic force but of energy dominance: although the system also evolves into a uniformly distributed synergetic state, it is the product with the lowest chemical-bond binding energy in coordinate space. Take the ferromagnetic transition discussed above as an example. At extremely high temperatures the individuals are in a completely thermally disordered state; the passage from the completely independent paramagnetism of individuals to the initial formation of magnetic domains should correspond to the positive-marginal transition described above, with its latent heat. The subsequent growth of the domains from small to large on cooling, however, matches the continuous-transition picture of current physics: short-range-ordered domain clusters keep expanding as the temperature falls. In the language of the renormalization group, adjacent magnetic moments merge in Kadanoff fashion into larger and larger domains, but their merging energy is insufficient, so they never form a single integrated spontaneous magnetization and stop at a certain domain scale. This is why ferromagnetic materials show no magnetism unless magnetized by an external field.
The synergetic state formed in the marginal-negative phase transition lies far from the mutual collisions of the positive state; it is a solidified structure built on the interaction energy of subsystem clusters, so the material structure has path dependence. No two snowflakes falling from the winter sky are the same. The reason clearly lies in the path each snowflake takes as it forms: each experiences a different external environment of temperature and humidity, so each shows subtle differences. Similarly, ice has a variety of crystal structures. Since the 1980s I have followed Phys. Rev. B, and almost every year new crystal structures of ice are reported, evidently arising from different paths in the experimental freezing process. The various crystals formed in the Earth's evolution tell the same story: carbon can form graphite or diamond, and the various alloys of metals such as iron or copper all reflect metallic-bond structures generated along different paths.
Furthermore, the two synergetic effects reflected in the two types of phase transitions above, positive-marginal and marginal-negative, both show the equal-energy, equal-probability maximum-entropy principle driving system evolution, but in momentum space and coordinate space respectively. This is exactly what Jaynes described: entropy maximization follows the universal "constraint" that the probabilities sum to 1, yielding a uniform distribution. The positive-marginal transition reflects the large-scale effect driven by entropic force: at high temperature, the synergy of a system dominated by separated individuals gathers similar individuals together, so the evolution from the positive state p > 0 to the marginal state p ≈ 0 is dominated by entropic force. The marginal-negative transition, the evolution from p ≈ 0 to the negative state p < 0, reflects that the total kinetic energy T of the system's individuals is insufficient, so it is dominated by the interaction energy V of the system: the consequence of energy-dominated evolution, which exhibits the path dependence of the evolutionary process.
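Jaynes's statement can be checked numerically: among all distributions subject only to the constraint that the probabilities sum to 1, the uniform distribution maximizes the Shannon entropy. A minimal sketch (the toy distributions are my own illustration, not from the text):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i * log(p_i), natural logarithm."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 4
uniform = [1 / n] * n            # the maximum-entropy distribution for n outcomes
skewed  = [0.7, 0.1, 0.1, 0.1]   # any other distribution has strictly lower entropy

H_uniform = shannon_entropy(uniform)   # equals log(n) = log(4)
H_skewed  = shannon_entropy(skewed)

print(f"H(uniform) = {H_uniform:.4f}, H(skewed) = {H_skewed:.4f}")
```

The uniform case gives exactly log 4 ≈ 1.386, the maximum for four outcomes; adding further constraints (fixed mean energy, for instance) is what tilts the maximizer away from uniformity, which is the sense of the "two constraints" distinction above.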
3.3 Cyclic and bifurcated paths under the path dependence of the evolution of life on Earth
The above description of the path dependence of phase transitions is a more accurate account of the complexity of system evolution, different from existing phase-transition theory, which is usually built on the concept of order parameters. Furthermore, the formation of the Earth's minerals and of life on Earth was also a process from high temperature to low, with the evolution parameter changing from positive to negative. Does this also show path dependence? Like any physical phase transition in a system of atoms and molecules, the formation of the Earth's inorganic matter and of living organic matter both arise from the electromagnetic force. But the former is mainly the electromagnetic force between atoms, with a strength on the order of eV, while the electromagnetic interactions between biological molecules come mainly from dehydration condensation and hydrogen bonding, on the same order as the 1/40 eV of the Earth's normal temperatures. This difference in energy leads to essential differences in the path dependence of the evolution of the two kinds of matter. Before the specific analysis, let me explain that the concept of path dependence grew out of my experience studying biology.
When I was studying for my master's degree at Beijing Normal University, I studied biology in order to understand non-equilibrium processes. Besides catching up on some molecular biology, I paid special attention to population biology. Some organisms will sacrifice themselves for the benefit of the group: for example, as a naughty child I once poked a hornet's nest and was stung so badly I could not open my eyes; later I heard from the adults that a hornet dies soon after it stings. That is easy to understand. But population biology contains many cases that are harder to grasp: some birds, for example, actively kill some of their own offspring. The biological explanation is that the inheritance of genes is not equivalent to the survival of individual lives; when the ecological environment cannot support too many such organisms, they will kill some of their offspring. This is what formed my concept of path dependence: the transmission of genes from generation to generation is the path along which life is inherited, not the survival of each specific individual life.
Physics has not established the concept of path dependence. I later learned that the concept was not formed in economics until the 1980s, through research in new institutional economics. With the concept of path dependence, the idea of the two types of phase transitions above can be extended to the analysis of the evolution of the Earth's matter and of life, with the following conclusion: the path dependence of the two types of phase transitions may be crucial to our understanding of the formation of all terrestrial materials. I will therefore first analyze the one-way path along which the Earth's inorganic minerals formed, and then present my understanding of life: a more accurate description of the swing mechanism of life's evolution given in the previous article requires constructing the two concepts of the cyclic path and the bifurcation path.
Let me first briefly explain that ordinary inorganic matter on Earth formed along a one-way path. As mentioned above, the Earth's matter came from the ejection of the products of nuclear fusion in the Sun's early days. Most of the ejected matter left the solar system, carrying away large amounts of light nuclear matter such as H and He, so the Earth's matter is dominated by metals (astrophysics calls all elements heavier than He "metals"), and the metal fraction of the Earth is much higher than that of the Sun. Furthermore, the heaviest elements settled into the Earth's core under gravity, where they still generate energy through nuclear fission, allowing the core to radiate heat continuously. It is thus the ejection effect and the sedimentation effect that give the Earth's surface its present material composition. But why is the surface we observe neither a uniform mixture in thermal equilibrium nor a uniformly settled state, but instead a colorful geological world of varied minerals? Is this, too, due to the maximization of thermodynamic entropy?
The answer is yes. Thermodynamic entropy maximization leads to the two types of phase transitions and forms a one-way path, making the Earth's matter non-uniform and generating various minerals. As mentioned earlier, at the beginning of the Earth's formation it was in the extremely high temperature state of nuclear-fusion ejecta, with all substances gaseous. The entropic force of the positive-marginal transition gathered similar substances together, creating non-uniformity of the Earth's matter on larger scales: this is why a region becomes rich in a particular element and forms a mineral deposit. Why, then, do these inorganic minerals show diversity? Iron ore, for example, usually appears as magnetite, hematite, limonite and so on. As with the different crystal structures of snowflakes and ice above, this reflects the energy-dominated path of the marginal-negative transition as the evolution parameter goes from 0 to negative: the formation of the Earth's inorganic matter comes from the one-way path produced by the two types of phase transitions as the high temperature fell.
Next we can discuss the more important cyclic path of life's evolution. There are two backgrounds to my proposing this concept. The first is to explain why the Earth evolved living matter, which differs from organic matter in the usual sense. The four classes of biological molecules are carbohydrates, lipids, nucleic acids and proteins. Besides containing only the special elements carbon, hydrogen, oxygen, nitrogen, phosphorus and sulfur, they differ from inorganic substances in forming what Schrödinger called the aperiodic crystal, whose "unit cell" is not a single atom but a biological molecule. As mentioned above, the chemical-bond energies between them are at the 1/40 eV level of the Earth's normal temperature. It is therefore reasonable to suppose that life formed within the periodic temperature cycle of the Earth's rotation. The formation of living matter and of the Earth's other inorganic matter should be physically homologous, both products of the two types of phase transitions; it is just that inorganic matter followed a one-way path, while living matter arose from the cyclic path of the Earth's periodically changing temperature environment.
The second background is my personal view that, in physics, the understanding of the origin of irreversible processes must go from individuals to systems: it is the entropic force of the system that drives irreversibility. Physics has not yet established an accurate physical understanding of irreversible phenomena. When Prigogine created the theory of dissipative structures he was already aware that irreversible processes were related to chemical reactions, but the theory offered only examples from Bénard convection to the BZ reaction, followed by Prigogine's late-life attention to the arrow of time. This shows that the understanding of irreversibility has not changed since Boltzmann's era: it is still regarded as a phenomenon of individuals, such as molecular collisions or chemical reactions, and is not linked to a systemic description of the phase-transition mechanism under the action of entropy. The concept of the cyclic path must therefore be further linked to the evolution parameter: that is, the DNA→RNA→protein process of the central dogma of molecular biology, the irreversibility of the many-body system of biological molecules, must be understood from the system perspective of total kinetic energy and total interaction energy.
The scaling mechanism and swing mechanism mentioned earlier mainly reflect some of my early thoughts, namely dividing the steady state of life into two physical understandings, embodied respectively in water-based and amino-based biochemical reactions. The latter involves the concept of synergetic quantum tunneling to be analyzed in the next section, referring specifically to the energy sources of living organisms, such as the ability of animals to move and the need of plants to grow against the Earth's gravity. In this section I analyze only the basic concepts of the two paths of life's evolution: the cyclic path based on the water base and the bifurcation path of the amino base. I touched on the concepts of the water base and the amino base in the previous section without explaining them thoroughly. What I want to explain in detail below are the physical and biochemical meanings of the water base, which involve the cyclic path; the energy meaning of the amino base, which involves the bifurcation path based on quantum tunneling, I will discuss in detail later.
Let us first consider the physical meaning of the water base. Its macroscopic meaning is that many factors, such as the Sun's luminosity, the Sun-Earth distance, and the Earth's gravity, jointly determine the abundance of water on the Earth's surface. Consider that the most abundant gas in the Earth's atmosphere is nitrogen (N2). If the Sun-Earth distance were larger or the Sun's luminosity weaker, the Earth's surface temperature might be 200 degrees lower. Water could not then appear on the surface as rivers, lakes, seas, rain, ice, and snow, and the Earth's surface might instead display a cycle of liquid nitrogen evaporating and condensing. The microscopic meaning is that the normal surface temperature of about 20°C reflects the large heat capacity of liquid water, which forms the basis of all biochemical reactions: the biochemical reactions of biological molecules are most active in this energy range of about 1/40 eV.
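The ~1/40 eV figure quoted above is simply the thermal energy scale k_B·T at Earth-surface temperatures. The following minimal check is my own illustration, not part of the original argument:

```python
# Thermal energy scale k_B * T at Earth-surface temperatures,
# checking the ~1/40 eV figure quoted above.
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

for t_celsius in (0, 20, 37):
    kT = K_B_EV * (t_celsius + 273.15)
    print(f"{t_celsius:>3} degC: kT = {kT:.4f} eV  (about 1/{1/kT:.0f} eV)")
```

At 20°C this gives kT ≈ 0.025 eV, i.e. about 1/40 eV, confirming the energy scale of biochemistry stated above.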
Now the biochemical meaning of the water base; its physical basis was discussed above. The degeneracy effect of quantum mechanics causes the water molecule to form an electric dipole moment: the angle between the two O-H bonds is 104.45°, and the molecule readily dissociates into a hydrogen ion and a hydroxide ion, H2O ⇌ H+ + OH−. This underlies the structure and biochemistry of all biological molecules. For example, carbohydrates typically have the molecular formula Cn(H2O)n, with a hydrogen-to-oxygen ratio of 2:1, and in the dehydration condensation of amino acids into proteins, the amino and carboxyl groups embody the hydrogen and hydroxide ions. This also reflects why water is a necessity for any living molecule: 50%-70% of the human body is water, and a person can survive ten days without food but not three days without water. Furthermore, if the above description is linked to the aforementioned evolution parameter p, a deeper meaning of system evolution emerges, and it must be connected to the concept of path dependence discussed above.
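The dissociation equilibrium H2O ⇌ H+ + OH− quoted above can be quantified by the standard ion product of water, Kw ≈ 1.0e-14 (mol/L)^2 at 25°C. The sketch below is my addition, standard textbook chemistry rather than the author's derivation; it recovers the neutral pH of 7:

```python
import math

# Autoionization of water: H2O <=> H+ + OH-.
# At 25 degC the ion product is Kw = [H+][OH-] ~ 1.0e-14 (mol/L)^2.
KW = 1.0e-14

# In pure water the two ion concentrations are equal:
h_plus = math.sqrt(KW)       # ~1e-7 mol/L
ph = -math.log10(h_plus)     # neutral pH

print(f"[H+] = {h_plus:.1e} mol/L, pH = {ph:.1f}")
```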
The path dependence of life evolution is the core concept of the evolutionary platform, and it reflects two points. First, the two types of phase transition, positive-marginal and marginal-negative, have condensed and merged into one. This means that the binding energy between organic molecules lies in the aforementioned range of about 1/40 eV at normal Earth temperature, far below the several-eV level of inorganic matter. More importantly, the fusion of the two types of phase transition must be accompanied by biochemical reactions: the energy of marginal-negative transitions must be lower than that of positive-marginal transitions, but closer to the energy threshold of biochemical reactions, so the heat generated raises the temperature of the system and merges the two types of transition. Second, this leads to the central dogma of biology, the irreversible process DNA→RNA→protein. This process must first be cyclic, that is, the above biochemical reactions must be repeated after protein degradation. At the same time, across the broader process of life evolution, it must also bifurcate; the bifurcation path is inseparable from the cyclic path, for otherwise ever more complex and advanced life could not evolve.
Furthermore, the formation of the concept of the cyclic path reflects the concept of the evolution parameter, which I formed by analogy between economic systems and material systems. The construction of the evolution-parameter concept actually originated in my thinking about biology, but it became clearer as I continued to think about economic questions. For a long time before constructing it, I thought that since the swing mechanism came from the biochemical reactions of life processes, it should be related to a biochemical reaction environment with super-threshold energy. In the early years, when Chinese scientists artificially synthesized bovine insulin, they had to divide into three research groups to generate the A chain and the B chain separately and then link the two chains; later it became much easier to synthesize molecules far more complex than insulin. The physical basis of such technological progress is evidently that, through repeated attempts, people finally found a more suitable biochemical reaction environment. I originally thought this environment was embodied in a super-energy threshold, just as solar nuclear fusion occurs in a super-energy-threshold environment, whereas artificial nuclear fusion remains difficult to control in an environment below the fusion threshold.
But after forming the concept of the evolution parameter, I realized that this is not a biochemical reaction environment under a super-energy threshold. A super-energy threshold only reflects the positive state of the evolution parameter above the marginal state, that is, the state p>0. Biochemical reactions must instead be embodied in a swing across the two-pole phase transition, between the positive system driven by entropy force with p>0 and the negative system driven by energy with p<0, crossing the marginal state p=0, so as to embody the irreversibility of the central dogma of molecular biology. For this reason, the cyclic path of the evolution of life on Earth differs from the unidirectional path of minerals and of stellar evolution from youth to old age: the unidirectional path only evolves from the state p>0 to the state p<0 and never looks back. The death of a star is merely an evolution within the p>0 super-energy-threshold regime, from emitting higher-energy blue light to emitting lower-energy yellow light. The swing mechanism of the biochemical reactions underlying life processes, however, must manifest as a dissipative system in constant metabolism, so that along the cyclic path the evolution parameter p continually changes sign between positive and negative.
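The contrast between the two paths can be caricatured numerically. The sketch below is my own toy illustration, not the author's model: a cyclic path keeps recrossing the marginal state p = 0, while a unidirectional path crosses it at most once and never returns.

```python
import math

def sign_changes(series):
    """Count how often consecutive values have opposite signs (crossings of 0)."""
    return sum(1 for a, b in zip(series, series[1:]) if a * b < 0)

ts = [i * 0.1 for i in range(200)]

# Cyclic path: the evolution parameter p keeps swinging between p>0 and p<0.
cyclic = [math.sin(t) for t in ts]

# Unidirectional path: p declines from p>0 to p<0 and never looks back.
one_way = [2.0 - 0.3 * t for t in ts]

print("cyclic path crossings of p=0: ", sign_changes(cyclic))   # many
print("one-way path crossings of p=0:", sign_changes(one_way))  # exactly one
```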
Now the bifurcation path. The evolution of life must also go from simple to complex; that is, the molecular chain structures of DNA and RNA must have evolved from short to long, although the process need not be continuous. When I later studied synergetics, I therefore felt that this should occur over a slow time span, reflecting some evolutionary process of self-organized criticality. On the one hand, this process must be related to the inheritance and mutation of certain cell genes; on the other hand, it must also be related to the development of the embryo. This is the idea of the bifurcation path: it cannot concern only the survival of a single organism, but the inheritance of genetic material. But what is its physical picture? Is it reflected only in the formation of sperm and egg, or in embryonic development, or is it also related to the evolutionary tree formed by all life on Earth? I never found the right cognitive perspective, and these concepts of cycles and bifurcations based on path dependence had to be "put on hold" in my mind until I learned of the seesaw model in 2013.
The biggest inspiration the seesaw model gave me is that the separation of physical pictures, from the cyclic path to the bifurcation path, should occur at the embryonic stem cell stage. In fact, the swing mechanism I proposed in my early years only reflects the cell-fate-maintenance mechanism of the seesaw model and manifests only as a cyclic path, which exists in both stem cells and ordinary cells. Only embryonic stem cells may have bifurcation paths. This is not the familiar picture in which biological evolution comes from genetic variation of local individual molecules; rather, it reflects that the evolution of life comes from bifurcations in the expression of different genomes.
Therefore, there are two physical understandings of the two inducing forces A and B and the pluripotent state given by the seesaw model in the figure. The first understanding: I think the seesaw model itself also intends this meaning, namely that the state of any living body comes from a physical energy drive. Since it involves the reprogramming of mesoderm genes and ectoderm genes, this may be reflected in the evolutionary process of the biochemical reactions of a larger-scale DNA genome. There are then three processes: a dominant inducing force A reflects an evolutionary process driven by one energy; a dominant inducing force B is another energy-driven evolutionary process; and the seesaw balance of A and B shows both energies trying to dominate, forming a state of mutual inhibition. All three states may thus exist. But on careful consideration, such an understanding seems to imply a certain contingency in the evolutionary process of life, which may not be correct.
The above thinking about contingency fits the usual understanding of inheritance and variation, which may not be a problem. A gene mutation is a sudden, permanent change in a section of DNA, and it is considered the "driving force" of evolution: natural selection eliminates unfavorable mutations, while favorable ones are inherited. Gene mutations in ordinary cells have no effect on evolution, but in embryonic stem cells they cause permanent heritable change. However, the seesaw model describes the process of embryonic development and growth, which should be highly precise, not accidental: one cannot imagine a certain cell type, such as brain cells, being randomly scattered throughout the embryo; each cell type should be closely clustered together. For this reason, the first understanding may be correct as the mechanism shown in the experiments, but as an actual description of changes in cell fate reflecting the bifurcation path, it should be deterministic. This should be embodied in the platform structure formed by the aggregation of a given cell type; such a platform structure should not be accidental.
This led to my second understanding after repeated thought. My imagination of the seesaw sets aside the specific meanings of the A and B inducing forces, thereby forming the concept of an evolutionary platform. The evolutionary platform has two meanings: one is the cyclic path, on which any cell on the platform, whether ordinary cell or stem cell, may operate; the other is the bifurcation path, which reflects the division and differentiation of stem cells.
The cyclic path means that biological macromolecules are regarded as individuals of something like a quantum many-body system that is always cycling. This is the irreversible phase-transition process DNA→RNA→protein satisfying the central dogma of biology, during which the evolution parameter alternates between p>0 and p<0. From the perspective of the synthesis of nucleic acids and proteins, the driving force of the ordered entropy of the p>0 system is required to make the biological molecules converge toward equal-energy quantization. After the macromolecules are synthesized, however, the DNA molecules are constantly damaged and proteins may misfold; both need repair, which is embodied in the driving force of another ordered entropy, with p<0, that lets chemical bonds "bind". These are the two types of ordered entropy, in momentum space and in coordinate space, that form self-organizing structures. The cyclic path is in fact the basic process of the evolution of life: whether in what I will call the bifurcation path below, such as embryonic totipotent stem cells evolving into pluripotent or sub-pluripotent or even ordinary cells, or in ordinary cells satisfying the central dogma DNA→RNA→protein, all share the character of a cyclic path.
However, the differentiation of the pluripotent state may follow a bifurcation path. The pluripotent state should have higher energy; it should be seen not as a static state but as another physical state reflecting the fulcrum of the seesaw, a cell state that may undergo different divisions or differentiations. What is the corresponding physical picture? The quantum tunneling of the N ion in NH3 offers an image, reflecting the higher-energy meaning of the bifurcation path. In the DNA macromolecule, this does not seem to be reflected only in the quantum tunneling of N ions within a single genome, but perhaps also in a quantum tunneling "resonance" between two or more genomes. I only wish to point out that the significance of the bifurcation path should be embodied in the overall process of stem cell division and differentiation. In this way, the cyclic path is effectively equivalent to the swing mechanism mentioned above, while the bifurcation path carries more of the meaning of evolution.
Specifically, the meaning of the bifurcation path as an evolutionary platform has two points. First, it is a platform for the development and growth of embryonic stem cells. Second, such a platform must also reflect the process of life evolution and overall mutation at the system level. The latter may be more important for our understanding of biological evolution. For example, the overall DNA of pigs and humans is about 98% similar, indicating that humans and pigs share a common ancestor. A mutation in a gene of this ancestor's DNA or RNA, however, can only be reflected in the embryonic cells of its bifurcation path, that is, a mutation occurring when it evolves to a certain pluripotent state. This is therefore not an induced gene mutation under a swinging cyclic path, but an evolution presenting a bifurcation path solidified at some p=0. Such evolution has two polarities, and I will call it a bipolar evolutionary platform.
The cyclic path and the bifurcation path embody the two processes of life evolution. An analogy with the development of computing may help. The early von Neumann computer posed the problem of the stored program and program control, with only a machine language of 0s and 1s. Later, operating system platforms such as DOS and Windows formed, up to today's AI; these three stages obviously correspond to three platforms. The cyclic path of life evolution is evolution on the same platform, like an application you built on Windows that I might slightly modify into my own application; conventional artificial genome editing in biology can only operate at this cyclic-path level. The bifurcation path of life evolution, however, must reflect the overall character of species-level mutation and must be treated with caution: gene editing at this level might edit out the very differences between humans and pigs.
4. Three self-organized states: cooperative tunneling state, quantum entanglement state and topological degenerate state
Many people believe there will be another revolution in physics, on the grounds that phenomena such as dark matter, dark energy, and quantum entanglement cannot be explained by existing theories. Professor Wen Xiaogang, however, has a different perspective on revolutions in physics. He holds that each revolution was driven by mathematics. The first was Newton's discovery of universal gravitation, with calculus as the mathematical tool. The second was Maxwell's electromagnetic revolution, with differential equations, later developing into fiber bundle theory. The third was the relativity revolution, when Einstein proposed general relativity, with Riemannian geometry as the corresponding mathematics. The fourth was the quantum mechanics revolution, whose new mathematics was linear algebra and the tensor product. For this reason, Wen Xiaogang believes quantum mechanics will have a second revolution, for which algebraic tools such as category theory may be needed: the mathematical foundation of physics may have to move from a geometric era to an algebraic era. He has also proposed the new idea that matter is information.
My thinking was greatly inspired by Professor Wen Xiaogang, but my specific ideas differ considerably from his. I think the revolution in physics should break out not because mathematical tools are insufficient, but because there are problems in the most basic physical understanding. Existing physics is built on concepts such as space-time, the inertial frame, and interaction. The new idea I envision is to build physics on the systemic meaning of energy and entropy and to form an analytical framework of evolutionary criteria: all physical measurements must be embodied as projections of energy and entropy onto space-time. The meaning of the inertial frame actually reflects the relation between the measured object and the observer, but this has a clear meaning only in classical mechanics; in a broad sense it should be revised to rest on the center-of-mass frame of the system itself. The interaction force between individuals is likewise a manifestation of the more general entropy-energy criterion. Therefore, the revolution in physics should rest on a new idea of "state-reason": its core is to regard the observation of material phenomena as states of matter, and the task of physical theory as giving the reason for the formation of each state.
Furthermore, Professor Wen Xiaogang passed over the creation of thermodynamics and statistical physics, which I consider the most important revolution in physics. The non-equilibrium physics that later developed along this path, however, did not become mainstream. Today people do not distinguish between thermal equilibrium and steady state, and the "one surprise, two explosions" [3] problems I raised in my early years reflect the need for a concept of steady state that is not equivalent to thermal equilibrium. This is also the physical reason for the concepts of the synergetic state and the indistinguishable state that I wish to propose. I say this not to refute Professor Wen Xiaogang; it is precisely our different perspectives that have made his thought so inspiring to me. As mentioned in the introduction, the topological order he proposed, also called the topological state, inspired me to generalize it to the topological degenerate state and then to a classification of the various states of matter. In this section I therefore propose three quantum many-body states, the cooperative tunneling state, the quantum entangled state, and the topological degenerate state, which are in fact non-equilibrium self-organized states, and I hold that the reason such states form is the drive toward ordered-entropy maximization revealed by the entropy-energy criterion.
4.1 Tunneling synergetic states: from muscle movement and the Jahn-Teller effect to a model of the Earth's magnetic pole reversal
First, let me explain why I imagined the existence of tunneling synergetic states in my early years. It came from my puzzlement over the phenomenon of terminal lucidity: I have witnessed the abnormal short-term improvement of critically ill patients shortly before death, and I have heard many accounts of it, including from the well-known scholar Professor Yuan Longping; English even has a specific term, terminal lucidity, for the phenomenon. My early thought was to link it with the final explosion of stars: if stellar nuclear fusion presents as a super-energy threshold, the final explosion of a star reflects the simultaneous attainment of the fusion threshold, and terminal lucidity might be the manifestation of such a physical phenomenon in the human body. Many of the examples Professor Yuan Longping mentioned, and that I have heard of, occurred in the early 1960s, when dying people, starving, would suddenly run quickly and then collapse. This suggests that the energy of terminal lucidity may not come from the accelerated digestion of residual food, but more likely from the quantum tunneling of some of the aforementioned amino N ions: before certain energy stores are exhausted, the energy in the system may be self-organized and converted into kinetic energy.
This in turn reminded me of the synergetics Haken developed from the principle of the laser. In my early years at Beijing Normal University, our graduate basic courses included dissipative structure theory but not synergetics, because no teacher could fully understand and teach it. My mentor, Professor Hu Gang, later visited Haken as a visiting scholar, which made me particularly curious about synergetics. After I finished my master's degree, a Chinese translation of Haken's "Synergetics" appeared, but I still found it difficult. Nevertheless, synergetics' explanation of the non-equilibrium self-organization of the laser struck me as more convincing than dissipative structure theory, which lacks a supporting example of comparable clarity. Hence my question: given that amino N ions exist in all protein molecules, can distortion of their molecular structure cause quantum tunneling? The "resonance" of a large number of tunneling N ions might trigger a synergistic effect, thereby maximizing information entropy, which would explain the aforementioned phenomenon of terminal lucidity. Below I develop this line of thought further and use the concept of tunneling synergy to construct a model of the reversal of the Earth's magnetic poles, in order to address the problem of global warming.
The evolution of life discussed in the previous section has both a cyclic path swinging between evolution parameters p>0 and p<0, and a bifurcation path solidified in a marginal state p≈0 of quantum tunneling "resonance". In my early years I had no concepts of cyclic and bifurcation paths; I simply thought that the muscle movement of living organisms looked very similar to the synergetics of photon motion in a laser. Muscle movement may come from motor neurons or from external current stimulation. The energy delivered by motor neurons or an external current is obviously very weak, yet muscle movement is fast and strong, so some kind of energy "switch" must be triggered. All proteins contain the amino group -NH2, which made me think there may be quantum tunneling of N ions in muscle proteins or cells, perhaps analogous to the back-and-forth reflection of photons in a laser's resonant cavity. Initially the tunneling of the N ions is disordered; once the "switch" is closed by neural or external-current stimulation, all the molecules evolve into unidirectional, coordinated muscle motion, similar to the synergistic effect formed in a laser.
The tunneling synergy effect of muscle movement that I envision therefore has two processes. First, it should appear at the individual level of biological molecules within a single protein, as the quantum tunneling of N ions or of other, smaller groups. This belongs to the maximization of equal-energy information entropy at the individual level: its physical essence comes neither from collisions between individuals nor from the association of chemical bonds; even if it is not quantum tunneling, it can only be something like the round-trip motion of laser photons in a resonator. Second, the round-trip motion at the individual level must further produce a synergistic effect, reflected not in one biological molecule but in the collective motion of the quantum many-body system of the muscle group. This is a muscle group spanning not only molecules but cells, which embodies the idea of the evolutionary platform: the disordered motion of different molecules evolves into unidirectional ordered motion, reflected as the maximization of equal-energy ordered entropy at the level of the evolutionary platform.
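The "maximization of equal-energy information entropy" invoked above echoes a standard fact that can be checked directly: among probability distributions over n states, the uniform (equal-occupation) distribution maximizes the Shannon entropy. The sketch below is my own illustrative check of that standard fact, not a model of muscle tunneling:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i ln p_i in nats, with 0*ln(0) := 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 4
uniform = [1.0 / n] * n          # equal occupation of n equal-energy states
skewed = [0.7, 0.1, 0.1, 0.1]    # same states, unequal occupation

print(f"uniform: H = {shannon_entropy(uniform):.4f} (max = ln {n} = {math.log(n):.4f})")
print(f"skewed:  H = {shannon_entropy(skewed):.4f}")  # strictly smaller
```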
The tunneling synergetic state described above differs from the scaling mechanism of ordinary protein folding and from the swing mechanism describing the central dogma of molecular biology; it manifests as self-organizing behavior in a special sense. Of course, this is just my conjecture. The core physical picture here is the maximization of ordered entropy at two levels: the individual information entropy of the quantum few-body system at the molecular level, and the information entropy of the many-body cooperative motion across all molecules. This conjecture shaped my later physical thinking; as I will show below, the physical understanding of superfluidity and superconductivity can likewise rest on the maximization of these two types of entropy, individual and collective. In my early years, however, I had not yet thought about superfluidity and superconductivity, nor did I have the concept of an evolutionary platform. I only felt that there was no precedent for using quantum mechanics to describe biological muscle movement, and that to support this physical picture I needed to find a simpler physical model.
I had heard that the Chinese scholar Professor Fang proposed a physical idea very close to Haken's as early as the 1960s; had it not been for the Cultural Revolution, Chinese scientists might well have contributed to synergetics. My purpose in studying Haken's "Synergetics" was to find other cases supporting the idea of the tunneling synergetic state described above. Since synergetics is hard going, I looked up the laser paper Professor Fang published in the Chinese journal Acta Physica Sinica before the Cultural Revolution. The concept of electromagnetic mode coupling proposed there does carry the meaning of self-organization, and it is more concise and clear than synergetics' description of the slaving principle. My idea of entropy maximization at two levels arose from associations after reading this paper. Still, imagining muscle movement as a tunneling synergetic state based on N ions shares only the round-trip-motion character with the synergetic description of the laser; one concerns biological muscle cells, the other laser photons, and a large gap remains between them.
But I consider the concept of the tunneling synergetic state very important, for it reflects the self-organized behavior of matter far from equilibrium. At the time I hoped to find another, non-laser case with such a physical mechanism, preferably closer to biological molecules; a case of collective motion of atoms or molecules would be even better. I discovered by chance that Professor Fang had gone on to apply the idea of laser electromagnetic mode coupling to the Jahn-Teller effect, understanding the coupling in solids in terms of the static distortion energy of the electrons and the quantum energy of the coupled vibrations. Does this paper, "Theory of the Strongly Coupled Dynamical Jahn-Teller Effect", Acta Physica Sinica 22 (1966) 471-486, reflect both the quantum tunneling of microscopic subsystems and the cooperative effect of the macroscopic system as a whole? It seemed to explain why a lattice in solid-state physics distorts and breaks symmetry, and it aroused my interest.
I therefore made a point of studying the Jahn-Teller effect, which I had not known before. Solid-state physics textbooks usually do not introduce it; only structural chemistry gives a quantum mechanical analysis of the effect. Furthermore, I came to feel that the Jahn-Teller coupling described above may reflect the tunneling synergy effect of biomolecules I discussed earlier. The figure above comes from Wikipedia: the [Cu(OH2)6]2+ ion has an "elongated octahedral" structure, with two axial Cu-O bonds of 238 pm and four coplanar Cu-O bonds of 195 pm.
The usual physical understanding of the Jahn-Teller effect is that lifting the degeneracy lowers the energy of the system, thereby causing the distortion of the chemical bonds described above. I think this conventional understanding may be problematic. If lifting the degeneracy lowers and stabilizes the energy of the quantum system, then why, with three axes x, y, and z available, is only one axis distorted? Lifting the degeneracy once seems insufficient; the degenerate chemical bonds of the ground state would have to be split further to lower the energy further, just as the two symmetric H+ ions in the water molecule must be split and form an angle of 104.45° to lower the ground-state energy. This is common sense in quantum-mechanical degeneracy calculations. Moreover, the Jahn-Teller effect exists only in some octahedral complexes and is not universal, which is also hard to explain: why do some crystal structures need their degeneracy lifted while others do not?
To this end I have given an alternative explanation of the Jahn-Teller effect, very similar to the quantum tunneling symmetry of the NH3 ammonia molecule described by Anderson. The Jahn-Teller effect should present as a change in the microscopic quantum structure driven by entropy force: the system evolves from a completely symmetric state to a distorted state, reflecting topological degeneracy rather than spatial degeneracy, and also quantum tunneling at the subsystem level. The static distortion energy and coupled vibration energy of the strongly coupled dynamical Jahn-Teller theory should reflect entropy maximization at two levels. First, the quantum tunneling of electrons in the lattice subsystem constitutes the maximization of the equal-energy information entropy of the degenerate state. Second, the further coupling of the distorted lattice subsystems seems to be reflected in the modification of the electronic band structure within the lattice, a synergistic effect driven by the entropy force of the system, which is the maximization of the information entropy of the system's overall effect.
The above is my alternative explanation of the Jahn-Teller effect and its coupling. Without distortion, all the outer electrons of the [Cu(OH2)6]2+ ion in the figure above are completely symmetric, various energies {E_1, E_2, E_3, E_4, ...} may be occupied, and the average energy of the system is <E> = Σ_i p_i E_i. After the lattice distorts, the symmetry is broken: the bond lengths along the z axis separate from those along the x and y axes, which should be reflected in the formation of an energy gap above the degenerate-symmetric ground state of the system. This is very similar to the NH3 molecule described by Anderson above, which belongs to the topological degenerate state. Can this be regarded as the separation of two quantum states |z↑> and |z↓> along the z axis, with mutual quantum tunneling between them at E_z↑ = E_z↓? The system evolves into such a distorted state under two conditions: first, the quantum tunneling energy E_z↑ = E_z↓ is lower than the average energy <E> before the crystal distorts; second, the energy gap must be large enough that energy cannot tunnel to levels above the gap, for otherwise the system would remain in thermal equilibrium.
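The two-configuration picture above can be made concrete with a minimal two-level Hamiltonian, as in the textbook treatment of NH3 inversion doubling. The sketch below is my own toy illustration under assumed parameters (E0, t), not a fitted Jahn-Teller calculation: two degenerate configurations |z↑> and |z↓> at a common energy E0 are coupled by a tunneling amplitude t, and diagonalization shows the symmetric combination dropping below E0 by t, opening a gap of 2t.

```python
import numpy as np

E0 = 1.0  # common energy of the two degenerate configurations (arbitrary units)
t = 0.2   # assumed tunneling matrix element between |z up> and |z down>

# Two-level Hamiltonian in the {|z up>, |z down>} basis:
H = np.array([[E0, -t],
              [-t, E0]])

levels = np.linalg.eigvalsh(H)  # ascending order: [E0 - t, E0 + t]
print("levels:", levels)
print("ground state lies below E0 by:", E0 - levels[0])
print("tunneling gap 2t =", levels[1] - levels[0])
```

For the distorted state to survive, as in the second condition above, the gap 2t must be large compared with the thermal energy scale.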
Therefore, the Jahn-Teller effect in [Cu(OH2)6]2+ reflects a state of maximized equal-energy information entropy for the degenerate subsystem, but this describes only the information entropy maximization of the subsystem. The larger system composed of all [Cu(OH2)6]2+ ions needs to couple further to show synergy; this is my understanding of Professor Fang's strongly coupled Jahn-Teller system. Furthermore, according to the Jahn-Teller entry on Wikipedia, octahedral complexes behave differently depending on the number of d-orbital electrons: some show strong Jahn-Teller distortion (d = 4, 7, 9), some only weak distortion (d = 1, 2, 5, 6), and the rest none at all. This makes me think that this may be similar to the fully homomorphic and synergetic states composed of nucleons in the atomic nucleus analyzed earlier.
However, the above physical understanding still cannot explain the complexity of life phenomena. The self-organizing physical mechanism of the tunneling synergetic state of muscle movement mentioned above differs from the physical picture of Jahn-Teller coupling in that muscle movement must reflect certain critical characteristics of the tunneling synergetic state: a weak neuronal or current stimulus can cause large movements of muscle cells or proteins. This must reflect conversion between the ordered-entropy state and the thermal equilibrium state, which is also similar to the characteristics of the marginal state of the evolution parameter p ≈ 0. However, the strong, weak, or absent Jahn-Teller coupling described above does not show such mutual conversion under the control of external parameters. It would therefore be of great significance if a case of mutual conversion among the synergetic state, the homomorphic state, and the thermal equilibrium state could still be found in nature for the above tunneling cooperative effect.
The physical phenomenon I hope to find is a system that switches between two macroscopic states: the increase in system energy after absorbing heat does not evolve into maximized disordered entropy but forms a quantum many-body tunneling synergetic state, while after releasing heat the system evolves into the ordered state of some crystal structure, or even a completely thermally disordered state, consistent with the muscle-movement pattern of living organisms. In 1992, the year I took the doctoral examination, an opportunity happened to arrive: I saw on television that the United Nations Framework Convention on Climate Change had been adopted and that countries around the world would control carbon emissions in the future. I immediately thought the convention was absurd. From the perspective of reaching temperature balance through thermal contact, the absolute amount of carbon emissions is unlikely to cause global warming; at most, too rapid a growth rate of emissions might affect the Earth's climate for a time. Of course, there is nothing new in voicing doubts; the key is to find another, possibly convincing, physical cause of global warming.
At that time, I wondered whether this might be related to the reversal of the Earth's magnetic poles. Of course, the period of geomagnetic reversal does not coincide with the timing of climate change, but this can be understood from the physical mechanism of the tunneling synergetic state above. The quantum tunneling frequency of the nitrogen ion through the plane of hydrogen ions in NH3, as described by Anderson, is as high as 3×10^10 /s, and the larger the mass of the tunneling ion, the lower the tunneling frequency. If quantum tunneling becomes so slow that the Earth's magnetic poles reverse only once every tens of thousands to hundreds of thousands of years, it will show critical characteristics. Is this possible? Self-organized criticality was mentioned in the introduction of the previous article, but the evolution parameter was not. The Earth's magnetic field may be in the marginal state of the evolution parameter p ≈ 0 that I defined. This means that the quantum tunneling frequency slows as the volume of the individual tunneling "cluster" of the system increases. Anderson's description already carried this meaning at the individual level, but such individual tunneling must also reflect cooperative collective motion.
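The mass dependence invoked here (a heavier tunneling object tunnels far more slowly) can be illustrated with a rough one-dimensional WKB suppression factor. The barrier height, barrier width, and "cluster" mass below are hypothetical round numbers chosen only to show the exponential trend; this is not a model of NH3 or of geomagnetism:

```python
import math

hbar = 1.054571817e-34              # J*s
amu  = 1.66053906660e-27            # kg
eV   = 1.602176634e-19              # J

def wkb_factor(mass_kg, barrier_J, width_m):
    """Rough WKB suppression exp(-2*sqrt(2*m*V)*d/hbar) for a square barrier."""
    return math.exp(-2.0 * math.sqrt(2.0 * mass_kg * barrier_J) * width_m / hbar)

V = 0.25 * eV                       # hypothetical barrier height
d = 0.4e-10                         # hypothetical barrier width (0.4 angstrom)

f_light = wkb_factor(14 * amu, V, d)      # nitrogen-mass object
f_heavy = wkb_factor(1000 * amu, V, d)    # hypothetical heavier "cluster"

assert 0.0 < f_heavy < f_light      # heavier mass -> far stronger suppression
print(f_light, f_heavy)
```

Because the mass enters under a square root inside an exponential, even a modest mass increase collapses the tunneling rate by many orders of magnitude, which is the qualitative point the text relies on.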
Therefore, the physical picture I give of global warming is this: as long as the Earth's magnetic poles are still in a cycle of continual reversal, reflected in the Earth's magnetic-moment molecules being in a periodic state of tunneling cooperation, no exothermic relaxation will occur. Even so, the period of tunneling cooperation, as the period of geomagnetic reversal, may be as short as tens of thousands of years or as long as hundreds of thousands, a very long time span. Only when such a tunneling-cooperation cycle stops will heat be released and a thermal equilibrium state form. Today's global warming may be the moment when the tunneling cycle is grinding to a halt. Of course, far in the future, the Earth in thermal equilibrium may absorb heat again and return to the periodic geomagnetic-reversal state of tunneling cooperation. As mentioned earlier, this belongs to self-organized criticality, but there is also a "critical slowing down" effect, a concept proposed in his early years by Mr. Hao Bailin on the problem of chaotic bifurcation; the review article of his mentioned in the previous article contains an introduction.
However, the cause of the Earth's magnetic field is very complicated. Geology has a dynamo theory, but how should it be linked to the problem of magnetic pole reversal? For me, a young scholar who at that time had only a master's degree, my knowledge was not sufficient for analytical and argumentative work involving the Earth's magnetic field, so I had to find a strong collaborator. I was then preparing to take the doctoral examination and switch to condensed matter physics. I chose Academician Pu Fuke of the Institute of Physics, Chinese Academy of Sciences, as my mentor, hoping to complete with him this geomagnetic-reversal model for explaining global warming. Mr. Pu was the only Chinese member of the Magnetism Commission of the International Union of Pure and Applied Physics and the director of the Magnetism Committee of the Chinese Physical Society. I originally thought that Mr. Pu had ample professional knowledge and would certainly support my research on the causes of global warming.
At that time, I thought that writing a doctoral thesis was like writing undergraduate and master's theses, and that I could choose any topic I wanted. After entering the Institute of Physics, however, I learned that I had been recruited as a doctoral student to complete a national project. This was a major national research project applied for by an experimental group that year; the theoretical group was also allocated some funds and recruited several doctoral students, including me, to publish papers, because theoretical articles are easier to publish and cost less. I was required to publish at least 5 SCI papers before graduation. To get the degree, I had to do work I had no interest in, split one idea into multiple papers, and in the end completed the task with one paper over the quota. In hindsight, it is not surprising that Mr. Pu rejected the idea of the geomagnetic-reversal model without even letting me finish describing it. Constructing a theory to challenge the United Nations Framework Convention on Climate Change is research that can be neither confirmed nor falsified and is bound to be criticized; as an academician, he was unlikely to gamble with his academic reputation.
Scientific research has long been a pursuit of fame and fortune in Chinese society. After three years as a doctoral student, I felt I could not conduct research centered on my own interests and could only chase fame and fortune, so I left the academic circle. Yet it is always hard to transcend completely. When I happened to see the science news about Tang Chao's seesaw model in 2013, I felt that the geomagnetic-reversal model I had conceived from the muscle movement of biological systems might be revived: the magnetic poles of the geomagnetic-reversal model tunnel back and forth, which is very similar to the seesaw model, and there might be hope of reconstructing it. In 2018, I took the opportunity of a trip back to China to revisit the theoretical magnetism laboratory of the Institute of Physics and find my former fellow students, wanting to tell them my ideas of those years. They were by then doctoral supervisors themselves. When they offered to treat me to a meal, I specifically asked for lunch, hoping that afterwards I could explain my early geomagnetic-reversal model to them again.
However, the speech I had prepared stayed in my pocket. They were all busy people and had to return to work after lunch. Still, I think the main reason was that my thinking was not as clear then as it is now: I spoke too hastily during lunch and did not explain the whole process by which my thoughts had formed, from muscle movement to self-organized criticality. The thinking process, and how to express it, matter greatly. So I kept thinking for several more years and concluded that the two concepts of cooperative tunneling state and topological degenerate state had to be separated (at the time I had not thought of distinguishing them). The latter can be used to explain the Rollin film creep of liquid helium, and may also be related to another phenomenon of life, the transport of absorbed nutrients against gravity in plants. I will explain this in detail in the third section below.
Today, more than six years later, I feel my thoughts are clearer and that I should re-express them completely, hence this long article. Whether this geomagnetic-reversal model can explain global warming requires, beyond a more complete theoretical analysis, an experiment that someone can design and construct to test it. The core requirement is that such a geomagnetic-reversal system must be in the marginal state of the evolution parameter. Such an experiment is very difficult. However, the muscle movement of animals and the nutrient absorption of plants are indeed in a certain critical marginal state.
4.2 Quantum entanglement: understanding the quantum state as a property of the whole system
In 2013, I wanted to return to the physics community, and the first thing I thought of was, of course, finding a job in the field. I was just over 50 then and felt I still had a chance at a position in some Chinese university, but after many years away from academia I first had to catch up on the frontier of physics. What aroused my great interest was the work of my senior schoolmate Wen Xiaogang. Over the years I have become his loyal fan, reading his papers and watching videos of his lectures as much as I can. However, having been away from physics for many years, and with so much systematic thinking about macroeconomics in my head, returning to physics felt different. The tunneling synergetic state analyzed above is a special type of synergetic state. In this section, I further analyze the concept of the fully indistinguishable state, which can be divided into two categories: quantum entangled states without an energy gap and topological degenerate states with an energy gap. The latter I analyze in the next section; here I first discuss my understanding of quantum entangled states.
Professor Wen Xiaogang has his own particular understanding of quantum entanglement, completely different from the usual view that quantum entanglement is only a property of quantum few-body systems, whose members lose their individuality and possess only the entangled whole. Wen Xiaogang understands quantum entanglement on the basis of quantum many-body systems; in other words, the concept of topological order he proposed in his early years itself involves quantum entanglement. He divides quantum states into two categories: short-range entangled states, which can be described by direct product states and are ordinary quantum states, and long-range entangled states, which cannot be described by direct product states. Such an understanding evidently comes from the concepts of short-range and long-range order in physical phase transitions. Wen Xiaogang further believes that such entanglement should be understood using category theory, that this will constitute a revolution in physics, and has repeatedly said the final breakthrough is just one "last shot" away. I have long looked forward to that shot and tried to integrate my ideas with his. Unfortunately, I did not manage to wait for that moment, and I have to express my different understanding of quantum entanglement in this section: it is related to the concept of topological order, but not identical to it. This must begin with my understanding of the concept of a physical state.
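The dividing line mentioned here, whether a state can be written as a direct product, can be made concrete for the smallest case. The sketch below is my own illustration (not taken from Wen's papers): it uses the Schmidt rank of a two-qubit pure state, where rank 1 means a direct product and rank greater than 1 means the state is entangled:

```python
import numpy as np

def schmidt_rank(state_4, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a two-qubit pure state.
    Rank 1 -> direct product (unentangled); rank 2 -> entangled."""
    M = np.asarray(state_4, dtype=float).reshape(2, 2)  # amplitudes c_ij of |i>|j>
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol))

product = np.kron([1.0, 0.0], [0.0, 1.0])               # |0>|1>, a direct product
bell    = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(schmidt_rank(product))  # 1
print(schmidt_rank(bell))     # 2
```

The Bell state cannot be factored into single-qubit states, which is exactly the "cannot be described by a direct product" criterion the text attributes to entanglement.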
My understanding of all states of matter is based on the system: any system is regarded as composed of subsystems. Such a system view of course already exists: from the perspective of divisibility, any substance is composed of molecules, molecules of atoms, and atoms further of nuclei and electrons; on larger scales there are stars, galaxies, and the universe as a whole. The limitation of this traditional system view, however, is that it already implies spatial scale rather than starting from energy and entropy. If system and subsystem are divided on the basis of energy and entropy, their extensive character must be reflected: the total energy and entropy of a large system must be the sum of the energies and entropies of its subsystems, and this must include a quantitative understanding. This is reflected in the entropy-energy criterion I constructed earlier, and the description of the tunneling synergetic state in the previous section likewise reflects that the overall entropy of the system is the sum over all subsystems.
I will not repeat the concepts of the entropy-energy criterion and entropy-energy coefficient from Section 2. Next, starting from the relationship between subsystems and the large system, I will try to give the meaning of physical states under different statistical distributions, beginning with a Fermi-distributed system in which the subsystems do not interact. From the perspective of physical measurement, the meaning of a subsystem is reflected in the energy set of the canonical ensemble {E0, E1, ..., En}: a subsystem with any of these energy values may be measured. From the perspective of the canonical ensemble, each possible measured state corresponds to a subsystem consisting of the ground state |E0> and the kth quantum level |Ek>, where k = 1, 2, ..., n. With εk = Ek - E0, the kth subsystem should reflect the maximization of the entropy-energy coefficient, so xk = Sk - βEk = ln[1 + e^(-βεk)]. This means that in a Fermi-distributed system each subsystem has its own specific entropy Sk and energy Ek, while all subsystems share two things: the quantum ground state |E0>, i.e. the quantum empty state |0>, and the same constraint parameter β, the temperature parameter of the system.
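The formula xk = Sk - βEk = ln[1 + e^(-βεk)] can be verified numerically: with the Fermi occupation p = 1/(e^(βε) + 1), the Shannon entropy minus β times the mean energy reproduces the closed form. The sketch below works in the text's own conventions (kB = 1, energies measured from the shared empty state); the level spacings are hypothetical:

```python
import math

def x_fermi(eps, beta):
    """Closed form of the subsystem entropy-energy coefficient:
    x_k = ln(1 + e^(-beta*eps))."""
    return math.log(1.0 + math.exp(-beta * eps))

def x_fermi_direct(eps, beta):
    """The same quantity computed as S_k - beta*<E_k>, using the Fermi
    occupation p = 1/(e^(beta*eps) + 1) and Shannon entropy in nats."""
    p = 1.0 / (math.exp(beta * eps) + 1.0)
    S = -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    E = p * eps                      # mean energy above the empty state
    return S - beta * E

beta = 1.5
eps_list = [0.2, 0.5, 0.9]           # hypothetical level spacings eps_k

for eps in eps_list:
    assert abs(x_fermi(eps, beta) - x_fermi_direct(eps, beta)) < 1e-12

# Whole system: X = sum_k x_k = log of the canonical partition function
X = sum(x_fermi(eps, beta) for eps in eps_list)
print(X)
```

The agreement is exact because ln Z = S - β<E> holds identically for a canonical two-level subsystem, which is the content of the formula in the text.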
The above description reflects my understanding of the system: the entropy-energy coefficient of the entire system is maximized, which should appear as the sum of the entropy-energy coefficients of all subsystems, X = Σk xk, which is also the logarithm of the partition function of the canonical ensemble. This summation formula is valid not only for fermion systems but for all material systems, though the meaning of xk differs between subsystems. It can be seen that the evolutionary-criterion analysis framework understands the formation of material structure from the perspective of system evolution built on subsystems. For quantum systems, the evolution of subsystems depends on two steps: the first is forming a shared empty state; the second is that all subsystems must further form a common constraint parameter. Different material states then follow, with different understandings of the information-entropy constraint and the meaning of temperature: Fermi statistics reflects a mean-energy constraint, while the synergetic state reflects joint constraints on mean and variance, both with the usual meaning of temperature. The fully homomorphic state has only the constraint of total probability 1 and loses the meaning of temperature in the special case of quantum entanglement, to be discussed later.
Let us continue with Bose statistics. How does it differ from Fermi statistics? The difference between bosons and fermions is that the same quantum energy level of the system can be occupied by multiple particles. When I first studied general physics in college, I learned that when Planck proposed the blackbody radiation distribution, he based it on parceling the energy of photons: any integer multiple of a certain unit energy was possible. Later, studying statistical physics, I found that subsequent physicists set aside Planck's early derivation of blackbody radiation and instead counted microscopic states as in Bose-Einstein statistics. At the time, I thought this dismissal of Planck's quantization idea might not be physically justified: Planck's original idea might also be correct, depending on how we understand the meaning of the quantum state for bosons. I therefore proposed that the difference between Bose and Fermi statistics should be reflected in the fact that the subsystem of Bose statistics also includes special Bose synergetic states, whose meaning is as follows.
Since the fermion subsystem of Fermi statistics contains only the shared empty state |0> and |εk>, a very natural generalization is that Bose statistics is likewise composed of boson subsystems, except that, unlike a fermion subsystem, a boson subsystem also contains synergetic states of the same energy with any particle number, from |2εk> and |3εk> up to the n-particle state |nεk>. Bose statistics is then also built in two steps. First, the shared empty state |0> and all quantum synergetic states of energy εk, namely |εk>, |2εk>, |3εk>, ..., are merged into the boson subsystem, whose entropy-energy coefficient is xk = ln[1 + e^(-βεk) + e^(-2βεk) + e^(-3βεk) + ...] = -ln[1 - e^(-βεk)]. Second, the boson subsystems are merged into the entropy-energy coefficient of the large Bose system, X = Σk xk. This closely parallels the description of the tunneling synergetic state above: the overall system appears as the sum of its subsystems in energy and entropy. At the same time, just as with Fermi statistics, this simply yields the basic formula of Bose statistics.
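The boson-subsystem formula can be checked directly: summing over the empty state and the synergetic states |εk>, |2εk>, ... is a geometric series whose logarithm matches the closed form -ln[1 - e^(-βεk)]. A sketch with kB = 1 and a hypothetical energy unit:

```python
import math

def x_bose(eps, beta):
    """Closed form: x_k = -ln(1 - e^(-beta*eps))."""
    return -math.log(1.0 - math.exp(-beta * eps))

def x_bose_series(eps, beta, n_max=2000):
    """Direct sum over the shared empty state |0> and the synergetic
    states |eps>, |2 eps>, ..., |n eps> (truncated geometric series)."""
    return math.log(sum(math.exp(-n * beta * eps) for n in range(n_max + 1)))

beta, eps = 1.0, 0.3                 # hypothetical units, k_B = 1
assert abs(x_bose(eps, beta) - x_bose_series(eps, beta)) < 1e-9
print(x_bose(eps, beta))
```

The truncation at n_max is harmless here because the terms decay as e^(-nβε); the two evaluations agree to floating-point precision.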
The above reflects the evolutionary-criterion analysis framework I want to describe, from which the classical Boltzmann-Maxwell distribution is even easier to construct. Classical statistics should reflect that all subsystems are nearly independent: a particle has no correlation with other particles and satisfies the ergodic hypothesis. The kth particle is an independent kth subsystem, and in the classical distribution any energy Ek,j is possible, where j runs over all ergodic states of the subsystem, but the quantum empty state is not included. This also means the state of absolute energy 0 is excluded, so xk = ln[Σj e^(-βEk,j)]. The third law of thermodynamics, that absolute zero cannot be reached, is thus clearly reflected in the formula X = Σk xk: the entropy-energy coefficient of a classical system is never zero. The evolutionary-criterion framework does not regard classical statistics as an approximation of quantum statistics, but rather as matter having evolved into a state in which the quantum empty state is lost.
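A minimal sketch of the classical case, with hypothetical positive level energies Ek,j and the zero-energy empty state excluded as the text requires, shows that xk = ln[Σj e^(-βEk,j)] and hence X = Σk xk stays nonzero:

```python
import math

def x_classical(energies, beta):
    """x_k = ln(sum_j e^(-beta*E_kj)); the zero-energy empty state is excluded."""
    assert all(E > 0 for E in energies), "the quantum empty state |0> is excluded"
    return math.log(sum(math.exp(-beta * E) for E in energies))

levels = [0.1, 0.4, 0.9, 1.6]   # hypothetical positive energies of one subsystem
beta = 1.0
x_k = x_classical(levels, beta)
X = 3 * x_k                      # X = sum_k x_k for 3 identical subsystems
assert X != 0.0
print(x_k, X)
```

Excluding the E = 0 state is enforced here with an assertion, mirroring the text's claim that classical matter has "lost" the quantum empty state.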
The above concepts of the shared empty state and absolute temperature mean that the energy of any subsystem state has an absolute standard, namely its energy difference εk from the empty state |0>, similar to today's understanding of the chemical potential in statistical physics. Since Newton's absolute view of space and time was abandoned, all measurements in physics seem relative, yet physics retains one absolute quantity: the absolute temperature, entering through the parameter β. Its meaning is that the shared empty state is an energy benchmark based on the system. The parameter β therefore not only underlies statistical-physics analysis but, in my view, reflects an intrinsic property of the system. For example, if I have a cold and a fever, it does not mean that you also have a fever: you and I belong to two different life systems with different values of β. But the sun has a constant surface temperature, and its photon frequency distribution resembles blackbody radiation, which shows that the entire electronic system at the solar surface is one system with a shared empty state; its light emission forms a specific frequency distribution, which is also reflected in the microwave background radiation of the whole universe.
Next I discuss the physical understanding of quantum entanglement based on the analysis of subsystems and the overall system. The definition of quantum entanglement must be clarified first; in the Wikipedia entry for quantum entanglement, the definition is ambiguous. For example, the cascade emission of Ca-40 successively emits green light of wavelength 551.3 nm and blue light of wavelength 422.7 nm, and the two beams being simultaneously left-handed or right-handed is considered a quantum entangled state. But I think this is not simultaneity; it only reflects emission in temporal sequence. It can only mean that the polarization states of the two successively emitted photons are correlated; it cannot be considered quantum entanglement. In my understanding, quantum entanglement should be a quantum few-body system with a shared empty state in which the temperature constraint is replaced by time indistinguishability, forming a quantum homomorphic property of the system that is independent of spatial distance. This is the key point distinguishing quantum entangled states from ordinary quantum statistical states.
This understanding of the time-based indistinguishability of quantum entangled states first comes from analogy with synergetic states and indistinguishable states under spatial structure. The synergetic states based on momentum-space tunneling discussed in the previous section, as well as the synchronous vibrational states of atoms in various crystal structures based on coordinate space, can be regarded as quantum synergetic states. These are systems in which individuals are driven by entropy or correlated by chemical-bond energy, with limited total system energy. Beyond the synergetic state, the nucleons in a nucleus, owing to their higher energy density, possess quantum exchange symmetry and indistinguishability and may further form indistinguishable states. For other ordinary matter, then, as energy density increases and under special conditions, will it likewise evolve from an ordinary thermal equilibrium state to a tunneling synergetic state, and then to an indistinguishable state? This is also why I defined the |nε> synergetic state for bosons earlier, whereas higher-energy lasers must be understood as indistinguishable states. For this reason, besides the spatial indistinguishable state of the nucleus, I will also introduce the concept of a temporal indistinguishable state.
The concepts of the Bose synergetic state |nε> and laser homogeneity were inspired by Haken's synergetics description of the laser: when the energy injected into a system is high enough, the thermal equilibrium state evolves into a synergetic state with a higher entropy-energy coefficient. If the energy is increased further, into what state does the Bose synergetic state evolve? Into homogeneity based on time indistinguishability. For example, the homogeneity of three photons is not reflected in the synergetic state emitting three distinguishable photons whose ordered entropy exceeds the disordered entropy of blackbody radiation, but in the formation of six time-indistinguishable quantum states |123>, |132>, |213>, |231>, |312>, |321>, which is how I understand the pulsed laser. There is, however, a very ambiguous point: is this equivalent to 3 photons or 6? This I have not thought through. Judging by total energy there should be 3 photons, but the total number of indistinguishable quantum states may be 6. This is the root of the strange properties of quantum measurement and of my understanding of quantum entanglement.
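The six time-orderings listed here are just the 3! permutations of three labels, which can be enumerated directly. This is a counting sketch only; it takes no position on whether the physical count is 3 photons or 6 states, which is the ambiguity the text raises:

```python
from itertools import permutations

# Enumerate the time-orderings of three labelled photons
orderings = [''.join(p) for p in permutations('123')]

n_photons = 3                       # counting by total energy
n_states = len(orderings)           # counting indistinguishable orderings

assert n_states == 6                # 3! = 6
assert sorted(orderings) == ['123', '132', '213', '231', '312', '321']
print(n_photons, n_states)
```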
This time-indistinguishable state of the quantum entangled system is a temperature-free situation in which the energies of two or more quantum states are strictly equal, a self-organizing system far from equilibrium: the individuals share a quantum empty state and time indistinguishability, while other physical properties of each quantum state, such as spin, can differ. This makes all quantum states in the system forever "inseparable": even separated by great spatial distance, they still show quantum entanglement correlation through the shared empty state. On this understanding, extremely high-energy identical particles, such as pulses in lasers, the indistinguishable states of atomic nuclei, or the resonant states of elementary particles, can also be regarded as systems with spatial or temporal indistinguishability, and thus all belong to the generalized quantum entangled state. This physical understanding of quantum entangled states leads to the following two conclusions:
The first conclusion is that only bosons can form quantum entanglement, because only bosons have indistinguishability based on exchange symmetry, while fermions can only form antisymmetry. My personal intuition is that a quantum entangled state can form only when photons, pions, or Cooper pairs of superconducting electrons composed of two fermions constitute a boson system. So far, all experimental discoveries of quantum entanglement involve such boson systems. The EPR paradox was later restated by Bohm as a positron and an electron with opposite spins, one for Alice and one for Bob; this picture has become the most popular example for introducing the quantum entanglement effect and appears in almost all popular-science articles on the topic. However, I have searched a large body of literature on quantum entanglement experiments, starting from the first paper in which Chien-Shiung Wu (Wu Jianxiong) confirmed quantum entanglement, and have found no experiment showing that quantum entanglement exists between electrons.
After I posted the above viewpoints in several scientific WeChat discussion groups, a scholar studying quantum entanglement pointed me to G. Vittorini et al., Phys. Rev. A 90 (2014) 040302, as demonstrating quantum entanglement between electrons. But I have read this article: entanglement between quantum memories cannot be regarded as entanglement between electrons; it only reflects entanglement mediated by photons emitted from the atomic systems. This matches the physical conclusion of Chien-Shiung Wu's first experimental paper on quantum entanglement, in which the measured annihilation of positronium produces two entangled photons: the two photons left after the annihilation of the positron and electron are entangled, but this does not mean that the positron and electron generating the photon pair were entangled, although much of the literature treats such indirect quantum measurement correlations as quantum entanglement. I therefore still hold the view that quantum entanglement can occur only between bosons and cannot exist between fermions.
The second conclusion is that, if quantum entanglement is to be understood from the perspective of a system with exchange symmetry, it cannot exist between individuals or subsystems whose energies are not strictly equal. The system perspective here can be a system bifurcating into two subsystems that share an empty state. For example, the most common preparation of entangled photons is to irradiate a nonlinear barium metaborate crystal with a laser beam, which with a certain probability produces a twin photon pair, one horizontally polarized and the other vertically polarized. The energies, or wavelengths, of the two photons are strictly equal, and I think this constitutes a quantum subsystem sharing an empty state. In addition, two independent individuals or subsystems may also form an entangled state when merged into one overall system. Experiments have shown that light emitted by a molecule and light emitted by an atom can also be quantum entangled; my understanding is that these must be two independent subsystems of the same energy, so that by sharing an empty state they form a certain quantum tunneling and enlarge the entropy-energy coefficient.
Therefore, the essence of quantum entanglement is that the energies of different individuals in the system must be strictly equal and the empty state shared, forming a kind of time-synchronized quantum tunneling "resonance" that constitutes an indistinguishable system with quantum exchange symmetry. Such exchange symmetry increases the number of quantum states of the system, and hence its information entropy, although the total energy has not increased. The meaning of entropy maximization may therefore be reflected either in maximum disorder under an energy constraint or in the order of a strictly equal-energy, equal-probability distribution. This need not be a many-body property; it can appear in a few-body system as well. For this reason I particularly emphasize replacing thermodynamic entropy with Shannon information entropy: the few-body meaning of entropy, measured in bits of information, should be the basis of physical entropy, rather than the traditional Clausius entropy. It follows that the logical premise for entangling two photons from independent sources is that their wavelengths be strictly equal; photons of different wavelengths or frequencies from independent sources cannot be quantum entangled.
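The two meanings of entropy maximization named here can be compared with Shannon entropy over a toy level set (hypothetical levels and β, entropy in nats, kB = 1): the Boltzmann distribution maximizes entropy under a mean-energy constraint, while the equal-probability distribution has strictly higher entropy once that constraint is dropped:

```python
import math

def shannon_nats(p):
    """Shannon entropy of a discrete distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

levels = [0.0, 1.0, 2.0, 3.0]       # hypothetical level energies
beta = 1.0

# Maximum disorder under a mean-energy constraint: Boltzmann weights
w = [math.exp(-beta * E) for E in levels]
Z = sum(w)
boltzmann = [wi / Z for wi in w]

# Equal-probability distribution over the same states (constraint dropped)
uniform = [1.0 / len(levels)] * len(levels)

assert abs(sum(boltzmann) - 1.0) < 1e-12
assert shannon_nats(uniform) > shannon_nats(boltzmann)
print(shannon_nats(boltzmann), shannon_nats(uniform))
```

The uniform case, ln 4 nats = 2 bits, is the "equal-energy, equal-probability" ordering the text associates with exchange-symmetric states.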
4.3 Physical understanding of topological degenerate states and their mutual transformation with thermal equilibrium states
Earlier I emphasized that topological degeneracy and spatial degeneracy are two different concepts, but only at the molecular level. For topologically degenerate subsystems with only repulsive forces, if they further constitute a large system, I think a topologically degenerate physical state must form - hence the concept of the topological degenerate state: a many-body system whose degenerate quantum ground state is a shared empty state. On the one hand, this is inspired by the concept of topological order proposed by Wen Xiaogang; like topological order, the topological degenerate state involves ground-state topological degeneracy and an energy gap. On the other hand, the two concepts differ: topological order also involves fractional statistics, edge states, topological entanglement entropy, and so on. Wen also regards superfluids and ordinary s-wave superconductors as Landau order rather than topological order, whereas I would regard both superfluids and superconductors as topological degenerate states - or, more strictly, as systems in which a topological degenerate state and a thermal equilibrium state coexist.
To this end, let me first describe the difference between the two physical understandings, which reflects that Wen Xiaogang's purpose in constructing topological order differs from mine in constructing the topological degenerate state. Wen seeks to show that, beyond Landau order, topological order also exists in the material world; to explain the difference, he must emphasize features that Landau order lacks. However, as argued above, I think the order-parameter concept of Landau order should not exist at all, and I try to replace it with the more precise evolution parameter p=ln|T/V|. In this section I will further divide the evolution parameter into the two cases V<0 and V>0. A repulsive system with V>0 may exhibit topological degeneracy at the level of subsystems or individuals, while the topological degenerate state reflects the self-organized structure of those subsystems or individuals at the level of a larger system. Furthermore, individuals in the topological degenerate state and the thermal equilibrium state may transform into each other, which expresses the meaning of system evolution - I understand the topological degenerate state as the overall physical state formed after a system evolves a special subsystem.
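The classification scheme just described can be sketched in a few lines, under the text's own definitions. Everything here is illustrative: the numerical T and V values and the marginal cutoff |p| < 0.1 are arbitrary choices, not quantities fixed by the text.

```python
import math

def evolution_parameter(T, V):
    """The text's evolution parameter p = ln|T/V| (T = kinetic energy,
    V = interaction energy, V != 0)."""
    return math.log(abs(T / V))

def regime(T, V):
    """Classify a system by p and by the sign of V, following the text:
    p > 0 kinetic-dominated, p ~ 0 marginal, p < 0 interaction-dominated;
    V > 0 repulsive (candidate topological degeneracy), V < 0 attractive.
    The cutoff |p| < 0.1 for 'marginal' is an arbitrary illustrative choice."""
    p = evolution_parameter(T, V)
    kind = "repulsive" if V > 0 else "attractive"
    if abs(p) < 0.1:
        return f"marginal ({kind})"
    return f"{'positive' if p > 0 else 'negative'} ({kind})"

print(regime(10.0, -1.0))   # kinetic energy dominates an attractive system
print(regime(1.0, 1.05))    # T close to V in a repulsive system: marginal
print(regime(0.1, -1.0))    # attraction dominates
```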
Of course, the quantum entangled state discussed above also has the character of indistinguishable degeneracy, but a quantum entangled state can only be generated in one direction and cannot convert back into a thermal equilibrium state, because it has no energy gap. As noted, the twin photon pairs produced by irradiating a nonlinear beta-barium borate crystal with a laser beam form a quantum entangled state, but such an entangled state can only decohere; it cannot recombine into a single photon. The topological degenerate state I define is therefore neither the ordinary thermal equilibrium state nor the quantum entangled state, but something with properties between the two. The thermal equilibrium state is based on independent individuals; the topological degenerate state also appears as a collection of small subsystems of a few individuals, such as the superfluids and superconductors described below. But unlike the indistinguishable state of quantum entanglement, in which the individuals of the system are completely indistinguishable, the topological degenerate state must further express the collective character of distinct topologically degenerate individuals.
In my view, the purpose of forming a physical concept is not to highlight differences from existing concepts, but to explain why a particular special state exists among known material phenomena and to give it a physical understanding. Physics already has the concept of strongly correlated electronic systems, which grew out of a series of experimental discoveries in condensed matter physics after the 1980s, such as copper oxide superconductivity and the quantum Hall effect. Although no fully satisfactory theory of strong correlation exists, these systems have been found to share certain quasi-two-dimensional characteristics: in two-dimensional systems the kinetic energy and Coulomb repulsion of the electrons are approximately equal, which closely resembles the marginal state I defined earlier. I will therefore classify strongly correlated systems with ground-state degeneracy as topological degenerate states: superconductivity, superfluidity, and the quantum Hall effect all belong to this class. Below, I use the topological degenerate state to give an alternative analysis of superfluidity.
Landau's phenomenological theory of superfluidity won a Nobel Prize. Although this theory explains the frictionlessness of the superfluid and the specific heat curve of liquid He using the phonon and roton energy spectra, it cannot explain - and seems intentionally to avoid - the Rollin film and the fountain effect. In fact, the Rollin film creep effect was discovered as early as 1939, before Landau's superfluid theory of 1941. Later, two other Nobel laureates, Onsager and Feynman, proposed the concept of the quantum vortex in superfluids, trying to give a more quantitative physical description of Landau's roton state, but they produced no better results than Landau's theory, and of course also failed to explain the creeping film and fountain effects that appear to violate the second law of thermodynamics. When I studied superfluid theory, I felt that Landau's description was problematic: how could the entropy of the superfluid state be taken as zero? A state with zero entropy should spontaneously transition to a thermally disordered state of larger entropy.
The following experimental phenomena of liquid helium are worth noting. After the temperature drops to the liquefaction point of 4.2K, the resulting state, HeI, boils permanently like boiling water. This is itself puzzling: hot water removed from its heat source does not keep boiling. As the temperature drops further to the λ transition point of 2.17K, HeII forms and the boiling stops immediately; the Rollin film creep effect also appears in this temperature range. If the film creep that appears after the λ transition is understood from the perspective of non-equilibrium statistical physics, it should not be a disorder-order phase transition but the spontaneous formation of a self-organized structure far from equilibrium - HeII reflects the system's attempt to transform a kind of boiling vitality into ordered self-organized energy, since a simple disorder-order transition exhibits no energy drive. For this reason I will propose, for the topological degenerate state, a macroscopic understanding in terms of classical gravity and laminar flow, and a microscopic understanding in terms of quantum "rigid clusters".
Rollin film creep effect: the film climbs up along the rim of the bowl until all the liquid drips out of the bowl. This picture comes from Professor A. Leitner's 1963 demonstration video on YouTube ( https://youtu.be/-7PNacL4n8g ), in which Professor Leitner points out that this violates the second law of thermodynamics by doing work from a single heat source. The picture below shows the fountain effect: a fountain forms when the bottom of the liquid helium container is illuminated.
First, let me briefly explain from the perspective of classical gravity why HeI boils between the liquefaction temperature of 4.2K and the λ point of 2.17K. There are two reasons. First, the two 1s electrons of He form a spherically symmetric, saturated shell, so there is only a weak Coulomb repulsion between different He atoms, unlike the mutual attraction of chemical bonds between any other molecules. Second, owing to the difference in gravitational potential energy, the average energy of He atoms at the bottom and top of the container differs, just as it is colder at high altitude. Liquid He exists only over a range of a few kelvin, and its average kinetic energy is extremely low - low enough, I argue, to be comparable to the gravitational potential energy difference on the scale of the container - so thermal convection forms under gravity. The boiling of liquid helium thus follows the same principle as boiling water: the molecules at the bottom have higher kinetic energy, and gravity drives thermal convection. Next, I will use the concept of the synergetic state to give a clearer description.
This begins with why the He atomic system undergoes a gas-liquid phase transition at 4.2K at all. Ordinary matter liquefies and condenses because of the attractive force of chemical bonds; but there is only repulsion between He atoms, so how can a gas-liquid transition occur? As I pointed out earlier, it is driven by the iso-energetic entropy drive in momentum space under the action of the entropy force. This drive lets particles with similar energies form clusters - synergetic states of similar energy. Such clusters are not a concept in coordinate space but in momentum space: in coordinate space, liquid He atoms have only Coulomb repulsion and can hardly present an observable ordered state, while in momentum space the entropy force drives the atomic system to form synergetic states under mean and variance constraints. Gravity then splits such a synergetic state into a kinetic-energy density difference between the bottom and top of the container. HeI in the synergetic state therefore differs from a state of completely disordered thermal motion; completely disordered Brownian motion would not boil.
Next, why does the HeII system stop boiling after the λ transition and exhibit the Rollin film creep effect? Superfluid HeII cannot be held in ordinary containers; it must be held in special high-density materials with extremely small capillary scales, otherwise the superfluid seeps out through the container's capillaries. This suggests to me that the sudden end of boiling is not due to a sudden decrease in the system's energy, but more likely to the sudden conversion of the energy of individual atoms into the "resonant motion" of vertical "rigid clusters" aligned with gravity. Macroscopically, such a superfluid resembles laminar flow, except that it does not flow in a fixed direction but "resonates" in place. Laminar flow is also called steady flow; online you can find videos of laminar flow that show hardly any visible movement. This steady laminar flow may be why HeII stops boiling, and of course also the reason for the creep and fountain effects.
The microscopic counterpart of this laminar picture of "resonant motion" under gravity is the quantum "rigid cluster". "Rigid clusters" are of course one embodiment of Landau's roton state and of the quantum vortex state imagined by Onsager and Feynman. But the quantum vortex picture seems to lack imagination: it closely resembles the quantized orbits Bohr constructed for the hydrogen atom in his early years, a quasi-classical image abandoned after the concept of the wave function was established. For the quantum "rigid cluster" with its "resonant motion", my idea instead involves the indistinguishability of quantum indistinguishable states, rather than the phase character of quantum vortex states - which first requires describing its physical characteristics. This is not a structural description in spatial geometry; it must start from the minimum of the action TV of the repulsive system. "Rigid clusters" raise three important questions: first, how the λ transition differs from ordinary gas-liquid-solid transitions; second, the cause of cluster formation; and third, the difference between their indistinguishable state and the synergetic state.
Let me first discuss the formation of "rigid clusters" in the λ transition. The positive-marginal and marginal-negative phase transitions based on the evolution parameter, described earlier, come close to describing the gas-liquid transition of liquid He. But the marginal-negative transition differs greatly from the λ transition, owing to the different sign of the interaction energy V. From the perspective of the minimum of the action TV, the He atoms inside a "rigid cluster" increase both their kinetic energy T and their repulsive energy V as they approach one another, but evolution drives the system to the minimum point of the action, forming a "rigid cluster". This has two major consequences. The first is that each time a stable "rigid cluster" forms, V increases faster than T, so heat is released to the system - the same physical principle by which nuclear fusion in the sun must release energy after forming stable helium atoms. Hence, as the λ transition is approached from high temperature toward 2.17K, the specific heat shows an almost vertical peak, as in the figure below.
The second consequence concerns whether a "rigid cluster" has quantum excited states like ordinary atoms and molecules. Just as the alpha particle analyzed earlier has no excited states, any fully homogeneous subsystem has no excited states and presents only a unique energy state. The physical reason is that any energy-degenerate, topologically degenerate subsystem has the same information entropy at equal energy. The total energy of course differs with the number of He atoms a "rigid cluster" contains, but clusters with the same number of atoms are fully homogeneous and indistinguishable. This shows that information-entropy maximization must also act as a constraint that minimizes the system's total energy. If the maximization of disorder entropy under a total-energy constraint is taken as entropy-energy criterion I, then the minimization of the system's energy under an information-entropy constraint can be taken as entropy-energy criterion II. This was already reflected in the mathematical argument for the entropy-energy criterion given earlier; the naming is inspired by HeI and HeII above.
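Criterion I, maximal disorder entropy at fixed mean energy, is the standard maximum-entropy result whose solution is the Boltzmann distribution, and it can be checked numerically. The three-level energy ladder, the inverse temperature beta = 1, and the perturbation size below are arbitrary illustrative choices.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (natural log): H = -sum p_i * ln(p_i)."""
    return -sum(x * math.log(x) for x in p if x > 0)

energies = [0.0, 1.0, 2.0]   # assumed three-level ladder
beta = 1.0                   # assumed inverse temperature

# Boltzmann distribution: the entropy maximizer at fixed mean energy.
weights = [math.exp(-beta * e) for e in energies]
Z = sum(weights)
boltzmann = [w / Z for w in weights]
mean_e = sum(p * e for p, e in zip(boltzmann, energies))

# Perturb along the direction (d, -2d, d), which preserves both the
# normalization and the mean energy for this particular energy ladder.
d = 0.01
perturbed = [boltzmann[0] + d, boltzmann[1] - 2 * d, boltzmann[2] + d]
assert abs(sum(perturbed) - 1.0) < 1e-12
assert abs(sum(p * e for p, e in zip(perturbed, energies)) - mean_e) < 1e-12

# Criterion I: any same-energy deviation from Boltzmann lowers the entropy.
assert shannon_entropy(perturbed) < shannon_entropy(boltzmann)
```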
Now the second point. Once the "rigid cluster" is fully homogeneous, the meaning of "resonant motion" appears. This differs from the Brownian motion of the boiling state: after the λ transition, like laminar flow, the cluster moves only vertically, along the direction of gravity. This is required for the fully indistinguishable state of He atoms to maintain exchange symmetry under gravity. Could there be inclined motion within the cluster? No: since the gravitational potential energy of each He atom converts into its kinetic energy, atoms at different heights could not then form a fully indistinguishable state of absolutely equal energy with exchange symmetry. The cluster must therefore be confined to a two-dimensional plane perpendicular to gravity, moving only horizontally within that plane or vertically as a whole. This guarantees that the energies of all He atoms in the "rigid cluster" are quantum-homogeneous.
Furthermore, the phonon and roton energy spectra given by Landau, as I understand them, actually reflect two types of spectral structure of the He-atom "rigid cluster" in momentum space: motion up and down along gravity, and planar motion perpendicular to gravity. These are not two kinds of quasiparticle; both are expressed in the "rigid cluster" spectrum, which is a major revision of Landau theory. The planar-motion spectrum reflects the "quantum vortex" spectrum formed when He atoms are spontaneously compressed in space and the interaction energy V increases. The spectrum of "resonant motion" along the direction of gravity depends on the height of the gravitational potential energy. In fact, if the "rigid cluster" is understood in momentum space, it should be expressed through a Fourier transform of real space: in the momentum space obtained by Fourier-transforming the gravitational potential, the energy is proportional to the momentum p, forming the phonon spectrum. The planar vortex motion of "rigid clusters" may involve different numbers of He atoms and may therefore present different roton-state spectra.
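For comparison with the reinterpretation above, the standard Landau two-branch spectrum yields a critical velocity v_c = min over p of ε(p)/p of a few tens of m/s. The sketch below uses approximate textbook parameter values for HeII (sound speed, roton gap, roton minimum, effective mass); it is an order-of-magnitude check, not a fit.

```python
# Landau's two-branch HeII excitation spectrum and the critical velocity
# v_c = min over p of epsilon(p)/p. Parameter values are approximate
# textbook numbers, used only for an order-of-magnitude estimate.
KB   = 1.380649e-23       # Boltzmann constant, J/K
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_HE = 6.6465e-27         # He-4 atomic mass, kg

C_SOUND = 238.0           # phonon branch slope (sound speed), m/s
DELTA   = 8.65 * KB       # roton gap, J
P0      = 1.92e10 * HBAR  # momentum at the roton minimum, kg*m/s
MU      = 0.16 * M_HE     # roton effective mass, kg

def roton(p):
    """Roton branch: epsilon = Delta + (p - p0)^2 / (2 mu)."""
    return DELTA + (p - P0) ** 2 / (2 * MU)

# Grid-search epsilon(p)/p around the roton minimum; the phonon branch
# contributes epsilon/p = c everywhere, so v_c = min(c, roton minimum).
ps = [P0 * (0.5 + 1.5 * i / 10000) for i in range(1, 10001)]
v_c = min(C_SOUND, min(roton(p) / p for p in ps))

print(f"Landau critical velocity ~ {v_c:.0f} m/s")  # tens of m/s
```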
Third, each "rigid cluster" embodies the information-entropy maximization of a topologically degenerate subsystem, and all these subsystems together must constitute a large system in the topological degenerate physical state. The HeII formed after the λ transition is, however, not composed entirely of "rigid clusters"; the synergetic state of the HeI phase described above also persists, and the relationship between the two physical states must be clarified. "Rigid clusters" are subsystems within the large topological-state system, whereas the synergetic-state individuals of HeI are independent He atoms that form no clusters. How can they coexist with the fully indistinguishable state of the "rigid clusters" after the λ transition? The reason is that the "rigid clusters", as steady states of entropy maximization, sit against the thermally disordered background of individual synergetic-state He atoms and reach thermal equilibrium with it: forming a "rigid cluster" releases heat to the system, and its disintegration into independent He atoms absorbs heat - the two subsystems achieve coexistence in thermal equilibrium. The same holds for superconducting electrons and ordinary electrons in a superconductor.
Therefore, the "rigid cluster" subsystems tend to form a large system in the topological degenerate state, in which the overall kinetic energy of each "rigid cluster" subsystem is the same. This realizes the information-entropy maximization of entropy-energy criterion II at the second, system level, and it can explain the creep effect and the fountain effect. Both effects reflect that the ordered-entropy maximization brought about by gravity, or by the quantum-classical phase transition, exceeds the disordered-entropy maximization associated with heat release and absorption. The creep effect shows that increased gravity promotes "rigid clusters" on the same horizontal plane to form a topological degenerate state, so that the liquid helium in the bowl converts all its HeI into HeII of increased gravitational potential energy, which then drips entirely over the rim of the bowl. The fountain effect shows that "rigid clusters" at the bottom of the container absorb photon energy to form a completely iso-energetic topological degenerate state, and then spray out as a fountain in a higher-energy, iso-energetic state of ordered-entropy maximization.
The above physical explanation is entirely different from that of Landau's superfluid theory. It reflects that the concept of topological degeneracy carries the meaning of quantum homogeneity both at the level of the homogeneous subsystem and at the level of the system as a whole. At the subsystem level, the "rigid cluster" differs from ordinary atoms and molecules. Any atom or molecule, as an individual in a statistical system, can be approximated as a rigid sphere at low temperature, but at high temperature it forms quantum high-energy states; for example, the Lamb shift of the hydrogen spectrum is measured with hydrogen at a high temperature of about 2000K. A "rigid cluster" of the topological state, by contrast, presents only a rigid degenerate level, though rising temperature destabilizes it and decomposes it into ordinary He atoms. At the system level, the meaning is that all "rigid cluster" subsystems converge in energy and form an integrated topological state. This is the physical reason why the topological-state system can violate the second law of thermodynamics: information-entropy maximization at the system level.
The concept of the topological degenerate state should also bring a new physical understanding of the system's energy conversion. Below, I briefly point out that it would also revise the BCS theory of superconductivity: Cooper pairs cannot be composed of two electrons with opposite momenta on the Fermi surface, but should instead be two electrons with nearly equal momenta, which shuttle between momentum space and the coordinate space of the superconductor's surface to form Cooper pairs. Only then can the reduction of the system's energy have a stable and continuous effect. For this reason I propose a difference-wave mechanism of superconductivity: p=|p↑ - p↓| constitutes a quantum-homogeneous difference-wave state on the Fermi surface in momentum space. In such a difference-wave state, the energy and momentum of every Cooper pair of electrons relative to their center-of-mass frame must be strictly equal, and their spatial positions must lie on the surface of the superconductor. Thus the ordinary electronic states exist only in the bulk of the superconductor, while the superconducting electrons sit on its surface and form topological degenerate states that quantum-tunnel into one another.
The physical picture of superconducting electrons quantum-tunneling into one another comes from my early conjecture that geomagnetic reversal is also a kind of mutual tunneling, though building that model is likewise hindered by insufficient understanding of its microscopic mechanism. Moreover, high-temperature superconducting materials all have quasi-two-dimensional character, which I take to come from some quantum-tunneling resonance effect. The system view is that superconducting electrons and ordinary electronic states spontaneously reach thermal equilibrium, and the disintegration of one Cooper pair must be accompanied by the creation of another with no energy difference - the superconducting electron state lowers the system's energy while preserving its entropy. This is the physical reason the superconducting circulation persists and the internal magnetic field is expelled, exhibiting the Meissner effect. It also shows that superconductivity is essentially a self-organized structure; only because this is an energy-conserving rather than a dissipative system does it lack the character of being far from equilibrium. For reasons of space I will not elaborate here, but when introducing the concept of two-level evolutionary states in the next section I will make some further analogies.
Finally, one more point to close this section. The three self-organized states above - the cooperative tunneling state, the quantum entangled state, and the topological degenerate state - of course reflect my understanding from the perspective of an evolutionary platform driven by the system's entropy force. But this does not mean such a self-organized state must, like the human body, form an independent temperature system different from its environment. In my understanding, self-organization driven purely by iso-energetic entropy maximization needs no maintenance by self-energy dissipation or external energy input. Only a far-from-equilibrium system continuously supplied with external energy, or spontaneously generating energy - that is, a dissipative system - can have a temperature deviating from its environment, as living organisms and stars do. Even a self-organized material state driven by iso-energetic entropy maximization must remain in thermal contact with its environment at the same temperature; at most its specific heat differs, as the superfluidity and superconductivity discussed above already show.
5. The bipolar evolutionary platform of living systems - starting from bipolar philosophy and the economic network platform
As mentioned above, I returned to academic research in physics in 2013, but why did I not, in the end, apply to rejoin academia? The most important reason is that I feel my ideas may be incompatible with the current scientific community and have deviated from normal scientific thinking. Normal scientific thinking seeks rational analysis based on logical connections between individual things: first it attempts to construct a strict equation-based description of interrelated concepts or things; if an equation fails, it turns to a probabilistic description; and if that also fails, the things are deemed unconnected, giving rise to the concept of uncertainty. In physics this manifests as the uncertainty relation of quantum mechanics. In economics, the concepts of risk and uncertainty formed even earlier, and uncertain projects cannot obtain loans. Such uncertainty is still more evident in the description of inheritance and variation in living phenomena.
However, after thinking repeatedly about the seesaw model proposed by Tang Chao, and combining it with the concept of the topological degenerate state discussed in the previous section, I came to think that, from the perspective of physics, a concept of an evolutionary platform for living systems should be constructed in biology. This goes beyond the deterministic and probabilistic descriptions of normal scientific thought described above. The evolutionary platform embodies a kind of holistic systems thinking characteristic of Chinese culture. Such holistic thinking is common in daily life - in a TV drama, a couple may quarrel over a trifle whose real cause is accumulated conflict in the other person's heart that has nothing to do with the trifle itself - but such artistic expression runs contrary to scientific thinking. Scientific analysis must exclude other factors and construct causal relations only between specific, strictly defined concepts. Holistic systems thinking is therefore hard to make precise, running counter to a scientific thinking that can be made precise and quantitative.
But my analysis over the years suggests the contradiction between holistic thinking and scientific thinking may be reconcilable, though it requires a new understanding of the evolutionary-platform concept above. As mentioned, a platform concept exists in computer and information science, but not in life science, at least not in molecular biology. Looked at another way: if we regard all laws of nature as system laws built on an evolutionary platform, then the equation descriptions and probability descriptions of existing physical science may be just two extreme special cases, of systems dominated respectively by energy and by entropy. The system law on the evolutionary platform then reflects the projection of the system's energy and entropy onto different times and spaces. A system in which energy and entropy are of "equal strength" might be described by a cellular automaton in discrete time, which would also be a sharpening of the seesaw model. In this way the Western scientific view and the Chinese holistic view can be integrated.
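As one concrete illustration of what a discrete-time cellular-automaton description looks like, here is a minimal elementary cellular automaton. The choice of rule 110, the lattice size, and the single-seed initial condition are all arbitrary; this sketches the kind of update scheme the text gestures at, not any specific seesaw model.

```python
# Minimal elementary cellular automaton (rule 110, an arbitrary choice),
# updating a ring of binary cells in discrete time steps.
RULE = 110

def step(cells, rule=RULE):
    """One synchronous update; neighborhoods wrap around (periodic ring)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighborhood as a number 0..7, then look up
        # the corresponding bit of the rule number.
        nb = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> nb) & 1)
    return out

cells = [0] * 31
cells[15] = 1                       # single seed cell
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```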
For this reason, I will use a special style in this section to express my thinking process. From the late 1990s to 2013 I left academic physics and tried to analyze the economic system using the complex networks of physics, proposing that the economic system has two major characteristics: the scale-free network and the central star network. This thinking comes from my bipolar philosophy. In the first and second parts of this section I will therefore lay out my philosophical and economic reasoning and argue that the evolution of the economic system comes from an energy drive based on contracts and an entropy drive based on non-contracts, leading to the concept of a bipolar evolutionary platform for describing the economic system. It was on this economic foundation that I constructed a new understanding of the evolution of life, so the third part of this section will further discuss how I "combine" Wen Xiaogang's topological order and Tang Chao's seesaw concept into one, forming a two-level evolutionary platform for living systems.
5.1 Equation description, uncertainty and the current AI era - from the perspective of regular cognition and inertial thinking
The current system of scientific thought seeks the laws of material motion through connections between individuals, whereas the concept of a system evolutionary platform proposed above makes an overall analysis based on the total energy and entropy of the system rather than on individual connections. This concerns the perspective from which we understand laws. Yang Zhenning once said that in his university education in China he learned the deductive method, and in the United States the inductive method. Deduction, from the general to the particular, is good for transmitting knowledge; induction, from the particular to the general, is clearly better for discovering laws. The observation-paradigm analysis framework proposed in this article, however, belongs rather to deductive reasoning, and it also questions the existing system of scientific thought: it holds that what drives the evolution of all things in the universe is expressed in a certain criterion model rather than an equation model. I will not analyze this in depth here; next I will only discuss my doubts about the existing scientific system.
Let me start with my childhood, when everyone had to study Chairman Mao's works. We could recite the "Three Old Articles" when young, and in the upper grades of elementary school began to study "Where Do Correct Ideas Come From?". The article says that people's correct thoughts can only come from social practice - from the three practices of the struggle for production, class struggle, and scientific experiment. This is of course very reasonable. However, my parents had a colleague who often visited our home. My grandmother was at first very polite to him, but later heard that he was divorced and immediately decided he was not a good person: how could our family befriend a divorced man? From then on she would not speak to him when he came. This set me thinking even as a child: were my grandmother's thoughts correct? If correct thoughts can only come from social practice, the view that "divorced people are not good" has obviously never been tested by experiment and can hardly be correct; it is merely a moral notion my grandmother inherited from the previous generation. I call this inertial thinking.
Which parts of the inertial thinking handed down in human society are correct, and which are wrong? Moreover, how is such inertial thinking formed, passed down from generation to generation, and accepted by later generations? People's ideas tend to follow wherever the economy is developed and wealth is concentrated. This must have had its reasons in a given historical period, but as an idea continues to spread and evolve, it may turn into an absurd one. Take The Protestant Ethic and the Spirit of Capitalism, written by the German sociologist Max Weber in 1905. The so-called Protestant ethic is really just an ethic in the economic sense: the claim is that Protestant Christianity emphasizes moderation in consumption on the one hand and the value of voluntary labor on the other, and that this spirit favors economic development. By such inertial thinking, it would seem that all mankind must spontaneously convert to Protestant Christianity in order to achieve social and economic prosperity. The subsequent development of human society was obviously otherwise: the economic take-off of Japan, the Four Asian Tigers, and especially China thoroughly refutes any necessary connection between economic development and religious ethics.
Therefore, there is no objective standard for judging whether something is right or wrong. It can only depend on whether a certain view is generally accepted at the time when it was formed. Whether a view can be generally accepted depends on the analysis and argumentation of the scientific spirit based on rationality created by Greek civilization. The scientific spirit is not based on the inertial thinking of following the crowd, but pays more attention to the certainty of knowledge itself, rather than practicality and utilitarianism. It is necessary to distinguish right from wrong through the logical deduction inherent in the evolution of things. The landmark event of the general acceptance of the scientific spirit was Newton's success in creating classical physics in the 17th century. Such success has not only spread to natural sciences such as chemistry and biology, but now people also call the study of social issues social sciences, such as economics and political science, in addition to philosophy and law. Scientific thinking has replaced unscientific inertial thinking. But a new question I want to raise in this article is whether today's scientific thinking has also become a kind of alternative inertial thinking, which also restricts the progress of human society?
This question is the focus of my analysis in this section. To that end, let me first propose the concept of equation description within the general framework of scientific thinking. It reflects how people first try to describe scientific laws deterministically; if the deterministic description fails, they turn to probability theory; and if probability theory is still insufficient, they arrive at the concept of uncertainty and can no longer give a scientific description. Equation description and uncertainty description form the two poles of scientific analysis, which is most obvious in physics. Classical Newtonian mechanics can be described by deterministic equations of motion. The microscopic explanation of thermodynamic systems, however, had to resort to probability theory, even though people at the time had no clear conception that matter is composed of molecules and atoms. Next, beyond probability theory, the quantum mechanical wave-function description of atoms had to introduce what people today express as the uncertainty relation. The emergence of the concept of uncertainty marks the blind spot of scientific description.
The progression from determinism to probability theory and then to uncertainty is also reflected in biology. Since Mendel began studying heredity in the 19th century, people have formed the concepts of recessive and dominant genes. Genes determine biological traits, which seems deterministic. Yet, as the Chinese saying goes, a mother may bear nine sons and each will be different; can we say this is all determined by genes? It may reflect that inheritance also produces gene mutations, and that human behavior is also shaped by the environment, which requires probability theory to describe. Gene mutations have no directionality at all, only probabilistic randomness, which seems to explain Darwin's theory of evolution perfectly. Before I turned my attention to biology, however, I had not realized that biology also contains a concept of uncertainty. But non-coding DNA, jokingly called "junk DNA" in current biology, is believed to have played a role in the past that was later superseded in evolution, and this implies a large factor of uncertainty.
The seesaw model also gave me a shock of uncertainty: does the seesaw of two inducing forces likewise reflect uncertainty in the direction of cell-fate change? The reason I attach such importance to the concept of uncertainty is that, apart from the strong element of uncertainty in my early geomagnetic reversal model, it also stems from my attention to the evolution of economic systems after the 2008 financial crisis. Had Tang Chao's seesaw model not drawn me back to physics, I would still be trying to study why the financial system collapsed from the perspective of complex networks. In fact, the concept of uncertainty matters far more in economic systems than in physics or biology. A simple example: a real-estate developer can easily obtain a bank loan at the land-purchase stage, and later the home buyers can also obtain loans easily after paying a deposit, but the risk of the intermediate stage cannot be assessed; that is uncertainty, and funding there can only come from venture-capital institutions. Economics has long put the concepts of risk and uncertainty to practical use.
The above descriptions in terms of determinism, probability, and uncertainty bear on how we understand the existing scientific system: they mark the boundary between where equation thinking is usable and where it is not, and likewise the boundary between where scientific description is testable and where it is not. Whatever can be described by equations, whether deterministic equations or the conclusions of probability theory, can be verified by rigorous experiments; that goes without saying. But if the relationship between things is judged to be uncertain, can that also count as a law? No real notion of experimental testability can be built on it: an experiment showing that two quantities are unrelated, or have no causal relationship, is not completely meaningless as a scientific conclusion, but its meaning is very limited. The existence of uncertainty can really only be regarded as a blind spot of scientific description, which usually does no harm. In today's AI era, however, to borrow a popular phrase, "here is where the problem lies."
In big-data analysis, descriptions of uncertainty are easily conflated with equation descriptions in the deterministic and probabilistic sense. In fact, many human decisions now come from big-data analysis, and the data themselves do not distinguish certainty from uncertainty; over-reliance on big data may therefore bring unpredictable consequences. AI research and development currently runs in two directions. One is the general-purpose direction: AI robots can sweep floors and wash dishes, still clumsily, but they will improve. This direction also extends to literary creation; DeepSeek's poetry probably exceeds that of 99% of people. Yet this remains merely instrumental, an extension of human organs. The other direction is to let AI make decisions in place of people through big data. Treating AI as a "leader" demands great caution. It is no great problem for a company to rely on AI analysis for financial planning. But if humans decide that their own brains are inadequate and that AI must direct the future behavioral decisions of human society, what problems will that bring?
Hand the whole future of society to AI, and even a country's president becomes unnecessary; in decisions now left to, say, the president of the United States, AI will surely outperform anyone. Many people hold this view. For example, when the YouTube commentator and former CCTV host Wang Zhian discussed it, most of the comments under his video agreed. As history has arrived at this era, I think again of my grandmother's inertial thinking from my childhood. Her view that "divorced people are not good" was of course limited to her era. Human society has progressed: even in India, where arranged marriages persist and the divorce rate is extremely low, discrimination against divorced people has basically disappeared. But if big data had existed when my grandmother was young, would discrimination against the divorced ever have been corrected? It is terrifying to contemplate. AI big-data analysis may return humanity to the era of inertial thinking; AI may "kidnap" the human mind and keep it from progressing, and that would be a serious problem.
If the scientific outlook running from determinism to probability theory to uncertainty description has so far played a positive, progressive role in human development, then clinging to it, that is, letting humanity keep moving along the inertia of scientific thinking, may "kidnap" future humans all the way down this scientific road. This involves the paradigm problem of thought. The geocentric system constructed by Ptolemy became untenable as the number of its epicycles and deferents kept growing with new astronomical observations, and so Copernicus' heliocentric theory had to be born. That was a major paradigm shift in the history of science: from then on, people no longer took theoretical doctrine as speculation built on logical deduction, but sought to verify scientific views through actual observation. As science has developed to the present day, however, the scope of uncertainty description keeps expanding, and existing scientific thinking seems unable to cover the explosive growth of relationships among uncertain things. Does scientific thinking also need another paradigm shift?
5.2 Needham Puzzle, Bipolar Philosophy, and Complex Network Platform of Economic System
To re-examine the scientific thought of mankind, we must start with the Needham Puzzle: until the 16th century, China's science, technology, and economy were more advanced than those of the contemporary Western world, so why was China later surpassed by the West? In essence, the formulation of the Needham Puzzle reflects the old and new paradigms of scientific research. Ancient China constructed no scientific theory, only technology that developed gradually out of experience, whereas Western society after the Renaissance developed scientific innovation grounded in experiment. People usually interpret the Needham Puzzle in terms of whether human thought was advanced or backward, or the corresponding levels of technological and economic development, but I explain it through the spontaneous evolution of the Chinese and Western economic systems, which yields a bipolar philosophy of scientific spirit versus Confucian ethics. On this basis, I further propose that the evolution of the economic system should be analyzed on a complex network platform.
The bipolar philosophy originally came from my thoughts during my time as a teaching assistant at Hunan University after I graduated from college in the 1980s. At that time, I lived in the home of my parents who were studying at Hunan Normal University. In 1985, the school asked my mother Zhang Guozhen to teach the course "Introduction to Western Philosophical Thoughts". The course was to first briefly introduce a certain school of Western philosophical thought, and then criticize these Western thoughts from the perspective of Marxism. Since then, our family's bookshelf has been filled with a large number of original works by various Western philosophers, and I often read these books in my spare time. What I felt very strange at the time was, where did all these different thoughts in human history come from? If we trace back to the source, all thoughts must come from the three major social practices of production struggle, class struggle and scientific experiment. Doesn't this also reflect that the social practices engaged in by people all over the world are also essentially different?
What puzzles me most is why ancient Greece produced philosophers and thinkers far out of proportion to its population. All later Western philosophy was built on the thought of these sages of the Greek period. Yet the formation of Chinese philosophy had nothing to do with ancient Greece, which seems to reflect differences in agricultural production. In the early days of any civilization, with backward information and transportation, a land was relatively isolated from the outside world, and its people had to survive without trade, relying on two types of agricultural products. One is grain, together with foods that keep well after drying, such as grapes; other vegetables and fruits are hard to store over winter and cannot serve as major cash crops, unless, in the tropics, they can be grown all year round. The other is livestock products: although meat is harder to store than grain, animals can be slaughtered at any time to ensure survival, and livestock can winter on hay that humans cannot eat. For this reason, early human agricultural economies around the world usually fell into just two broad categories: the grain model and the livestock model.
But ancient Greece is very special. Besides the fact that 80% of its land is mountainous, its Mediterranean climate is peculiar: little rain in summer but much in winter. Greece is therefore unsuited to growing grain, which needs abundant sunshine and rainfall together, but well suited to grapes and olives. These two, especially olive oil, which stores easily, became the main agricultural products of the Greeks, exchanged for the grain they could not produce in sufficient quantity. If the ancient early agriculture of the world's countries is divided by culture, then, there are two poles: grain culture and olive culture, the former least dependent on trade, the latter extremely dependent on it. Other cultures, including pastoral culture, lie in between. The diversity of world thought may thus arise from the various superpositions, fusions, and evolutions between these two cultural poles, but in essence it comes from the polarization of early human agricultural models. This shows that the formation of human thought, whether scientific or ethical, grows from the soil on which people depend for survival: the production and sales model of agricultural products gave birth to early human philosophy and ideas.
Furthermore, China, India and Southeast Asia, which are completely different from the Greek Mediterranean region, are very suitable for growing grains because of their mild climate and abundant rainfall. These countries are all at the extreme of grain culture, and their distinctive characteristics are reflected in China's authoritarian culture and India's religious culture. I will not elaborate on this, but just want to explain that ancient Chinese agriculture needed to strengthen state authority: land ownership must be divided into each family to be most efficient, and property rights and deeds must be strictly defined by the government. Furthermore, in order to resist natural disasters, the state needs to mobilize collective forces to build water conservancy projects, and to prevent the plunder of surrounding nomadic peoples, it is necessary to build the Great Wall. However, the extreme of grain culture has also formed the opposite Indian culture, which weakens the concept of the state but emphasizes religious ideas. The caste system makes people more resigned to suffering, and the society with lower production efficiency and more poverty will spontaneously form such a religious caste model. India's Brahman and Atman seem to be similar to China's unity of heaven and man, but the focus of Chinese philosophy is to pursue the principle of heaven in the present world, while the focus of Indian philosophy is the afterlife and liberation, and to give up people's attachment to the world.
The bipolar philosophy also reflects that system evolution has both bifurcation and merger. In terms of bifurcation, if we regard grain culture and olive culture as the result of the first bifurcation after the transition from hunting and gathering society to agricultural society, then as mentioned above, grain culture has further bifurcated into the two poles of Chinese and Indian philosophy. This is consistent with the description of the Book of Changes: "The Book of Changes has Taiji, which first gave birth to the two poles, the two poles gave birth to the four images, and the four images gave birth to the eight trigrams." The same is true for Western culture. The monotheistic Abrahamic religions, the source of Western religions, first bifurcated into Christianity and Judaism, and then gave birth to Islam, and then formed various religious sects. Each bifurcation was caused by the sharp opposition and irreconcilability of certain philosophical ideas, thus forming opposing poles. This is the bipolar philosophy I have summarized. Furthermore, such a bipolar philosophy also has a merger aspect: China's early grain culture still had various schools of thought, but in the later period, Confucianism was the only one. Olive culture also had various myths and speculations in ancient Greece, but now it has merged into scientific thinking based on reason and logic.
Corresponding to the bifurcation and merging of the above bipolar philosophy, an analogy with physics shows that human economic society has also evolved into a bipolar society. As mentioned above, an economic society can be described by a complex network. Transactions between nodes mean that the interaction energy V is positive: each edge between nodes represents an economic exchange to which both parties agree and which both value positively, showing that any spontaneous economic transaction benefits both parties. Mathematically, the evolution of such a positive-energy complex network initially has the expansion characteristics of a random graph with a growing number of edges. In early evolution there were no increasing returns to scale, so edges simply kept bifurcating. But once the system's energy grows and the effect of increasing returns to scale appears, things become more complicated, and two different complex network platforms form: that of olive culture and that of grain culture.
For any economic system, there is no transaction without surplus; hence early human economic society presents itself as an isolated system. The first networks people constructed were random, reflecting sporadic transactions by which no one could make a living. Such an early society has the character of a random network, akin to the small-world networks of complex network theory; without increasing returns to scale, its transactions yield only a Poisson degree distribution, describable by a random graph. Subsequently, olive culture tends toward a free-market system, the Barabási–Albert scale-free network mentioned above, while grain culture leads to an authoritarian society of imperial despotism, forming a star network with a single central node. The lazy-ant model mentioned above in Prigogine's description of the ant world also belongs to the star network, and the same is true of the bee world.
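The three network types contrasted above can be sketched in a toy simulation. This is an illustrative sketch with assumed sizes and parameters, not the author's model: a random graph yields narrow, Poisson-like degrees, preferential attachment (the Barabási–Albert mechanism) yields heavy-tailed degrees with hubs, and the star network concentrates all edges on one center.

```python
import random

random.seed(42)

def erdos_renyi(n, p):
    """Random graph: each node pair is linked with probability p,
    giving a narrow, Poisson-like degree distribution (no rich hubs)."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

def barabasi_albert(n, m):
    """Preferential-attachment sketch: each new node links to m targets
    drawn from a pool in which every node appears once per edge it has,
    so well-connected nodes attract new edges -> heavy-tailed degrees."""
    deg = [0] * n
    pool = list(range(m))  # seed nodes
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(pool))
        for t in targets:
            deg[new] += 1
            deg[t] += 1
            pool.extend([new, t])
    return deg

def star(n):
    """Star network: one hub connected to all other nodes."""
    return [n - 1] + [1] * (n - 1)

er = erdos_renyi(300, 0.03)
ba = barabasi_albert(300, 2)
st = star(300)
print("random graph max degree:", max(er))
print("scale-free  max degree:", max(ba))  # typically far larger: hubs emerge
print("star hub degree:", st[0])
```

Run repeatedly with different seeds and the contrast persists: the random graph's maximum degree stays near its mean, while the preferential-attachment network grows hubs, the mechanism behind increasing returns to scale.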
These two descriptions of the network platforms of early human economies also closely parallel the development of the Internet. In the early Web 1.0 era, countries moved from access to content, but since then two major trends have emerged: decentralization and centralization. The blockchain concept, which stores huge volumes of transaction data, must be decentralized, while the big-data models built in the AI era need to be centralized and cannot be decoupled from the world's ever-expanding data. Decentralization and centralization have thus produced two types of platform structure with different functions, and under them different economic rules form as well. I have thought a great deal about the analysis of such complex network platforms, but cannot elaborate here. Below, I first give two particular interpretations to answer the Needham Puzzle, and then analyze the problem of the economic network platform.
First, we need to understand the source of Western scientific thought through the path dependence described above; its economic foundation is the olive culture of ancient Greece. Ancient Greece produced so many philosophers and thinkers not because the Greeks had higher IQs, but because of the transactional character of olive culture. As human society has developed, the division of labor has grown ever finer: from the olive oil of olive culture, to the pin factory described in Adam Smith's "The Wealth of Nations", to the roughly 10^8-10^9 types of goods in today's world. People's survival depends more and more on transactions, which is precisely how today's economy inherits the transactional character of olive culture. To complete a transaction, people must build an equal relationship between the two parties and fair rules of law, and on that basis they further form a scientific spirit, grounded in reason and logic, that persuades by argument. The continuous strengthening of the scientific spirit in the Western world has as its material basis ever-increasing economic connection, reflected in the structural change of the network platform from a random network to a scale-free network.
Second, grain culture is the exact opposite of olive culture. In Chinese society, the self-sufficient peasant economy depended very little on trade. As the population grew, however, the contradiction of too many people on too little land became ever more acute. The landlord economy then also developed an increasing-returns-to-scale effect, but not as a commercial consequence of growing trade: the richer a landlord, the easier it was to save money, buy land, and grow richer still, while the poorer a peasant, the less he could withstand disaster, and in an emergency he had to sell his land to survive. Such a landlord economy could not give birth to a scientific spirit grounded in transactions; it could only form a Confucian ethics aimed at making the distribution of wealth more reasonable. The most important economic expression of Confucian ethics was the construction of more reasonable systems of taxation and land rent: the land-tenancy models developed by the Chinese are far richer than the land-rent models of Europe and America, and the tax system grew gradually more reasonable from the Single Whip Law of the Ming Dynasty to the land-tax reforms of the Qing Dynasty, though this required the strengthening of imperial autocracy. The same logic shows in the marriage system of one wife and multiple concubines, the inheritance system of equal division among sons, and so on. The corresponding network platform is a star network centered on imperial power.
Thus, as I understand it, the Needham Puzzle actually reflects the different network platforms produced by the evolution of two types of economic system. Judging by the evolutionary consequences of the two platform models, neither scale-free networks nor star networks can remain stable for long. The star network platform of China's early agrarian society generally collapsed about once every 300 years. The Western economy under the scale-free network seemed to develop steadily from the great voyages up to the eve of World War I, but social and economic contradictions then erupted, followed by the Great Depression of 1929-33 and World War II. The 2008 global financial crisis further demonstrated that both types of network platform become unstable through the uneven distribution of wealth caused by increasing returns to scale. Physicists are mainly concerned with the distribution of wealth, as in the aforementioned Yakovenko model. The stability of a society depends on the evenness of the distribution of social wealth, which is also reflected in the stability of the network platform structure.
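The Yakovenko-style view of wealth distribution mentioned above can be illustrated with a minimal kinetic exchange simulation, a sketch in the spirit of the Drăgulescu–Yakovenko model with assumed parameters, not the author's own calculation: agents start with equal wealth, random pairs repeatedly pool and randomly split their money, and even though total wealth is exactly conserved, an uneven distribution emerges spontaneously.

```python
import random

random.seed(0)

N, STEPS, M0 = 1000, 200_000, 100.0
money = [M0] * N  # everyone starts with the same amount

# Random pairwise exchange: two agents pool their money and split it
# at a uniformly random fraction; total money is conserved throughout.
for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pool = money[i] + money[j]
    eps = random.random()
    money[i], money[j] = eps * pool, (1 - eps) * pool

money.sort()
total = sum(money)
top_decile_share = sum(money[-N // 10:]) / total
print(f"total wealth: {total:.0f}")              # conserved at N * M0
print(f"top 10% share: {top_decile_share:.2f}")  # well above the equal-share 0.10
```

The steady state of this exchange rule is the exponential (Boltzmann-Gibbs) wealth distribution, which is exactly the physicists' point here: inequality needs no individual differences, only repeated exchange under conservation.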
With the advance of globalization in the 1980s and the disintegration of the former Soviet Union and Eastern Europe, the world economy seems to be moving toward a merger of the two platforms, scale-free network and star network. More than a decade ago I tried to study economic issues with complex networks from this perspective. Today the economic systems of all countries are driven by two forces. One is exchange between economic individuals in the production and sale of commodities, wealth growth driven by contractual energy, which has expanded from the local scope of Smith's pin factory to globalization. The other is reflected in the progressive tax systems adopted worldwide to enhance public welfare. This is not entirely a matter of redistribution: it also includes systemic collective actions such as the water-conservancy works and Great Wall of the grain culture described above, and today's carbon taxes against global warming. Although such exercises of public power are nowadays also called a social contract, they are not voluntary contracts between economic individuals; I call them non-contractual, entropy-driven behavior with which everyone must comply.
A characteristic of complex network platforms is that increasing returns to scale are bound to appear. Whether in the energy-driven free-market economy that produces the scale-free distribution, or the entropy-driven imperial autocratic economy with its star network, the spontaneous evolution of the system leads to an uneven distribution of wealth. An uneven system may still be stable, which reflects the role of financial capital in stabilizing the system. I once believed that financial capital is like the "blood" flow of a living system, bringing vitality and always tending toward stability. This comes from my early reading of the principle of preferential allocation proposed by Mr. Mao Yushi. A simple example: a production team buys a batch of fertilizer and applies it to N plots of land, some good and some poor. The grain output of plot i is a function P_i(x_i) of the amount of fertilizer x_i, where i = 1, 2, ..., N. The best fertilization plan is the point at which all the marginal outputs dP_i/dx_i are equal; in investment terms, equal investment-output ratios. The physical correspondence is that the temperature T = dE/dS of every subsystem is the same, which reflects self-organizing behavior.
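Mao Yushi's preferential-allocation principle, equalizing the marginal outputs dP_i/dx_i across plots, can be checked numerically. The production functions and numbers below are assumed purely for illustration: with diminishing returns P_i(x) = a_i*sqrt(x), where a_i measures soil quality, equalizing marginal products allocates fertilizer in proportion to a_i^2 and yields more grain than an equal split.

```python
import math

# Assumed toy production functions: P_i(x) = a_i * sqrt(x) (diminishing returns).
# Equalizing marginal products a_i / (2*sqrt(x_i)) across plots gives
# x_i proportional to a_i**2 (via the Lagrange condition dP_i/dx_i = const).
a = [1.0, 2.0, 3.0]   # three plots of increasing soil quality (assumed)
X = 90.0              # total fertilizer to distribute (assumed)

# marginal-product-equalizing allocation
w = sum(ai**2 for ai in a)
x_opt = [X * ai**2 / w for ai in a]
# naive equal split, for comparison
x_eq = [X / len(a)] * len(a)

def output(xs):
    """Total grain output for a given fertilizer allocation."""
    return sum(ai * math.sqrt(xi) for ai, xi in zip(a, xs))

print("optimal split :", [round(x, 1) for x in x_opt])
print("optimal output:", round(output(x_opt), 2))
print("equal output  :", round(output(x_eq), 2))
```

At the optimum every plot has the same marginal product, the economic analogue of every subsystem sharing one temperature T = dE/dS; any allocation that breaks this equality produces less total output.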
The above concept of physical temperature can also be introduced into complex network platforms. If the average number of edges connected to each node of the network, which is also called the degree of the network, is small compared to the total number of nodes, it belongs to a sparse network. My analysis is that the principle of preferential allocation will be spontaneously satisfied in a sparse network platform, and the temperature of each part of the network platform will tend to be the same, so the system is stable, which comes from the consequence of the self-organization evolution of the system. For a stable system, the government's non-contractual entropy force drive, that is, tax regulation, is beneficial and can bring about an increase in the degree of social consensus as a whole, which is also reflected in the fact that the temperature of each part of the system is the same and in a state of thermal equilibrium. However, with the continuous expansion of financial capital and the emergence of various investment portfolio products, people's investments are influenced by unrealistic and irrational expectations - the expected contract of non-real-time exchange will bring instability to the economic system. As a result, the balance of temperature in each part of the entire network platform is broken and the system may collapse.
It is from the above perspective that I analyze the evolution of the economic system, that is, studying the economy with a complex network platform. On this view, the significance of economics lies not in analyzing equilibrium, the problem of all parties spontaneously maximizing their own interests, but in demonstrating the stability of the system, which requires the temperatures of all parts of the economic network platform to be as equal as possible. In my personal view, economic behavior is the individual's own interest-maximizing choice, on which economists can offer no guidance; that is the business of life coaches. Economics can only analyze the system as a whole, and here I can only sketch the complex-network-platform analysis above. After seeing Tang Chao's seesaw model in 2013, I refocused on physics and biology, but the economic thinking above still propelled my thinking about biology, extending the concept of the bipolar evolutionary platform from economic systems to living systems. That is why I have spent so much space on the economic description above.
5.3 Bipolar Evolutionary Platforms and Self-Organization of Life Systems
It is with the above thinking about the network platform of the economic system that the seesaw model allows me to re-examine the loop path and bifurcation path that have been constructed for the life process in the past, and thus construct the concept of a bipolar evolutionary platform. This reflects that both the loop path and the bifurcation path have self-organizing meanings. How are they formed and maintained? Next, I will first analyze the self-organizing characteristics of life evolution, and then I will analyze its physical reasons. This requires constructing the concept of system evolution under energy dominance and entropy dominance in coordinate space and momentum space, respectively, and then generating the concept of a two-level evolutionary platform, which comes from the "combination of two into one" of topological degenerate states and multipotential states. Furthermore, the meaning of the two poles here reflects the rationality of using cellular automata to describe system evolution when the energy and entropy forces of the system are "equally matched" - this is very meaningful for us to understand the essence of physical laws.
A. Revisiting the self-organizational understanding of the cyclic and bifurcated pathways of cell function
The self-organization of the cyclic path should be reflected in the evolution parameter swinging between p>0 and p<0, which reflects the irreversibility of life's evolution. For the folding of proteins such as insulin, I described it earlier using a scaling mechanism, which shows that each folding state reflects entropy maximization with equal probability. This carries no meaning of irreversibility, only a certain probabilistic randomness, and so does not qualify as self-organization. Furthermore, each sub-path of cell differentiation on the bifurcation path should also reflect irreversible biochemical reactions: unwinding the DNA double helix requires helicase, and replication of the daughter strand requires polymerase. In this way, the description of each bifurcation sub-path also belongs to the standard DNA→RNA→protein process, which likewise reflects the self-organization of the cyclic path. For this reason, the self-organization of the cyclic path I refer to can also be regarded as the embodiment of the various standard cycles of biochemical reactions in living organisms.
The average life span of human cells is about 2.5 years, while generating a certain protease may take only a microsecond. These irreversible evolutionary processes on very different time scales should all be regarded as self-organizing behaviors under the cyclic path of chemical reactions. From this point of view, we need to re-examine the conventional account of DNA damage and repair in molecular biology. People usually regard life as a machine that manufactures biological molecules: cells die on schedule, and DNA is damaged by the cell's internal metabolic activity or by external environmental factors and must be repaired. Understood through the cyclic path, this reflects that near the marginal state p=0 of the evolution parameter at a specific temperature, when the system is in the negative state p<0 it tends to maximize disordered thermal entropy and releases heat, and the system then evolves to the positive state p>0 and tends to maximize ordered entropy. In this way, the drive of energy and entropy automatically repairs damaged genes, which means that repair reflects the self-organizing process of metabolism.
Therefore, gene damage and repair are not a "failure" of machine operation; they too reflect the standard cycle of biochemical reactions, and still belong to the self-organizing process of alternating maximization of the two types of entropy on the evolutionary platform. The cyclic path also presents the following special cases. The first is the memory process of brain cells. This may be a state in which the cyclic path is "stuck" at p>0 or p<0, yet it is still a state of vitality, and the brain cells are not dead. A bistable circuit can express both 0 and 1, and the memory state of brain cells may arise from the same principle. The second is the common understanding of cell death: damage so large that it cannot be repaired spontaneously in fact reflects that the irreversible seesaw described above can no longer swing, just as a ferromagnet stops in its magnetic-domain structure. The third is a gene mutation that leaves the cyclic path unable to "brake", which is likewise a failure of the self-organizing process.
Secondly, if we use the professional terminology of molecular biology to understand the bifurcation path, we need terms such as cell division and differentiation, various regulatory factors, differentiation factors and stemness factors, and high and low gene expression. These terms do not reflect the cyclic path described above. The meaning of the bifurcation path for how life on Earth evolved was discussed in the previous article; here I will also argue that whatever in the life process cannot be described by the cyclic path should be understood through the bifurcation path. This is mainly because, from the standpoint of molecular biology and biochemical reactions, the coordinated process of cell division and differentiation with overall meaning is incomprehensible: it is not aimed at an individual DNA genome but must be reflected in the integrity of the life system, and can likewise be understood as the self-organizing behavior of life's evolutionary process. Next, I will demonstrate this from three aspects: intracellular, transcellular, and life-group patterns.
First, as mentioned in Section 4 above, both superfluidity and superconductivity are products of phase transition at a specific temperature: after the temperature crosses a certain phase transition point, the maximization of the two types of entropy values requires adjusting the ratio of ordinary fluids and superfluids, as well as ordinary electrons and superconducting electrons. This has always been understood as a phase transition in physics. But this involves the bifurcation of the system into two subsystems, which I think can also be regarded as self-organizing behavior, because superfluids or superconducting electrons must spontaneously maintain a certain number ratio. For this reason, if superfluids and superconductors are compared to a cell, this is obviously comparable to the division and differentiation pattern of cells. How is the differentiation process of each cell regulated? This is reflected in the self-organizing behavior of the internal level of the cell and the overall number distribution. From a physical understanding, this is driven by entropy forces from different material levels. Such entropy forces should not be completely determined by its DNA genes at the internal level of the cell, but should be reflected in the fact that there are various regulatory factors and differentiation factors inside the cell that control the expression of genes.
The control of gene expression by various regulatory factors and differentiation factors, what kind of physical image of self-organization does such a molecular biological description correspond to? This should be reflected in the quantum tunneling coordination of the bifurcation path, rather than the biochemical reaction of the loop path. In fact, the embryonic stem cells, the most basic of life, will eventually differentiate into various functional cells, such as brain cells or stem cells. The regulatory factors and differentiation factors at each step may be different. The seesaw model has shown that if both regulatory factors exist, or because of the effects of other microenvironments, stem cells will be in a state of pluripotency. My physical understanding of this is that if two or more genomes are to be expressed at the same time, DNA will form a topologically degenerate state of mutual quantum tunneling with higher energy (to be analyzed later), and the system evolution will be "stuck" on the p=0 evolutionary platform, resulting in a potential state of cell fate. Only when a regulatory factor is determined to be effective will the gene be expressed, so the seesaw will tilt, and the cell differentiation will be guided to a specific bifurcation path.
Second, the bifurcation path is also reflected in the fact that the above quantum-tunneling synergy has a cross-cell character, which has two meanings. First, cell differentiation obviously cannot be singly random; it must be strictly neighbor-ordered and strictly controlled in number, as is particularly evident in the division and differentiation of embryonic stem cells. The self-organizing meaning of such non-random evolutionary behavior is very clear, and it is reflected in neighbor order: after division, cells of the same type must be arranged together as neighbors. All brain cells are placed together as neighbors, and all liver cells likewise. If each cell divided and differentiated completely at random, with different cell types "stacked" together in disorder, how could a whole organism develop? This process is completely different from the randomness of protein folding. In addition, cell numbers must also be controlled: no one type of cell may grow without limit, each cell type in an organism is kept in strict proportion, and all number control must be strictly synchronized.
Third, the collective synchronization of the transcellular character is obviously not specific to the stemness of life but must be reflected in all cells, and refers not only to different cells within the same organism but possibly also to cells across different organisms. For this reason, I disagree with biologists' existing understanding of non-coding or "junk" DNA: a genome segment cannot be considered "junk" merely because no corresponding trait can be found for it. It is likely to reflect some cross-cell "resonance" based on quantum tunneling, which not only ensures the harmony of different cells within an organism but also guarantees the consistency of behavior in a biological group; this would reflect self-organizing behavior at the level of the group. These ideas of mine come from a popular biology report given by Professor Prigogine, the founder of dissipative-structure theory, at Beijing Normal University in 1986. I have never found the original literature, so I can only briefly recount it as follows.
The report said that an ant colony can be divided into two categories: diligent ants, which constantly carry food to the nest, and lazy ants, which do nothing but run around and look about. What use are the lazy ants? This puzzled biologists, so they removed all the lazy ants from the nest, and something remarkable happened: part of the originally diligent group automatically transformed into lazy ants, also running around and looking about. This made the group-biology researchers realize that the colony must contain both diligent and lazy ants: the former carry found food back to the nest, while the latter roam to keep discovering new food sources. This is indeed a good example of a system spontaneously forming a self-organizing structure, but one that cannot be fully described and understood by a simple physical model.
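Although, as just noted, no simple physical model can capture the biology, the ratio-restoring behavior in the anecdote can at least be caricatured in code. The sketch below is entirely my illustration: the 80/20 target ratio and the probabilistic switching rule are assumptions, not data from the report. Each ant switches role with a small probability whenever the colony's lazy fraction deviates from the target; after all lazy ants are removed, the colony spontaneously re-establishes the ratio, just as in the experiment described.

```python
import random

def step(roles, target_lazy=0.2, rate=0.1):
    """One update: ants switch roles so as to restore the target lazy fraction."""
    lazy_frac = roles.count("lazy") / len(roles)
    for i, role in enumerate(roles):
        if lazy_frac < target_lazy and role == "diligent" and random.random() < rate:
            roles[i] = "lazy"          # too few scouts: some workers switch
        elif lazy_frac > target_lazy and role == "lazy" and random.random() < rate:
            roles[i] = "diligent"      # too many scouts: some switch back

random.seed(0)
colony = ["diligent"] * 80 + ["lazy"] * 20
colony = [r for r in colony if r != "lazy"]   # the biologists remove the lazy ants
for _ in range(200):
    step(colony)
print(colony.count("lazy") / len(colony))      # hovers near the 0.2 target again
```

The point of the toy is only that a purely local, decentralized rule reproduces the collective restoration of the ratio without any "leader"; it says nothing about the actual mechanism in real ants.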
The above example simply illustrates the difference between organization and self-organization. If, after all the lazy ants were removed, the colony held a "meeting" at which a "leader" assigned different ants to different tasks, that would be organizational behavior. But judging from the colony's rapid response, this is not each ant following a work assignment; it is collective behavior pointed to by coding that exists in each ant's genes. For this reason, I personally think the prevailing understanding of non-coding or junk DNA is wrong: there should be some tacit cooperation between the DNA of different ants. Should this be linked to quantum many-body ordered entropy, that is, reflected as some quantum entanglement between different biological individuals? That is a bolder leap of imagination.
B. Construction of the Evolutionary Platform Concept: The Combination of Topological Degeneracy and Multipotentiality
The evolution parameter of the cyclic path comes from the "combination of two into one" of the positive-marginal and marginal-negative phase transitions. Below I give the concept of an evolutionary platform based on the bifurcation path, which comes from the "combination of two into one" of the topologically degenerate state and the multipotential state, so as to reflect that the system spontaneously constructs self-organization far from equilibrium. This is the significance of the evolutionary platform, and it is the core idea I most want to express in this article. If there is any revolutionary idea in today's physics, it may be this: system evolution based on energy and entropy will form a self-organized evolutionary platform at a specific temperature. For this reason, the existing physical analysis based only on interactions is wrong, because it does not take into account the ordered-entropy form of the system's entropy force. The concept of the evolutionary platform comes from three starting points of thought. I will begin with the concept of momentum space.
Let me start with my first thought. It may not be correct to regard momentum space as a merely virtual concept, because the order of our real space, which may also be called coordinate space, often corresponds to disorder in momentum space, and vice versa. A typical example: a crystal structure embodies order in coordinate space, yet when Einstein used a single vibration frequency to calculate the specific heat of solids, the result deviated considerably from experiment at low temperature; the Debye model succeeded by treating the atomic vibration modes in momentum space as disordered vibrations analogous to blackbody radiation. Conversely, for real particles under either Fermi or Bose statistics, the physical picture in coordinate space is completely disordered, but the concept of the Fermi sphere in momentum space obviously embodies a certain order. In the Bose-Einstein condensate first observed experimentally in 1995, the particles remain disordered in coordinate space, while the condensation into the zero-momentum state shows order in momentum space.
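The contrast between the two specific-heat models is easy to check numerically. The following is a minimal sketch using the standard textbook Einstein and Debye formulas, with temperature measured in units of the respective characteristic temperature and heat capacity in units where the high-temperature Dulong-Petit limit is 3 (these normalizations are my choices for the illustration): at low temperature the Einstein result collapses exponentially, while the Debye momentum-space treatment gives the much larger T³ behavior seen in experiment.

```python
import math

def c_einstein(t):
    """Einstein heat capacity in units of Nk, with t = T / theta_E."""
    x = 1.0 / t
    return 3.0 * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

def c_debye(t, n=2000):
    """Debye heat capacity in units of Nk, with t = T / theta_D (trapezoid rule)."""
    xmax = 1.0 / t
    h = xmax / n
    def f(x):
        # integrand x^4 e^x / (e^x - 1)^2; its limit at x = 0 is 0
        return x**4 * math.exp(x) / (math.exp(x) - 1.0) ** 2 if x > 0 else 0.0
    s = 0.5 * (f(0.0) + f(xmax)) + sum(f(i * h) for i in range(1, n))
    return 9.0 * t**3 * h * s

for t in (0.05, 0.2, 1.0, 5.0):
    print(t, round(c_einstein(t), 5), round(c_debye(t), 5))
```

At t = 5 both curves sit at the Dulong-Petit value near 3; at t = 0.05 the Debye result is about 0.029 while the Einstein result is of order 10⁻⁶, which is exactly the low-temperature discrepancy the paragraph above refers to.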
For this reason, I believe that the coordinate space reflects the local interaction energy of the system, while the momentum space reflects the entropy force characteristics of the system as a whole. The order and disorder of the two are complementary. The concept of evolution parameter p=ln|T/V|, which was originally based on the total kinetic energy T and interaction energy V of the system, only reflects the meaning of energy. If we further consider the complementarity of the coordinate space and momentum space mentioned above, it should also be reflected that the evolution of the system with p>0 is dominated by the entropy force of the momentum space, which better reflects the overall characteristics of the system, while the evolution of the system with p<0 is dominated by the energy of the coordinate space, which only has the local characteristics of some short-range interaction forces.
When I studied renormalization-group theory in my early years and calculated critical exponents of phase transitions, the simple Kadanoff block-spin exercises given in statistical-physics textbooks were all in real space, and their renormalization-group results were not accurate. Research papers, however, usually perform the renormalization-group calculation in momentum space after a Fourier transform, which comes closer to the numerical results from the partition function. I feel this also reflects some essential physical meaning: formulations that can be Fourier-transformed to momentum space are often closer to the overall character of the system at the marginal state p=0, and thus yield more accurate physical phase-transition points. This is also reflected in the two classes of models of ferromagnetism, the Heisenberg model and the itinerant-electron model, which embody two extremes: the former ignores the individual kinetic energy of the magnetic moments, while the latter has too much kinetic energy, deviating from the marginal concept.
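A concrete feel for real-space renormalization can be had from the simplest exactly solvable case. The sketch below (my illustration, not from the text above) iterates the standard decimation recursion K' = ½ ln cosh(2K) for the one-dimensional Ising chain, obtained by tracing out every second spin: every finite coupling flows to the trivial fixed point K = 0, which is the real-space way of seeing that the 1D chain has no finite-temperature transition. In two dimensions and above, simple block-spin schemes of this kind give only rough critical exponents, which is the inaccuracy referred to above.

```python
import math

def decimate(k):
    """One decimation step for the 1D Ising chain: trace out every other spin.
    Summing the middle spin gives exp(2K') = cosh(2K)."""
    return 0.5 * math.log(math.cosh(2.0 * k))

k = 2.0                      # a strong initial dimensionless coupling J/kT
flow = [k]
for _ in range(30):
    k = decimate(k)
    flow.append(k)
print(flow[:5])              # the coupling shrinks at every step
print(flow[-1])              # the flow ends at the trivial fixed point K = 0
```

Each application of `decimate` doubles the lattice spacing; the absence of any finite fixed point other than K = 0 and K = ∞ is the renormalization-group statement that order in the 1D chain survives only at zero temperature.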
My second thought comes from my experience studying dissipative-structure theory and synergetics. The ideological basis of Prigogine's dissipative-structure theory actually comes from the minimum-entropy-production theorem he proposed as early as 1945: since any chemical reaction in the linear regime satisfies the Onsager reciprocal relations, a linear system spontaneously evolves to a steady state of minimum entropy production, so a system with only linear deviations cannot form a stable structure far from equilibrium. Any self-organization far from equilibrium must therefore be a dissipative structure sustained by nonlinear forces and a supply of energy. I discussed the synergetic analysis of the laser at length in the previous article: Haken borrowed Landau's concept of the order parameter and introduced multiple order parameters into the laser system, arriving at the Slaving Principle: the slowly varying order parameters dominate the system's self-organized steady state, while the fast variables are eliminated adiabatically.
After carefully considering the ideas of Prigogine and Haken, I think their thoughts are also complementary: Prigogine's perspective is the minimum entropy production, but the dissipative structure theory he gave emphasizes that energy and nonlinear force drive will lead to self-organization. Haken's perspective comes from Landau's order parameter, which originally comes from the free energy expansion and reflects the meaning of energy more, but the formation of lasers must be reflected in the drive of entropy force after the input energy increases. For this reason, both theories start from energy or entropy force as an indirect drive of external forces to demonstrate how to achieve self-organization far from equilibrium. This is all considered from external factors. Since childhood, I have been familiar with this quotation from Mao Zedong in "On Contradiction": "External factors are the conditions for change, internal factors are the basis for change, and external factors work through internal factors." Why not consider it from the internal factor of maximizing the system's own entropy value? For example, the formation of life evolution should come from the self-organizing structure of internal factors driven by its own entropy value and energy, and the climate periodicity caused by the rotation of the earth is an external factor.
My third thought comes from comparing the evolution of life with the evolution of the economy. The economic system spontaneously formed by human society is obviously very comparable to the life system generated in the Earth's environment: both the human trading system and the evolution of life on Earth are self-organizing structures generated from nothing, and both are irreversible at the micro level. More importantly, when I try to describe the economic system with complex networks, it spontaneously reflects the local energy drive of voluntary contracts and the overall, non-contractual entropy drive, so that the system fluctuates near the marginal state p=0 of the evolution parameter. This seems to show that the economic system must be built on a platform: from humanity's early farmers' markets, to the wholesale-and-retail business-point model, to the global chain supermarket, to today's online shopping. The economic system built on this platform is described by complex networks, and its temperature keeps rising. It was the introduction of the concept of economic temperature in the previous article that reminded me of the comparability with life systems: life evolution also needs the concept of a platform.
The difference between the evolution of life systems and economic systems is that the temperature corresponding to life's evolution does not rise gradually but must be maintained at a specific temperature at the margin p=0 of the evolution parameter: animals are generally at constant temperature (a woman's body temperature rises slightly during ovulation), while active plants tolerate a wider temperature range. The cyclic path is reflected in the system swinging around the margin p=0, comparable to the fluctuation of supply-demand balance in the development of the economic system; both reflect localized energy and the entropy drive of the system as a whole. The platform of the economic system is built on a complex network. A bold analogy, then: should the evolution of the life system also be built on a special platform, so as to guarantee the irreversibility of the evolution-parameter seesaw on the cyclic path? This may be reflected in the spatial helical structure of the core genetic material DNA, together with the enzymes corresponding to its genome; the role of the enzyme seems to be the central fulcrum that determines the direction in which the seesaw operates.
The above three considerations yield the concept of the evolutionary platform of the life system. This should reflect not only my original meaning of the bifurcation path, that is, the evolution of life on Earth from nothing to something, but also the process by which a life develops and grows from embryonic stem cells; it differs from the concept of the evolution parameter, which carries only the meaning of biochemical reactions. Like the network platform of the economic system, which evolves from an initial random network into the star network of Chinese society and the scale-free network of European society, until all countries basically present a mixture of scale-free and star networks, the concept of the evolutionary platform also reflects the self-organizing behavior of the evolutionary process, driven jointly by some kind of energy and entropy. But this is not the "combination of two into one" of the two types of phase transitions on the cyclic path. The evolutionary platform should be formed by the "combination of two into one" of Wen Xiaogang's concept of topological order and Tang Chao's seesaw model: it is the basic platform for the orderly evolution of life, and it should present itself as a self-organizing structure far from thermal equilibrium with the environment and with its own independent temperature.
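The two network forms named here can be made concrete with a small sketch. The preferential-attachment rule below is the standard Barabási-Albert recipe for growing a scale-free network; the mapping of the two topologies onto particular societies is the author's, and the code only contrasts their structure: in a star network one hub carries every connection, while preferential growth produces a few large hubs over a heavy tail of small nodes.

```python
import random

def star(n):
    """Star network: node 0 is the single hub connected to everyone."""
    return [(0, i) for i in range(1, n)]

def preferential(n, m=2):
    """Barabasi-Albert-style growth: each new node attaches m edges,
    preferring nodes that already have high degree."""
    edges = [(0, 1)]
    targets = [0, 1]                     # each node appears once per unit of degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

def degrees(edges, n):
    d = [0] * n
    for a, b in edges:
        d[a] += 1
        d[b] += 1
    return d

random.seed(1)
n = 500
d_star = degrees(star(n), n)
d_ba = degrees(preferential(n), n)
print(max(d_star), max(d_ba))   # star hub has degree n-1; BA hubs are large but smaller
```

In the grown network most nodes keep a degree close to the minimum while a handful become hubs, which is the power-law signature of the scale-free case; the "mixture" of the two forms mentioned above would interpolate between these extremes.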
Next, I will further explain that I have revised the topological order proposed by Wen Xiaogang into a topologically degenerate state, and that it constitutes the basic physical picture of an evolutionary platform at an independent temperature. Given that persistent superconducting circulation is comparable to the vitality of life, I will temporarily set biology aside and continue from the superconducting difference-wave state p = |p↑ - p↓| of the previous article to develop the concept of the life evolutionary platform. I have revised the Cooper pair into a topologically degenerate state. Beyond the previous section's requirement that p↑ and p↓ move in the same direction with strictly equal energies in the center-of-mass frame, this also requires that the spatial position of the difference-wave state lie on the surface of the superconductor. The internal spatial structure of the superconductor then acts as a barrier through which all superconducting difference-wave states quantum-tunnel into one another. This tunneling barrier is a concept I have added to the superconductor; it is absent from BCS theory and is a physical property possessed only by the topologically degenerate state. It explains, on the one hand, why superconductivity shows the Meissner effect, and on the other, why superconductivity does not occur in all materials.
In fact, the biggest difference between the above topological-degeneracy picture of superconductivity and BCS theory is that superconductivity should be understood from the perspective of self-organization: the superconductor is composed of the subsystem of topologically degenerate superconducting electrons on its surface (reflecting the marginal concept) and the quantum barrier in its interior. If the difference-wave states formed by superconducting electrons can cross the barrier to form a self-organized equal-energy system, a superconducting state appears. The Meissner effect reflects that the entropy of an equal-energy ordered-entropy system is the same at any energy, so to minimize energy the system expels the magnetic-field energy from the superconductor's interior. If the quantum barrier is too high to tunnel through, the self-organized system collapses and the system reverts to an ordinary electronic state. This shows that it is the quantum barrier set by the lattice that determines superconductivity. The copper oxide superconductors discovered in the 1980s and the later iron-based superconducting materials both have quasi-two-dimensional layered characteristics, which may indicate that resonant quantum tunneling through the barrier is related to high-temperature superconductivity. Hydrogen sulfide, studied in recent years, requires extremely high pressure to superconduct, which may also reflect that high pressure lowers the energy needed to penetrate the barrier.
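The qualitative claim that a higher barrier suppresses tunneling while a lower one (as high pressure is imagined above to produce) enhances it can be illustrated with the textbook transmission coefficient for a rectangular barrier. This is a standard quantum-mechanics result, used here purely as an illustration in natural units ħ = m = 1; it is not a model of any specific superconductor, and the parameter values are arbitrary.

```python
import math

def transmission(e, v0, a):
    """Transmission probability through a rectangular barrier of height v0
    and width a, for particle energy e < v0 (hbar = m = 1):
    T = 1 / (1 + v0^2 sinh^2(kappa a) / (4 e (v0 - e)))."""
    kappa = math.sqrt(2.0 * (v0 - e))
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (v0**2 * s**2) / (4.0 * e * (v0 - e)))

# lowering the barrier height sharply boosts the tunneling probability
for v0 in (2.0, 1.5, 1.2):
    print(v0, transmission(1.0, v0, a=3.0))
```

Because the suppression is exponential in the barrier width and in √(V₀ − E), even a modest reduction of the barrier raises the transmission by orders of magnitude, which is the sense in which a pressure-lowered barrier could qualitatively change tunneling behavior.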
The most critical part of the above description is the understanding of the quantum barrier. It is not the conventional picture in coordinate space under the lattice structure, but a quantum barrier in momentum space, that is, in the momentum space obtained by Fourier-transforming coordinate space. This is why I have deliberately placed superconductivity here for further analysis. The Rollin-film effect of the superfluid mentioned above has obvious spatial characteristics: the shape of the bowl containing the liquid helium affects the dripping. Superconductivity, however, is independent of the coordinate-space shape of the material and should instead be reflected in momentum space. The exchange-symmetry character of the topologically degenerate state is similar to the tunneling of the N atom in the NH3 molecule described by Anderson, and the resulting superconducting current has momentum-space character. Expelling the magnetic field from the superconductor to lower the system's energy and maximize ordered entropy reflects the projection of momentum space onto the superconductor's two-dimensional surface. With this analysis of superconductors in hand, I can focus on the concept of the evolutionary platform of life.
The great revelation of the above analysis is this: the economic system is constructed on a complex-network platform, presenting as scale-free and star networks; but the evolution of material systems, including life systems, can only play out in physical space, so how is their platform structure reflected? In three-dimensional space the entropy force can present only disordered thermodynamic entropy; only in two dimensions can the ordered entropy of mutual quantum tunneling be expressed. For this reason, the presentation of ordered entropy in the evolution of any material system must be built on a dimensional reduction of space: this appears as the two-dimensional surface in ordinary superconductors, and as the quasi-two-dimensional character of high-temperature superconducting materials, whose topologically degenerate states show an even more obvious mutual quantum-tunneling effect. The superconducting circulation of a superconductor never stops, similar to the vitality of living organisms, which is a revelation for our understanding of life. This requires further discussion: DNA and RNA in living cells are both quasi-one-dimensional spatial structures. What does this mean?
This reminds me that the quasi-one-dimensional structure of DNA and RNA should reflect a still lower dimensionality of coordinate space, which is more conducive to the drive of the system's entropy. The concept of the life evolutionary platform thus suggests that the quantum-tunneling effect under a quasi-one-dimensional spatial structure may be the reason the life system is dynamic; its macroscopic manifestation is a balance of forces between the energy connections of local individuals and the overall entropy drive. This is also reflected in the pluripotency state of the seesaw model. Regarding that model's "mutual inhibition and mutual balance between mesoderm genes and ectoderm genes in the reprogramming process", my physical understanding is that this must correspond to multiple enzymes, or to a single enzyme in a microenvironment capable of multifunctional manipulation, so that the stem cell presents a pluripotency state: the microenvironment causes multiple gene segments of the DNA to "try" to be expressed, reflecting the topologically degenerate state of mutual quantum tunneling formed, as described above, by superconducting electrons.
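The "mutual inhibition and mutual balance" quoted here has the structure of a genetic toggle switch, and its seesaw-like behavior can be sketched with a minimal pair of mutual-inhibition rate equations. This is a generic Gardner-Collins-style toggle used as my illustrative stand-in, not Tang Chao's actual seesaw model; the parameters a = 3, n = 2 are assumptions chosen only to put the system in the bistable regime. Whichever gene starts ahead suppresses the other and wins, while a perfectly balanced start sits on the undecided middle branch, which is the analogue of the pluripotency state.

```python
def toggle(x, y, a=3.0, n=2, dt=0.01, steps=20000):
    """Euler-integrate the two-gene mutual-inhibition system:
    dx/dt = a/(1 + y^n) - x,  dy/dt = a/(1 + x^n) - y."""
    for _ in range(steps):
        dx = a / (1.0 + y**n) - x
        dy = a / (1.0 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

print(toggle(2.0, 0.1))  # gene x ahead: settles into the high-x / low-y fate
print(toggle(0.1, 2.0))  # gene y ahead: the mirror-image fate
print(toggle(1.0, 1.0))  # balanced start: stays on the undecided middle branch
```

The symmetric state is unstable against any imbalance between the two genes, so a small perturbation (a "determined regulatory factor", in the language above) tips the seesaw toward one of the two differentiated outcomes.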
The topologically degenerate state is also a state of higher overall energy and higher system temperature. In the embryonic stem cell, I imagine that the quantum tunneling of each gene segment under the action of enzymes forms a topologically degenerate state, similar to the mutual quantum tunneling of electrons on a superconductor's surface. The continuous differentiation and division of cells reflects the continuous loss of topological degeneracy, so the cells on the evolutionary platform keep differentiating until each cell forms a microenvironment with only "one gene, one enzyme": only one gene is expressed, and the cell evolves into a single steady state. The concept of the evolutionary platform is thus the product of combining the pluripotency state with the topologically degenerate state, and the consequences of its evolution must derive systematically from the first cornerstone of molecular biology, the "one gene, one enzyme" hypothesis. But the significance of the evolutionary platform obviously lies not in its final evolutionary outcome, but in providing a physical understanding of the growing complexity of the evolutionary process.
In the past, the purpose of building any model seemed to be to give some simple explanation of a physical mechanism. But the evolution of real systems, from the universe to life to the human economy, runs from simple to complex. The evolution of the universe described above comes from the primordial angular momentum, and cosmic expansion is a process from simple to complex. The complex-network platform of the economic system likewise reflects that human economic transactions evolve from simple to complex, generating various market forms, banks, insurance, and other institutions. Do the embryonic stem cells formed after fertilization of the egg also reflect an evolution of life from simple to complex? This process is what we should attend to most. The concept of the evolutionary platform therefore needs the prefix "bipolar", forming the special meaning of the bipolar evolutionary platform below.
C. Bipolar evolutionary platforms: cellular automata, the path to chaos, and self-organized criticality
The above concept of the evolutionary platform raises two questions worth thinking about: first, the growth and development of embryonic stem cells, as the carrier of the life evolutionary platform, should appear as an evolutionary process from simple to complex; second, how is this process controlled? Both meanings are very important. Furthermore, the evolution of the universe comes from the primordial angular momentum, whose core is the positive and negative charges of protons and electrons; the expanding space is the evolutionary platform of the universe, from which a complex material world is generated. The evolution of the human economic system comes from the two poles of grain culture and olive culture, and likewise builds the complexity of human economic behavior on the complex network platform. The fusion of sperm and egg to generate embryonic stem cells also comes from the two sets of DNA genes each carries. This reflects not only the growth and development of a single life, but also the continuous evolution of all life's genes from nothing to something: must every evolutionary path from simple to complex rest on a bipolar structure?
The evolutionary path of life toward complexity cannot be captured by the scientific descriptions discussed earlier in this section, from determinism to probabilistic equations to uncertainty. Such analysis can only establish whether a correlation exists between some genome fragment and a life trait; it cannot give a picture of the overall evolution of the system. The expansion of the universe can be imagined by extrapolating existing astronomical observations back to the early universe, and the evolution of economic systems is easier to verify through historical relics. By contrast, in understanding our own life phenomena we may see only the trees and not the forest: we see correlations between the cell's internal levels and biological traits, while overlooking an overall correlation whose mechanism may be very simple yet deterministic. This reminds me of cellular automata in non-equilibrium statistical physics and of the work on the road to chaos in my master's thesis. On this basis, I further proposed the concept of the two poles of the evolutionary platform.
Before going into specific arguments, I would like to explain the meaning of the control parameter of the evolutionary platform and its connection with the evolution parameter. The evolution parameter, embodied as a cyclic path, exists in any organism with vitality; it reflects that the swing mechanism must obey irreversible processes under the central dogma. This usually corresponds to the normal "one gene, one enzyme" biochemistry of cells at normal human body temperature, and has universal meaning. The evolutionary platform, as the embodiment of bifurcating paths, not only carries the cyclic path of the evolution parameter, but also reflects both the evolutionary path of the two DNAs since the beginning of life on Earth and the development of each life form from embryonic stem cells. The former, primordial life, is beyond imagining, but the division and differentiation of early embryonic stem cells is driven by environmental energy, which even raises the body temperature of women during ovulation by 0.3 to 0.5 °C. Temperature can therefore serve as the control parameter of the early evolutionary platform, which reminds me of the road to chaos; the idea of using cellular automata to describe the bipolar evolutionary platform came from this.
Before discussing cellular automata and the road to chaos, I would like to briefly review the history of non-equilibrium statistical physics. As mentioned earlier, dissipative structures and synergetics, the two non-equilibrium statistical theories, both start from thermal equilibrium and consider how self-organization far from equilibrium can be achieved with the help of external forces; this represents the understanding of self-organization before the 1970s.
Then, in the 1980s and 1990s, people began to understand self-organization from the perspective of random forces, as reflected in the 1994 monograph "Random Forces and Nonlinear Systems" by my master's supervisor, Professor Hu Gang. Professor Hu received his doctorate under Prigogine, the founder of dissipative structure theory, and has also published many papers in collaboration with Haken, the founder of synergetics. The monograph reflects the understanding of non-equilibrium states in the era after dissipative structures and synergetics. It describes three levels of physical research: the microscopic level of Newtonian mechanics and the Liouville equation, the macroscopic level of deterministic equations, and the stochastic level of random forces.
Dissipative structure theory and synergetics attracted great attention in the 1970s but then quickly cooled down, mainly because neither theory found sufficiently convincing cases. I think the more fundamental reason is that they were never integrated with quantum mechanics. The concept of self-organization itself should be meaningful, and the existence of life certainly reflects self-organization. But how life phenomena evolved must be related to quantum phenomena. Physicists, however, regard quantum phenomena only as something described by the Schrödinger equation, not as self-organizing behavior. I think this is a misunderstanding, and I will analyze it later.
First, the sandpile model embodies self-organized criticality: as the system moves toward complexity, even under random noise driving, it still shows a law of scale invariance. In the past, scale invariance usually came from artificial renormalization methods, but in the sandpile model it arises spontaneously. These core ideas were mentioned earlier. What concerned me more at the time was that the sandpile model is described by the mathematical tool of cellular automata: what is the necessary connection between this tool and self-organized criticality? Previously, people looked only at external factors, that is, how the aforementioned external forces drive the system to form self-organized structures far from equilibrium. Describing the sandpile model with cellular automata shows that the self-organized structure is caused by internal factors, which fits better with the philosophical view of internal and external causes. It was the sandpile model that led me to Wolfram's series of studies on cellular automata.
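The sandpile picture above can be made concrete with a minimal simulation. The following is an illustrative sketch of the Bak-Tang-Wiesenfeld sandpile (the standard model behind self-organized criticality, not code taken from any source discussed here); grid size, drop count and seed are arbitrary choices:

```python
import random

def simulate_sandpile(size=20, drops=2000, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile: grains are dropped on random
    sites of a square grid; any site holding 4 or more grains topples,
    sending one grain to each of its 4 neighbours (grains leaving the
    edge are lost). The number of topplings triggered by a single drop
    is that drop's avalanche size."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(drops):
        i, j = rng.randrange(size), rng.randrange(size)
        grid[i][j] += 1
        topples = 0
        unstable = [(i, j)]
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            topples += 1
            if grid[x][y] >= 4:          # may still be unstable
                unstable.append((x, y))
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    unstable.append((nx, ny))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = simulate_sandpile()
# Without tuning any parameter, avalanches of very different scales
# coexist -- the signature of self-organized criticality:
print(max(sizes), sorted(set(sizes))[:10])
```

No parameter is tuned to a critical value; the broad spread of avalanche sizes emerges from the local toppling rule alone, which is exactly the "internal factor" point made above.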
I still clearly remember how I felt in the Beijing Library after reading Wolfram's article on using cellular automata to study turbulence: turbulence and the sandpile model may belong to the same category of problems, dominated neither entirely by energy nor entirely by entropy, so cellular automata are needed to describe them. Many years later, when I studied the path-integral representation of the Schrödinger equation, it struck me that the equivalence between the statistical partition function and the Schrödinger equation seems to hold only for systems evolving in imaginary time. Systems evolving in real time can be described by equations of motion in continuous time, while cellular automata describe evolution in discrete time. Thus, if we regard quantum mechanics not as the separated energy levels of some stationary solution but as a reflection of the system's evolution in imaginary time, then quantum mechanics, statistical physics and cellular automata correspond to descriptions of evolution under different meanings of time.
Furthermore, a system completely described by deterministic equations of motion is a real-time evolving system, which also appears as a system completely dominated by interaction energy. For example, the planets revolve around the sun, from which Newton summarized the law of universal gravitation; this actually reflects the consequence of the planets having separated out as subsystems of the solar system. Without such separation, we could only observe stars in the galaxy moving at roughly constant speed, and Newton could not have summarized the law of gravity. A system that must be described by random forces or probabilistic equations, by contrast, appears as a system dominated by entropic forces. The classical heat conduction equation can be derived from Einstein's description of Brownian motion in terms of random forces, and the imaginary-time evolution of quantum systems is reflected in the Schrödinger equation; both belong to probabilistic descriptions.
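Einstein's random-force picture mentioned above can be illustrated numerically: many independent random walkers spread so that the variance of their positions grows linearly with time, which is precisely the behaviour the heat (diffusion) equation describes. This is a generic sketch; the walker count and seed are arbitrary:

```python
import random

def random_walk_variance(steps, walkers=20000, seed=1):
    """Each walker takes unit steps left or right at random (Einstein's
    random-force model of Brownian motion); return the variance of the
    final positions, which for diffusion grows linearly with time."""
    rng = random.Random(seed)
    positions = [0] * walkers
    for _ in range(steps):
        positions = [x + rng.choice((-1, 1)) for x in positions]
    mean = sum(positions) / walkers
    return sum((x - mean) ** 2 for x in positions) / walkers

# Doubling the time roughly doubles the variance (diffusive scaling):
print(random_walk_variance(100), random_walk_variance(200))
```

The variance after 100 steps comes out close to 100 and after 200 steps close to 200, i.e. variance ≈ time, the discrete counterpart of the linear spreading solution of the heat equation.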
In this way, cellular automata are best suited to systems in which energy and entropy are "equally matched", which often appear as discrete-time systems. The basic unit of life's evolution is the cell, which is precisely suited to description by discrete-time cellular automata. Although these ideas originated in my master's student period, they finally took shape in recent years after I carefully read and thought about the work of Wen Xiaogang and Tang Chao. From this I formed a further bold idea: systems that can be scientifically described by deterministic equations are mainly dominated by interaction energy, while systems described by probabilistic equations are mainly dominated by entropy, whether they evolve in real or imaginary time. This is the contribution of existing scientific thinking.
However, as scientific research deepens, complex problems emerge in large numbers. Whatever cannot be described in existing scientific language appears as an uncertain system, and uncertainty problems are increasing in today's world. Why? Evidently because, as people study problems in depth, the simple systems dominated by energy or by entropy have been studied thoroughly, while the complex systems arising where energy and entropy are "equally matched", not yet thoroughly studied, have surfaced and must be described by cellular automata. Cellular automata should therefore stand alongside equation descriptions and probabilistic descriptions as a description of the overall evolution of a system. Are turbulence, complex networks, and especially the evolution of life better suited to description by cellular automata?
Next, the connection between cellular automata and the road to chaos. A cellular automaton describes the evolution of a system by starting from cells and their states and constructing update rules among neighboring cells. According to Wolfram's analysis, the evolutionary pattern falls into a single steady state, multiple periodic states, or a chaotic state (his fuller classification adds a fourth, "complex" class at the boundary of these). The chaos problem of nonlinear systems manifests mathematically as unpredictability in a dynamical system, and the same three states appear under different parameters. The former concerns relations among different cells; the latter concerns the system's evolutionary pattern under a control parameter. The causes of the two kinds of evolution differ, but the evolutionary consequences are similar, which must have a mathematical reason. As for the connection between cellular automata and chaos, I think two concepts matter: the dissipative system, and the marginal state at the junction of periodicity and chaos, that is, the road to chaos.
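Wolfram's classes can be seen already in elementary (one-dimensional, two-state, nearest-neighbor) cellular automata. The following minimal sketch uses rules 254 and 30 as standard textbook examples of a freezing rule and a chaotic rule; the width, step count and rule choices are my own illustrative assumptions:

```python
def step(cells, rule):
    """One update of an elementary cellular automaton: each cell's next
    state depends on itself and its two neighbours (periodic boundary).
    The 8-bit rule number encodes the output for each 3-cell pattern."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, steps=40, width=31):
    """Run `steps` updates from a single seed cell; return all rows."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

# Rule 254 quickly freezes into a fixed all-ones state (a simple class),
# while rule 30 keeps producing irregular rows (the chaotic class):
for rule in (254, 30):
    rows = evolve(rule)
    print(rule, "frozen" if rows[-1] == rows[-2] else "still changing")
```

The same local-update formalism thus spans the whole spectrum from steady states to chaos, which is why the rule number acts much like a control parameter in the road-to-chaos problem.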
Usually the objects studied in physics are energy-conserving systems; whatever can be described by a Hamiltonian or Lagrangian belongs to this category. Such systems have only internal conversion among different forms of energy, and it is difficult for them to generate self-organized structures. Forming self-organization usually requires exchange with the external environment, reflected in living organisms as metabolism; the corresponding physical term is a dissipative system, which must both absorb and emit matter and energy from the environment. Self-organization therefore often appears as a joint drive of internal and external factors, in a marginal state between a periodic state and a chaotic state; this is the problem of the road to chaos. A completely chaotic state is not a self-organized state and is not what physics cares about, while a simple steady or periodic state fails to reflect complexity and has no self-organizing meaning. Self-organization may thus reflect the marginal-state characteristics of some road to chaos.
This brings us to the Chinese physicist Academician Hao Bolin's advocacy, in the 1980s, of using symbolic dynamics to describe chaos. The Wikipedia entry for symbolic dynamics lists his English monographs, and the Chinese version, "Practical Symbolic Dynamics" by Zheng Weimou and Hao Bolin, can be downloaded online, so I will not go into detail. I will point out only one thing: the completely chaotic state is a state of uncertainty and cannot be described; what symbolic dynamics actually describes is the pattern of the road leading to chaos, that is, the states that continually destabilize from stable periodic points and approach chaos. This is also reflected in the critical or marginal state where the Lyapunov exponent tends to 0. A single steady state or multiple periodic states do not show the system's complexity, and the fully chaotic disordered state has only some statistical meaning. Complexity must therefore be presented at the junction of order and chaos, which is exactly the character of the road to chaos; symbolic dynamics gives a description of a dynamical system's road to chaos.
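The statement that the marginal state has Lyapunov exponent near 0 can be checked directly on the logistic map. This sketch computes a finite-time Lyapunov exponent; the parameter values (a periodic regime, the Feigenbaum accumulation point, and full chaos) are standard, and the iteration counts are arbitrary choices:

```python
import math

def lyapunov_logistic(r, transient=500, iters=2000, x0=0.4):
    """Finite-time Lyapunov exponent of the logistic map f(x) = r*x*(1-x):
    the average of log|f'(x)| along an orbit. Negative values indicate a
    stable periodic orbit, positive values chaos, and values near 0 the
    marginal (critical) edge between them."""
    x = x0
    for _ in range(transient):       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(iters):
        x = r * x * (1 - x)
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-12))  # guard log(0)
    return total / iters

# Periodic regime, edge of chaos (Feigenbaum point), and full chaos:
for r in (3.2, 3.5699456, 4.0):
    print(r, lyapunov_logistic(r))
```

The three values come out clearly negative, close to zero, and clearly positive respectively, so the "Lyapunov exponent tends to 0" criterion picks out exactly the critical boundary described above.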
Cellular automata were originally proposed by von Neumann in the 1950s to simulate the self-replication of biological cells. Do they share the same symbolic-dynamics rules as the road-to-chaos problem? I pointed out earlier that the significance of the evolution of life on Earth lies not in the survival of each individual but in the continuation of its carrier, the DNA genes. Giving a physical picture of each pattern of DNA continuation is therefore a very basic problem. Under discrete-time iteration of the kind used in cellular automata, there are only two roads to chaos: the period-doubling bifurcation of the unimodal map, and the Arnold tongues of the circle map, manifested as the locking of two or more periodic parameters. Does the evolutionary platform, as a description of all life in nature, then also originate from only two modes? This idea of the bipolar evolutionary platform originated in my master's thesis.
Iterating the unimodal map x_{n+1} = λ x_n (1 − x_n) produces a period-doubling bifurcation. A typical example is cell division: 1 becomes 2, 2 becomes 4, 4 becomes 8, bifurcating onward along powers of two.
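The period-doubling cascade of this map can be observed numerically. The sketch below detects the period of the attractor at a few standard parameter values of the logistic map; tolerances and iteration counts are my own choices:

```python
def attractor_period(lam, max_period=16, transient=4000, x0=0.2, tol=1e-5):
    """Iterate x -> lam*x*(1-x) past a long transient, then return the
    smallest p such that the orbit repeats with period p (within tol)."""
    x = x0
    for _ in range(transient):
        x = lam * x * (1 - x)
    orbit = []
    for _ in range(2 * max_period):
        x = lam * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if all(abs(orbit[i] - orbit[i + p]) < tol for i in range(len(orbit) - p)):
            return p
    return None  # no period found up to max_period (chaotic or longer)

# The attractor's period doubles as lam increases through the cascade:
for lam in (2.9, 3.2, 3.5, 3.55):
    print(lam, attractor_period(lam))
```

The printed periods are 1, 2, 4 and 8: exactly the 1 → 2 → 4 → 8 doubling sequence the cell-division analogy above invokes.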
Third, the above cellular automata better reflect our understanding of physical law. This is not a deterministic or probabilistic equation description in the usual scientific sense, but reflects that laws are the reason for the evolution of system states. The reason Google's AlphaFold can predict protein structures with high accuracy should come from the scaling mechanism of proteins, that is, the aforementioned blooming characteristic Ω(E) ∝ e^(αE). I then wondered whether a similar symbolic dynamics could describe the DNA structures of all living things: they may be regarded as two roads to chaos from the two poles of the evolutionary platform, the result of the spontaneous superposition of period-doubling bifurcations and circle maps. The superposition pattern of the DNA spectral structure should then show a certain regularity. Can this be described by symbolic dynamics? Biological research has many classification methods for phylogenetic trees, including rooted and unrooted trees, but I think a symbolic-dynamics description might better reflect the essence of life's evolution.
Finally, if the above bipolar evolutionary platform concept can be established and corresponds to a cellular-automaton description, the biggest change it brings to our understanding of life is that life phenomena should have holistic, cross-cellular characteristics. Anderson's "More is Different", mentioned above, has profoundly influenced physics and shaped the evolutionary view, but it has had little impact on the life sciences. Biologists still regard the cell as the basic unit of life and have not established an awareness of holistic evolution across cells, hence the concepts of junk DNA and non-coding DNA. From the perspective of the cellular-automaton evolutionary platform, however, there should be no junk DNA, only cross-cellular associations. Furthermore, the DNA of different individuals may also be related, like the lazy-ant experiment mentioned above. I hope readers with a more professional background in molecular biology will continue to think about this question.
6. Conclusion: Two platforms for studying complexity in physics - Thanks to two university classmates
At the end of each article I usually write acknowledgments. I want to thank the supervisors and senior students mentioned earlier, so I will not list them again here. But the acknowledgment at the end of this article is a little special: I also want to include two college classmates not mentioned before, both from the 784 theoretical physics major of USTC. One is Du Mengli, a researcher at the Institute of Theoretical Physics of the Chinese Academy of Sciences; when I received my doctorate in 1996, he was already a doctoral supervisor, and I chose him as my postdoctoral co-supervisor. The other is Song Chaodi, who has an entry on Baidu Encyclopedia; by the time I received my doctorate, he had already founded the company Kelihua and served as its chairman. I had been a postdoctoral fellow under Du Mengli for less than a month when Song pulled me in to be his deputy. I thank these two classmates for inspiring my thinking and making me realize that physics research on complexity should be built on two platform concepts: the bipolar evolutionary platform and the complex network platform.
Let me first explain why, in those early years, I asked my classmate Du Mengli to be my postdoctoral co-supervisor. One factor cannot be ignored: I had already applied to immigrate to Canada, and if approved I would have to leave my accepting unit immediately; with my temporary postdoctoral status and the goodwill of an old classmate, I was unlikely to be blocked. But I was not sure the application would succeed. The bigger reason for choosing Researcher Du was his closed-orbit theory. After several brief exchanges with him, I came to think the theory might constitute a platform concept for exploring complexity, rather than a universal concept as usually understood: the bipolar evolutionary platform. Its significance is that the exploration of physical laws not only presents universal conclusions but also seeks the complexity of evolution; the essence of complexity may come from the "equal match" of bipolar forces. Such an evolutionary-platform idea should be more significant than the unified theory currently sought by mainstream physics.
Besides the chaos model proposed by Professor Ding Ejiang, which I studied in my master's thesis, my bipolar evolutionary platform idea also comes from the closed-orbit theory created by Du Mengli and his doctoral supervisor Delos. Both works were published in 1988. The former is a classical model driven periodically by two different delta functions, one a circular drive and the other a directional drive, thus forming the complexity of superposed period-doubling bifurcations and frequency-locked modes. The closed-orbit theory describes an equally simple but more concrete example: the hydrogen atom in a strong magnetic field. We know that a weak magnetic field, as a perturbation, produces only the Zeeman effect on atomic emission. In measurements under extremely strong magnetic fields, whether the electromagnetic waves of astronomical pulsars or the resistance measurements of the quantum Hall effect, atomic energy-level transitions of electrons are very weak or even negligible; there is only strong electromagnetic radiation or discrete resistance effects. Closed-orbit theory describes the complexity of the motion of electrons of high principal quantum number when the strong external magnetic field and the atomic Coulomb force are "equally matched".
The above analysis will further change our understanding of the laws of matter. In the past, our understanding of scientific laws was to construct relationships between concepts from material phenomena, such as the simple physical formulas F = ma or E = mc², or, in economics, relationships between interest rates and inflation coefficients. But from Professor Ding Ejiang's chaos model and my old classmate Du Mengli's closed-orbit theory, I formed an entirely different feeling: besides the simple beauty of regularity, laws have another form of expression, complexity. The road-to-chaos problem, describable only by symbolic dynamics rather than closed-form equations, belongs to complex systems, and a system that must be described by special closed orbits also belongs to complex systems. Du Mengli's later work shows that there is indeed a certain type of atomic spectrum that cannot be described by closed orbits. This further illustrates the diverse manifestations of complexity: the significance of complexity research lies not in its own characteristics but in the evolutionary platform that generates complexity.
The meaning of complexity includes uncertainty: no simple relation between concepts can be given, but this does not mean the value of laws is lost. The problem requires reverse thinking. The value of exploring complexity must go beyond scientific thinking based on the observational-paradigm analysis framework, and is reflected in state-reasoning under the evolutionary-criterion framework: when the internal and external forces driving a system's evolution are "equally matched", the evolution presents a new kind of self-organization far from equilibrium, which is the embodiment of the bipolar evolutionary platform. The evolution of the universe comes from the primordial angular momentum of electrons and protons at the two poles of charge, and the three types of galaxies it generates, spiral, elliptical and irregular, come from the internal cause of primordial angular momentum and the external cause of gravity. Earth's life system likewise comes from the internal cause that the chemical bond energies between organic molecules are very close to the Earth's ambient thermal energy of about 1/40 eV, and the external cause of periodic environmental temperature changes driven by the Earth's rotation. The bipolar evolutionary platform must give birth to diversity, which is reflected neither in exactness nor in chaos, but in the marginal state between the two.
Next, why did I not continue my postdoctoral work? This was due to the "inducement" of classmate Song Chaodi. I had been a postdoctoral fellow at the Institute of Theoretical Physics for only a few days when Song pulled me over to work as his assistant at Kelihua. This was mainly a contract in which both parties got what they needed, but our shared interest in economics was also a factor. I remember that evening I was invited to the restaurant run by his company, and the two of us talked for more than ten hours, from 6 pm to 8 am the next day, an unprecedented long talk in my life. After that I, a PhD freshly graduated from the Institute of Physics, was pulled into my old classmate's company, and my monthly salary rose from the 627 yuan of that year's postdoctoral fellowship to 4,000 yuan. But I also created value for the company: its profile was immediately revised from a talent ladder of "junior college students, undergraduates and postgraduates" to one of "bachelor's, master's and doctoral students". In the China of the 1990s PhDs were scarce; though not good-looking, I was often arranged to accompany the boss to meet government leaders or financing partners, and my value was higher than that of the beautiful secretary who accompanied me when meeting ordinary customers.
There was another reason Song Chaodi and I hit it off so well that day: our common interest in economics. I talked about my only hobby during my doctoral studies, going to the bookstore every Sunday to read economics books. Song talked about how physics knowledge combined with business experience helps greatly in understanding economic phenomena, which was also what moved me to change jobs. In particular, he observed that among all disciplines, only in economics are theory and application separated: there is no "applied economics", only business administration, because economic development changes too fast for theory to keep up, and economic analysis has always been "new wine in old bottles". I agree with this view. Later I introduced the physical concept of indistinguishability into the analysis of the complex network of economic phenomena, and the argument about the relationship between uncertainty and scientific theory in the earlier article came from my exchanges with Mr. Song. The following example is my understanding of the division of labor, formed through my experience at Song's company Kelihua. I have to start from my childhood.
In 1976 I was 14 years old. It was impossible for a teenager of that age to be interested in economics, but knowledge and information were extremely scarce in China then. Bored, I had to find books on my parents' bookshelf. The philosophy books were dull and difficult; only the economics book "The Wealth of Nations" was worth reading. After a few pages I came to Smith's description of the pin factory: with 10 processes and a dozen workers, it could produce 4,800 pins a day, while without division of labor not even 20 could be made in a day. Reading this, I felt something was wrong: if 10 people can produce 4,800 a day, surely one person can produce 480 a day; how could it be impossible to make even 20? I was so sure because I had worked in a factory that summer. At that time China was promoting the "five small industries" everywhere, and the private middle school where my parents worked had set up a school-run factory producing iron-shell switches. Backward as it was, it should have been far more advanced than a pin factory of Smith's era.
The school-run factory had lathes, punching machines and grinding machines, with four workshops and about 20 workers; both its equipment and headcount should have surpassed the pin factory Smith described. I worked in two workshops as a summer intern, earning 1 yuan a day as a benefit for teachers' children. I remember the hardest job was polishing the cast-iron handles of the switches. The first few I polished were very rough and the master still had to rework them, but after a day I was nearly as good as the master in both quality and efficiency. The processes of a pin factory can hardly be more complicated than those of an iron-shell switch factory, so it should be no difficulty for one person to master all 10 processes: one person doing the work of 10 would eliminate handover overhead and should be more efficient. Why, then, did the pin factory that formed spontaneously in Smith's era adopt the workshop-handicraft division of production? And if the division of labor cannot improve production efficiency, what is the reason for it?
As far as the division of labor is concerned, once everyone has chosen the professional skills they are good at, the differences between people are not large, and big gaps in work efficiency are unlikely. That assembly-line work usually requires only a few minutes of training illustrates this; so does the fact that an ordinary person running 100 meters differs from the world champion, but is not even twice as slow. Yet economists seem to have accepted without verification Smith's claim in "The Wealth of Nations" that the division of labor improves labor efficiency, which I find incredible. Later, studying Durkheim's concept of the social division of labor, I felt it is clearly different from Smith's division of labor: it can reconstruct supply and demand relations and make the division of labor valuable. When people say the division of labor is becoming ever finer, they obviously mean the social division of labor, not the division of labor in production. But none of this explains why the pin factory formed spontaneously in Smith's era.
At first glance, a possible reason for the division of labor in the pin factory is that it improves the utilization of equipment and speeds up order delivery: producing 4,800 pins a day implies market demand at that scale, which gave rise to the workshop-handicraft model. However, the household contract responsibility system in rural China in the 1980s was a negation of the collective production model, completely contrary to the pin factory's division of labor. So the explanation of larger orders or higher equipment utilization is not convincing. From the perspective of the evolution of production, the factory's early workers should have come from handicraftsmen; if orders increased, it would be more reasonable for them to make pins separately and deliver them in batches, like the Chinese farmers who contracted production in the 1980s. After the factory was established, everyone had to leave home for work each day; at least in the early days this would surely have reduced work efficiency.
Therefore, even with the household contract responsibility system of 1980s rural China in mind, I could not figure out for a long time why the early Industrial Revolution had to form a pin factory. Here I must mention the exchange of ideas about physics I had with Mr. Song Chaodi after joining Coliva, which helped me understand economic questions. We discussed why the Soros Quantum Fund, well known during the Asian financial crisis of 1997, was named after the quantum of physics; our understandings differed. The fund's name presumably comes from the uncertainty relation of quantum mechanics, but Mr. Song related it to the company's operations and interpreted it as taking advantage of uncertainty rather than avoiding or preventing it. My own understanding focused on the quantum indistinguishability of financial capital. The former gave me an understanding of the uncertainty of scientific laws, and the latter helped me see clearly why the pin factory had to implement a division of production. I explain them in turn below.
Boss Song's understanding was that uncertainty is not something one must avoid. The biggest worry of Chinese private enterprises in those days was that they could not get loans: one of Coliva's R&D projects was approved under the National Spark Program, but the bank still would not lend. At the end of each year, the company held order fairs simultaneously in provincial capitals across the country; the overwhelming advertising and venue costs came to nearly 10 million yuan, which made loans even harder to obtain. This was not because private enterprises were discriminated against, but because most private enterprises of that era had no real estate to mortgage and therefore posed loan risks. At the time, Boss Song could only sign temporary loan agreements with other companies and mortgage all of the company's equity, which made the company's prospects even more uncertain: one mistake and it would all be over. The reason venture capital funds exist outside the banks, then, should be to reduce the overall uncertainty of the economic system. I once accompanied Boss Song to meet a venture-capital fund manager, but the negotiation fell through because the company's valuation was too high, something Boss Song later regretted many times. Hence Boss Song's understanding of the Soros Quantum Fund: rather than avoiding uncertainty as banks do, it profits from it, which in turn "hedges" people's aversion to risk.
But my understanding is that the quantum meaning extends beyond uncertainty to the indistinguishability of financial capital. In the 1990s, China was promoting ISO 9000 certification everywhere, which is the basis of the global professional division of labor. The production and assembly of parts in the world's factories must be standardized, meaning that each part acquires a standardized indistinguishability that favors trade circulation and production competition. Thinking of this, I understood why the pin factory of Smith's era had to divide production: it made the 4,800 pins produced each day indistinguishable, which favored trading. Any grain product can serve as a futures product because it is inherently indistinguishable; that is why household grain production in rural China does not hinder trading. The standardization of indistinguishability drove the evolution of the world's factories from the workshop model to the professional factory-and-assembly model. Indistinguishability is also the basis on which a system can be described by entropy: money, as the equivalent of any commodity, is the most indistinguishable of all, which gives financial investment the greatest value potential.
So far, I think the biggest flaw in today's economic research is its overemphasis on uncertainty: from information asymmetry to game theory, everything stresses that some unpredictability will bring bad consequences. Economic research has not started from the system and carefully studied the effects of the indistinguishability founded on currency and industrial standardization. For this reason, the legitimacy of the various tax systems, as a manifestation of system-level characteristics driven by entropy, has always lacked a basis in economic analysis; the concept of an entropy criterion was formed in physics as early as the beginnings of thermodynamics, but it has never existed in economic analysis. In fact, the notion of political correctness has constrained economists' minds, and the concept of utility has been constructed as merely ordinal, not comparable. As a result, our understanding of the economic system entirely lacks the meaning of entropy grounded in indistinguishability, while the concept of uncertainty runs rampant in economics. I believe the concept of utility should be reconstructed at the level of the system, for example based on the city, so that a systems economics grounded in energy and entropy can be born.
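To make the idea of "entropy grounded in indistinguishability" concrete, here is a minimal illustrative sketch, not from the original text: if money units are indistinguishable, only each agent's share of the total matters, so a wealth distribution has a well-defined Shannon entropy, which is maximal when wealth is spread evenly and lower when it is concentrated. The agent counts and numbers below are invented for illustration.

```python
import math

def shannon_entropy(wealth):
    """Entropy of a wealth distribution, treating money units as
    indistinguishable: only each agent's share matters, not which
    particular units the agent holds."""
    total = sum(wealth)
    shares = [w / total for w in wealth if w > 0]
    return -sum(p * math.log(p) for p in shares)

uniform = [25, 25, 25, 25]   # four agents, equal shares
polarized = [85, 5, 5, 5]    # one agent holds most of the money

print(shannon_entropy(uniform))    # maximal for 4 agents: ln(4) ~ 1.386
print(shannon_entropy(polarized))  # lower: a more ordered, concentrated state
```

On this reading, an entropy criterion for an economic system would compare such distributions directly, which is exactly what a merely ordinal utility concept cannot do.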
This leads us to the evolutionary platform on which the economic system rests. As I mentioned above, the concept of complex networks, represented by the small-world and scale-free networks constructed by physicists more than twenty years ago, should constitute the basic platform for studying the evolution of economic systems, and it will bring new thinking to economics. The self-organizing evolution exhibited by the economic system is precisely the passage from the random-graph character of the small-world network to the polarized, power-law degree distribution of star and scale-free networks. Furthermore, the economies of the world's countries today are various superpositions of two cultural levels, grain culture and olive culture, which is also an embodiment of complexity on the complex-network platform. Unfortunately, the study of complex networks has not been effectively built on the basic physical concepts of energy, entropy, and temperature. Moreover, existing complex-network research seems to focus only on sparse networks; dense networks, in which node degrees are comparable to the total number of nodes, should be introduced into the analysis of today's global economic networks, whose correlations are growing rapidly.
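The "polarization into a power-law degree distribution" mentioned above can be sketched with the standard preferential-attachment (Barabási-Albert-style) growth rule, in which each new node links to existing nodes with probability proportional to their degree. This is a minimal sketch of that well-known mechanism, not of any model proposed in this article; the parameters (2000 nodes, m = 2) are arbitrary choices for illustration.

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    """Grow a network of n nodes: each new node attaches to m distinct
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists each node once per incident edge, so uniform
    # sampling from it is sampling proportional to degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(2000)
deg = Counter(v for e in edges for v in e)
# Heavy-tailed degree distribution: a few hubs with large degree,
# many nodes near the minimum - the "polarization" described above.
print(max(deg.values()), min(deg.values()))
```

A random graph grown without the degree-proportional rule would instead give all nodes comparable degrees, which is the contrast the text draws between small-world randomness and scale-free polarization.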
--------------------
Postscript: Many of the views in this article were briefly published in WeChat groups, but without context they were dismissed as amateur science. A few days ago I published the first three pages, thinking that if the nuclear magic-number structure were submitted to a top scientific journal such as Nature or Science, it would be accepted. No one objected, and many netizens interacted with me kindly, which greatly encouraged me. Some netizens asked why I did not write in English so that physicists around the world could see it. Although I have immigrated to Canada, my lifetime of study and research took place in mainland China. Science has no borders, but I preferred to write this article in Chinese and discuss it with mainland Chinese scholars first. Names and some technical terms, however, are given in English to facilitate translation; in this way, rendering the article into other languages takes only "one click."
At the end of this article, I would like to express special thanks to my wife and two children. In forty years of thinking and research I have never asked any organization for a penny of funding, and my conscience is clear, yet I do feel guilty toward my wife and children. Although I never failed to support my family, had I put more energy into earning money and caring for them, they would surely have lived better. Fortunately, my youngest son was admitted to university this year: I indulged in the pleasure of thinking without delaying my children's growth. This article also fulfills a wish of mine. Einstein said that the autobiography he wrote at the age of 67 was an obituary; the text above has been revised not a few times but nearly a hundred, and it can likewise be called my own obituary.
My wife once teased me: you compare yourself to Einstein, but he worked out the theory of relativity at 26; most of your fellow students are now doctoral supervisors and academicians, and what about you? I replied that my thinking is broader than Einstein's, though I reached results only at 62. Still, I wrote my obituary five years earlier than Einstein wrote his. Having written it, I have fulfilled my lifelong wish and can spend the rest of my life with my family at ease. My lifelong dream is that all kindergarten students who graduate in the future will know the formula for the magic number of the nucleus: 82=2+4+6+8+12+20+30.
Pingbo Zhao zhaopingbo@gmail.com
April 15, 2025, Toronto, Canada