Talk:Introduction to entropy: Difference between revisions

==cards on the table==
 
(Discussion deleted and continued on [[User_talk:PAR]])
It occurs to me that it may be useful to say the following. For definiteness and simplicity, I have in mind an isolated body. Whether in thermodynamic equilibrium or not, its whole-system instantaneous microstate can reach some region <math>R_0</math> of phase space. Just seeing the whole-system instantaneous microstate at some point in <math>R_0</math> does not tell us whether the macrocondition is equilibrium or non-equilibrium.
 
To find that out just from knowledge of the whole-system instantaneous microstate, we need to follow its trajectory for a good length of time, even for a rather long time. Overwhelmingly, not remotely as long as the Poincaré recurrence time, but still much longer than the time needed to make a measurement of, say, local temperature or wall pressure. To verify thermodynamic equilibrium or non-equilibrium, we need time to make very many measurements well separated in time.
 
:I am just as guilty for running down this rabbit hole with Chjoaygame. I agree, it has no place on this talk page, and I have removed it to my talk page. Chjoaygame and I have had many arguments in the past, and yes, he tends to say in two complicated paragraphs what he could say in one sentence. In his defense, he will actually engage in an argument, rather than ignoring what you say and repeating the same POV over and over again. He will adjust his understanding as need be and does not engage in ad-hominem attacks. This is rare, and I try to do the same.
Equilibrium is characterized by all measurements of every particular supposed state variable hovering around their respective means. The whole-system instantaneous microstate shows no drift over time, however long, practically 'covering', but not necessarily 'filling', the whole of <math>R_0</math> practically uniformly over time. Thermodynamic entropy gives a precise measurement of how the practically uniform 'covering' of <math>R_0</math> actually 'fills' it over infinite time, a sort of time-averaged logarithmic ''density'' × d ''area'' integral. Such an integration is a job for mathematicians. They have an arsenal of definitions of various entropies. Our IP mathematician friend is expert in this, and thinks it is the underlying basis of the general concept of 'entropy'; he has a good case.
 
Statistical mechanics provides a sort of Monte Carlo procedure to estimate that integral, using ergodic theorems.
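Such a Monte Carlo volume estimate can be sketched in a few lines. This is an illustrative toy only (the function name, the unit-ball stand-in for <math>R_0</math>, and the sample count are my own assumptions, not anything from the discussion): uniform sampling of an enclosing cube estimates the volume of a region, and the log of that volume is the entropy-like quantity.

```python
import math
import random

def mc_log_volume(dim, n_samples=100_000, seed=0):
    """Estimate ln(volume) of the unit ball in `dim` dimensions by
    uniformly sampling the enclosing cube [-1, 1]^dim -- a toy
    stand-in for integrating over an accessible region R_0."""
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n_samples)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0
    )
    # the hit fraction times the cube's volume 2^dim estimates the ball's volume
    return math.log(hits / n_samples * 2.0 ** dim)

# exact ln(volume) of the 3-ball is ln(4*pi/3); compare with the estimate
print(mc_log_volume(3), math.log(4.0 * math.pi / 3.0))
```

The estimate converges as <math>1/\sqrt{N}</math> in the number of samples, which is the practical point of the ergodic-sampling stratagem.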
 
Non-equilibrium is characterized by some sequence of measurements drifting a significant 'distance' through phase space. The drift may involve repeated distinct visits of the whole-system instantaneous microstate to some region of phase space, but it must be evident that they are repeated distinct and separate visits, not just little excursions in a permanent and persistent hovering pattern. In general, for a non-equilibrium trajectory through the phase space of whole-system instantaneous microstates, over some long observation time interval <math>(t_{\mathrm{initial}},t_{\mathrm{final}})</math>, the trajectory will drift from some region <math>R_{\mathrm {initial}} \subset R_0</math> to some other region <math>R_{\mathrm {final}} \subset R_0</math>, with negligible overlap <math>R_{\mathrm {initial}} \cap R_{\mathrm{final}}</math>. Thermodynamic entropy does not apply here. Other so-called 'entropies' may be defined ''ad lib'', but they refer to some kind of 'time rate of entropy production'.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 20:09, 19 December 2020 (UTC)
 
:I think of it this way: It is an *assumption* that every trajectory will visit any neighborhood in phase space with a probability proportional to the "volume" of that neighborhood. This is just another way of saying that each microstate is equally probable. Phase space may be divided up into a large number of macrostates, each with its own information entropy. For systems with a large number of particles, the microstates corresponding to the equilibrium macrostate hugely outnumber all the nonequilibrium microstates combined. It follows that, starting from a nonequilibrium microstate, the trajectory will wander into the equilibrium macrostate region and practically never leave. Observationally, that is the signature of an equilibrium state - the macrostate is unchanging. Since the information entropy of a macrostate (and, by Boltzmann's equation, the thermodynamic entropy) is proportional to the log of the phase space volume occupied by that macrostate, the information entropy of the equilibrium macrostate is the largest. A trajectory from a non-equilibrium microstate does not "drift" in any particular direction any more than a trajectory from an equilibrium microstate does. A sort of random walk from any point in phase space will almost certainly walk you into an equilibrium microstate, and almost certainly not walk you into a non-equilibrium microstate, no matter what kind of state you started from. In phase space, trajectories do not "hover" around equilibrium microstates. The macrostate variables do "hover" around their means, however. [[User:PAR|PAR]] ([[User talk:PAR|talk]]) 21:17, 19 December 2020 (UTC)
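PAR's counting argument can be made concrete with a toy two-state model (a hedged sketch; the function names and the 5% window are illustrative choices, not from the discussion): with all microstates equally probable, the macrostates near the most probable one swallow essentially all of phase space as the particle number grows, and the Boltzmann entropy <math>S = \ln W</math> (taking <math>k_{\mathrm B} = 1</math>) is largest there.

```python
from math import comb, log

def boltzmann_entropy(n, k):
    """S = ln W (with k_B = 1): log of the number of microstates of n
    two-state particles whose macrostate has k particles 'up'."""
    return log(comb(n, k))

def equilibrium_fraction(n, width):
    """Fraction of all 2^n equally probable microstates whose 'up'
    count lies within `width` of the most probable value n // 2."""
    near = sum(comb(n, k) for k in range(n // 2 - width, n // 2 + width + 1))
    return near / 2 ** n

# entropy is largest at the equilibrium macrostate ...
assert boltzmann_entropy(100, 50) > boltzmann_entropy(100, 10)
# ... and the near-equilibrium macrostates come to dominate phase
# space as n grows, even for a fixed 5% window
for n in (100, 1_000, 10_000):
    print(n, equilibrium_fraction(n, n // 20))
```

The printed fractions rise toward 1, which is the counting content of "the trajectory wanders into the equilibrium macrostate region and practically never leaves".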
 
::We are putting our cards on the table. I have some problems with your just above comment.
 
::Your point of view is that of statistical mechanics. Statistical mechanics is a clever, indeed brilliant and even masterly, and handy mathematical procedure for a sort of Monte Carlo integration, using a concept of random walking, relying on ergodic assumptions. Statistical mechanics is a highly sophisticated topic, taught after several years of advanced education in physics. I don't see it as obvious that it is suitable for novices who are uneducated in physics.
 
::The notions of 'an equilibrium microstate' and of 'a non-equilibrium microstate' belong specifically to statistical mechanics.
 
::A physical trajectory as conceived by Boltzmann is not a random walk, but is generated by Newton's laws of motion. Mathematicians today try to deal with such trajectories as such. Thermodynamic equilibrium and non-equilibrium are characterized by trajectories, not by isolated points. Every point on an equilibrium trajectory has an equal status, as, if you like, 'an equilibrium microstate'. No point on an equilibrium trajectory is 'a non-equilibrium microstate'. Every point on a non-equilibrium trajectory has, if you like, an equal status as 'a non-equilibrium microstate'. No point on a non-equilibrium trajectory is 'an equilibrium microstate'. Boltzmann continued the work of Maxwell and others, using the statistical mechanical procedure, but that does not actually make a Newtonian trajectory into an actual random walk.
 
::For a system in thermodynamic equilibrium, a fluctuation is a fluctuation; a fluctuation doth not a departure from thermodynamic equilibrium make.
 
::It might be said that the switch from Newtonian trajectory to random walk is made by a mind projection fallacy. It is not obvious that we should impose that fallacy on novices who are not expected to be trained in academic physics.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 23:35, 19 December 2020 (UTC)
 
::I have to revise the above.
 
::That the overlap <math>R_{\mathrm {initial}} \cap R_{\mathrm{final}}</math> should be negligible certainly gives a non-equilibrium trajectory. But such is well and truly and thoroughly non-equilibrium. More tightly, non-equilibrium just needs <math>R_{\mathrm {initial}} \ne R_{\mathrm{final}}</math>, though that doesn't exactly settle things, because I haven't tightly said what I mean by the trajectory being in a region <math>R</math> at a time <math>t</math>. What sort of region is <math>R\,</math>?
 
::For thermodynamic equilibrium, the condition <math>R_{\mathrm {initial}} = R_{\mathrm{final}}</math> is necessary and sufficient if at least one of {<math>t_{\mathrm{initial}} \to - \infty \,</math>, <math>t_{\mathrm{final}} \to + \infty</math> } holds.
 
::We may consider a thermodynamic process that starts when two equilibrium systems <math>\mathrm A</math> and <math>\mathrm B \,</math>, which separately occupy regions <math>R_{\mathrm A}</math> and <math>R_{\mathrm B} \,</math>, are exposed to each other, and ends when a thermodynamic operation isolates the final joint system, so that its initial instantaneous microstate obeys the conditions <math>p_{\mathrm A} \in R_{\mathrm A}</math> and <math>p_{\mathrm B} \in R_{\mathrm B}</math> and <math>p_{\mathrm{initial}} = (p_{\mathrm A},p_{\mathrm B})</math> with <math>\,p_{\mathrm {initial}} \in R_{\mathrm {joint}}</math> in an obvious notation for the final thermodynamic equilibrium. (To be strict, even this doesn't really do the trick.) The second law requires something such as <math>R_{\mathrm {joint}} \subset R_{\mathrm A} \times R_{\mathrm B}</math>, in a suitable notation, with <math>\subset</math> denoting a proper subset relation. The second law requires more, making a strict statement about entropies.
 
::A non-equilibrium process is not so simple to define microscopically in general terms. But surely it requires at least definite initial and final conditions? And that they belong to different regions, <math>R_{\mathrm {initial}} \ne R_{\mathrm{final}}</math> in some sense. But it doesn't require such strict separation as makes negligible the overlap <math>R_{\mathrm {initial}} \cap R_{\mathrm{final}} \,</math>.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 03:15, 20 December 2020 (UTC)
 
:::I did not mean to imply that a trajectory in phase space was a random walk. I used the phrase "sort of a random walk". I agree, using classical physics, the trajectory is determinate, and not truly random. However, it is an *assumption* that every (determinate) trajectory will eventually enter, and then leave, ANY given neighborhood of phase space if you wait long enough, and, after waiting a long time (many Poincaré recurrence times), the probability that the system will be in a given neighborhood is equal to the volume of that neighborhood divided by the volume of the entire phase space. The ergodic assumption is that if you take every trajectory as equally probable, you will arrive at the same conclusion.
 
:::I disagree with your statement "For system in thermodynamic equilibrium, a fluctuation is a fluctuation; a fluctuation doth not a departure from thermodynamic equilibrium make." Thermodynamics ONLY deals with systems in the thermodynamic limit. The equivalent of the thermodynamic limit in stat mech is the limit of an infinite number of particles. In that limit there are no fluctuations, or more exactly, systems which are "fluctuated" away from equilibrium have a measure zero. The states fluctuated away from equilibrium exist, but the probability that they are visited is zero, in the thermodynamic limit. Only then does the second law hold rigorously.
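The quantitative content of the thermodynamic-limit claim is the standard <math>1/\sqrt{n}</math> decay of relative fluctuations. A toy check on <math>n</math> independent two-state particles (an illustrative sketch; the function name is my own):

```python
from math import comb, sqrt

def relative_fluctuation(n):
    """r.m.s. fluctuation of the 'up' count of n independent two-state
    particles, relative to its mean n/2; analytically this is
    exactly 1/sqrt(n)."""
    # exact integer arithmetic: Var(k) = sum_k C(n,k) (k - n/2)^2 / 2^n,
    # computed via (2k - n)^2 to stay in integers until the final division
    num = sum(comb(n, k) * (2 * k - n) ** 2 for k in range(n + 1))
    var = num / (4 * 2 ** n)
    return sqrt(var) / (n / 2.0)

# fluctuations shrink as 1/sqrt(n): gone in the n -> infinity limit,
# small but nonzero for any finite system
for n in (100, 400, 1_600):
    print(n, relative_fluctuation(n), 1.0 / sqrt(n))
```

So in the infinite-particle limit fluctuations are measure zero, while for any finite <math>n</math> they are small but strictly present, which is exactly the point under dispute.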
 
:::For a finite system, there will be fluctuations, and we are, strictly speaking, outside of thermodynamics. The line between an equilibrium state and a non-equilibrium state becomes blurred. The number of microstates which represent "exact" equilibrium is actually very small, and every other microstate can be seen as a fluctuation away from that exact equilibrium, and therefore a non-equilibrium state. The second law does not rigorously hold. Or, we can draw an arbitrary line which says "microstates representing fluctuations in macrostate properties less than such-and-such will be declared equilibrium microstates, all others are declared non-equilibrium microstates". With such a declaration, for a large but not infinite system, you can pick a dividing line in which the second law is fairly rigorous, and the difference between a fluctuation and a non-equilibrium state is clear. However, where that dividing line is drawn is subjective, not carved in stone.
 
:::I disagree with your statement "Thermodynamic equilibrium and non-equilibrium are characterized by trajectories, not by isolated points." For a system in the thermodynamic limit, or a finite system with a line drawn somewhere, a given microstate is either an equilibrium microstate or it is not. It does not matter how it got there. There is no such thing as an "equilibrium trajectory". There are just trajectories. Because the set of equilibrium microstates so dominates phase space, any trajectory in a large system will, after time, most likely be found in an equilibrium microstate and for a system in the thermodynamic limit, it will "almost certainly" (i.e. with measure 1) be found in an equilibrium microstate.
 
:::The second (revised) part of your statement, using ''R'', confuses me. What does ''R'' represent, a region of phase space? If so, I find the first statement "That the overlap ..." totally confusing. In the thermodynamic limit, when two systems are brought into contact, they form a single system in a non-equilibrium microstate. The ensuing trajectory will certainly take the system to an equilibrium microstate.
 
::::Thank you for your careful response. We are indeed putting our cards on the table.
 
::::One question is about your sentence "I agree, using classical physics, the trajectory is determinate, and not truly random." I agree with you there, but others might not, and might strongly disagree. I would like to continue to work with you, taking that as agreed.
 
::::Another question is about your "thermodynamic limit". I don't agree about that. I am reading you as saying that you want to work in a limit in which fluctuations are utterly infinitesimal and of practically zero magnitude. I am guessing that you go that way because you like to bear in mind a statistical mechanical background in which such a limit is convenient; if you have a different reason, please tell me. I like to think of a finite body. For that, fluctuations are thinkable in such cases as critical states; one can practically see fluctuations in them. Einstein considered systems that to me seem to be thermodynamic systems with fluctuating entropy. I suppose such a system to interact with a thermal reservoir through a diathermal wall. Earlier in this conversation, I have set out my thoughts on this topic, but I think those thoughts have now been archived. In summary, theoretically possible fluctuations occur in the conjugate variable of a state variable that is fixed constant by the definition of the system's state variables and walls. For example, a system defined by <math>U(S,V)</math> can suffer fluctuations in <math>T</math> and in <math>P</math>, but neither in <math>S</math> nor in <math>V</math>. Yes, such fluctuations will usually be on the fractional order of perhaps 10<sup>−20</sup>, and too small to detect. But I am happy to talk about them and to try to think about them. I see no reason to make them unthinkable by going to the limit of an infinitely massive system. I am happy to think of them as so small as to be usually practically negligible. So I disagree with your sentence “For a finite system, there will be fluctuations, and we are, strictly speaking, outside of thermodynamics.” Instead, I would just say that fluctuations are negligibly small.
 
::::In a deterministic finite system, a trajectory of whole-system instantaneous microstates could (let me say 'can' for a small enough system) in principle be defined. It will almost certainly be deterministically chaotic, and will explore its phase space. In the above example, every point in the trajectory will have the fixed constant <math>S</math> and <math>V</math>, because the walls are perfectly rigid, smooth, and elastic. But locally and occasionally <math>T</math> and <math>P</math> will be determined as suitable space–time averages. This sets a formidable, indeed a practically forbidding, mathematical or computing problem. But to me it makes physical sense. For me, the physics wins. I would be perfectly happy to consider a system of 100 or even 10 finite sized molecules. Then the Poincaré recurrence time might even be accessible. Locally and occasionally, there will be detectable fluctuations of <math>T</math> and <math>P</math>. I guess that you would say that such fluctuations are departures from thermodynamic equilibrium, and are composed of non-equilibrium whole-system instantaneous microstates. I would say that they are par for the course, and in perfect accord with the thermodynamic equilibrium, because they belong to the thermodynamic equilibrium trajectory. I would object that your criteria of non-equilibriarity were arbitrary, unenlightening, and confusing. So I hold to my views that a fluctuation doth not a departure from thermodynamic equilibrium make, and that every point on an equilibrium trajectory has its equal claim to be, if you like, an equilibrium point, and that no point on the equilibrium trajectory is a non-equilibrium point, ''etc.''.
 
::::For me, an equilibrium trajectory so defined will explore and define its proper region in whole-system instantaneous microstate phase space. Geometers will measure the entropy of the trajectory. Yes, my regions such as <math>R</math> are regions in that phase space, defined by their respective trajectories. If that is accepted, I think (''modulo'' some details that may be fixable) that what I wrote above makes sense.
 
::::I suppose that a numerical computation of such a finitely defined trajectory would thoroughly comply with your "However, it is an *assumption* that every (determinate) trajectory will eventually enter, and then leave, ANY given neighborhood of phase space if you wait long enough, and, after waiting a long time (many Poincaré recurrence times), the probability that the system will be in a given neighborhood is equal to the volume of that neighborhood divided by the volume of the entire phase space. The ergodic assumption is that if you take every trajectory as equally probable, you will arrive at the same conclusion." I guess that a mathematician might prove it without recourse to numerical computation. We agree that it is ok to watch the proceedings from our seats for times measured on Poincaré clocks; we will be supplied with popcorn ''ad lib''. But I think it is a mathematical stratagem to make your assumptions, and is not obvious from a naïve or novice physical viewpoint. I think those assumptions were discovered by strokes of genius on the parts of Bernoulli, Herapath, Waterston, Clausius, Maxwell, Boltzmann, and their ilk. If they get into the article, they should be presented and celebrated explicitly as such, that is to say, not as obvious physical facts, but as brilliant mathematical stratagems.
 
::::If I can prevail upon you to consider things from the point of view that I have just set out, I hope that you may allow that my idea of 'equilibrium trajectories' will do instead of your ideas such as of 'non-equilibrium points' in an equilibrium state. I think the point of view that I have just set out is physically intuitive, simple, and logically valid. I think that if we assume it, we can write a simpler and more naïvely comprehensible article. I think that the point of view that I have just set out is the one taken by pure mathematicians. I accept that it is unfamiliar to academically trained physicists, who perhaps may even find it idiosyncratic or out there. An advantage of this point of view is that the thermodynamic entropy of an equilibrium state of an isolated system is a fixed constant, and so that the second law is true without probabilistic modification.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 09:33, 20 December 2020 (UTC)
 
:::::Ok, let's do this step by step to find out the point of disagreement.
 
:::::* Do you agree that the second law of thermodynamics in effect states that entropy will never decrease?
 
:::::* Do you agree that for an isolated finite system, entropy fluctuations cannot be eliminated? (Due to Poincaré recurrence). If you disagree, please outline a practical finite system in which entropy is without fluctuation.
 
:::::* Do you agree that for a system with entropy fluctuations, some will constitute a decrease in entropy and will therefore be, strictly speaking, in violation of the second law?
 
:::::* You state: "every point in the trajectory will have the fixed constant ''S'' and ''V''." I assume by "point" you mean microstate. If that is correct, can you outline a method of calculating the entropy of a microstate?
 
:::::[[User:PAR|PAR]] ([[User talk:PAR|talk]]) 15:44, 20 December 2020 (UTC)
 
::::::Ok, this procedure may help. I may observe that in a galaxy far away, the empire's wagons appear to be circling.
 
::::::* Do you agree that the second law of thermodynamics in effect states that entropy will never decrease?
 
::::::I accept that such statements are dearly beloved of those who make them. I think they are slick and shoehorned into brevity, while leaving the nature of entropy and the second law mysterious and baffling. Indeed, their slickness may gratuitously contribute to the bafflement. One person I know is puzzled as to why the subjective concept of knowledge comes into the explanation of the objective fact expressed by the second law. In a nutshell, it is because probability in this context is a superfluous concept, as observed by Guggenheim in 1949. In the language of Jaynes, it comes in ''via'' a mind projection fallacy, not from the physics. To deflect concerns that may arise here, I will say that the second law says that when bodies of matter and radiation are brought together so as to interact intimately, then their total entropy increases. The 'never decreases' wording, as I have mentioned here before, implicitly (and distractingly) allows for the convenient purely theoretical concept of 'reversible thermodynamic processes'.
 
::::::* Do you agree that for an isolated finite system, entropy fluctuations cannot be eliminated? (Due to Poincaré recurrence). If you disagree, please outline a practical finite system in which entropy is without fluctuation.
 
::::::No, I do not agree. Poincaré recurrence does not signal entropy fluctuation. It just follows from the laws of motion.
 
::::::Requested outline: A molecule or billiard ball moves elastically in a rigid enclosure. The enclosure is so shaped that the particle never exactly retraces its trajectory. The particle goes nearly everywhere in the enclosure. It often visits every small finite region of the enclosure. It traces out a possibly space-filling, or at least nearly space-filling, trajectory. The entropy of the thermodynamic system is a property of the state of thermodynamic equilibrium, and is so defined. It is not a property of an instantaneous point on the trajectory. The nearly space-filling trajectory, taken as a whole, defines the entropy of the thermodynamic equilibrium state. It is time invariant because it is defined by the whole trajectory. This is the way mathematicians think about the matter nowadays. The concept of probability provides an attractive mathematical procedure to calculate the entropy, as in Monte Carlo, but that is not the only way to do the calculation. The entropy is a geometric property of the trajectory, and can be calculated directly in geometrical terms, without appeal to probability.
 
::::::In this simple example, only space is explored, and the particle moves with constant speed except at instants of collision. In examples with several particles that can be excited or de-excited in a collision, collisions generate various speeds, so that a more complicated phase space is required. This excitation–de-excitation possibility dispels the mystery of why systems that lack it do not show the usual phenomena. See below.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 21:17, 20 December 2020 (UTC)
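The bouncing-particle outline can be simulated crudely. The sketch below is a hedged toy, not anyone's definitive model: a unit square stands in for the irregular enclosure, and incommensurate velocity components keep the trajectory from exactly retracing itself; tallying time spent in grid cells shows the near-uniform 'covering' from which a geometric entropy could be read off.

```python
import math

def billiard_occupancy(steps=200_000, dt=0.01, bins=4):
    """Toy sketch: a point particle reflecting elastically inside the
    unit square, with an irrational velocity ratio so the path never
    closes on itself.  Returns the fraction of sampled instants spent
    in each cell of a bins x bins grid."""
    x, y = 0.2, 0.3
    vx, vy = 0.7, 0.7 * math.sqrt(2.0)      # incommensurate components
    counts = [0] * (bins * bins)
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if x < 0.0 or x > 1.0:              # elastic reflection at the walls
            x, vx = (-x, -vx) if x < 0.0 else (2.0 - x, -vx)
        if y < 0.0 or y > 1.0:
            y, vy = (-y, -vy) if y < 0.0 else (2.0 - y, -vy)
        counts[min(int(y * bins), bins - 1) * bins + min(int(x * bins), bins - 1)] += 1
    return [c / steps for c in counts]

occ = billiard_occupancy()
# near-uniform covering: every cell's occupancy should sit close to 1/16
print(min(occ), max(occ))
```

Each of the 16 cells ends up holding close to 1/16 of the visits, illustrating the trajectory 'covering' its region nearly uniformly over time.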
 
::::::* Do you agree that for a system with entropy fluctuations, some will constitute a decrease in entropy and will therefore be, strictly speaking, in violation of the second law?
 
::::::This is like asking me 'have I stopped beating my wife?' Entropy does not fluctuate in an isolated system in a state of thermodynamic equilibrium. My reason is in the just foregoing examples. I think it saves a lot of unnecessary heartache to think of the second law without the artificial worry of 'fluctuating entropy in an isolated system'. Entropy fluctuations appear to occur in a system in thermal equilibrium across a diathermal wall with a temperature reservoir in the surroundings; such a system is not isolated. Such equilibria show no temperature fluctuations. No violation of the second law occurs because it is about total entropy, which is a property of the system and reservoir considered jointly as an isolated system.
 
::::::* You state: "every point in the trajectory will have the fixed constant ''S'' and ''V''." I assume by "point" you mean microstate. If that is correct, can you outline a method of calculating the entropy of a microstate?
 
::::::In general, an instantaneous microstate, aka 'point in the trajectory', does not have a physical entropy. There is no reason to try to calculate it. Physical entropy is a property of a trajectory, which can be identified by the law of motion that generates it. That's how present day mathematicians do it.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 20:51, 20 December 2020 (UTC)
 
::::::Thinking it over.
 
::::::Above, I wrote “In examples with several particles that can be excited or de-excited in a collision, collisions generate various speeds, so that a more complicated phase space is required. This excitation–de-excitation possibility dispels the mystery of why systems that lack it do not show the usual phenomena. See below.”
 
::::::Yes, such excitation and de-excitation brings in a topic from which I am topic banned. I might write more about it were I not banned. It does indeed make the business stochastic, and probabilistic. This blows away some of my above reasoning. Cercignani mentions some curious closely relevant facts without explaining them. A quietly historically recognised example is the case of the inverse fifth power particle force law. It was early recognised, I think by Maxwell, as exactly solvable, and does not show the expected spreading. For this reason, it is not widely celebrated; indeed it is often not mentioned. Now for the first time I understand it; I don't recall reading this explanation. Avoiding [[WP:OR]], I guess someone will fill me in on it. It may deserve specific explicit appearance in the article. But it does not detract from the main concern here, about physical entropy being a property of a trajectory, not of an instantaneous microstate.[[User:Chjoaygame|Chjoaygame]] ([[User talk:Chjoaygame|talk]]) 21:48, 20 December 2020 (UTC)
 
It seems to me that none of this material by [[User:Chjoaygame|Chjoaygame]] is relevant for this article, which is an "Introduction to .." article. Some of it may be relevant to [[Entropy]], but I am not even clear about that. This article should be understood by someone who has just been introduced to the topic of entropy. [[User:Chjoaygame|Chjoaygame]], please stop wasting our time reading through your long edits to see if anything is relevant to this article. --[[User:Bduke|Bduke]] ([[User talk:Bduke|talk]]) 22:27, 20 December 2020 (UTC)
 
== Outstanding questions ==