$\begingroup$

I have a question about the second edition of Callen's Thermodynamics and an Introduction to Thermostatistics, concerning his third postulate:

Postulate III. [...] The entropy is continuous and differentiable and is a monotonically increasing function of the energy.

I am curious, specifically, about the fact that the entropy is postulated to be a continuous function of the system's extensive parameters.

Consider an isolated, simple system consisting of two chemical species, 1 and 2, that is initially not in a state of maximum entropy, with entropic fundamental relation $S=S(N_1,N_2)$. Assume further that species 1 can become species 2, and vice versa, through some suitable chemical reaction. The entropy is postulated to be such that the extensive parameters change so as to maximize it. This means that $N_1$ and $N_2$ will eventually assume values that maximize the entropy.

In reality, $N_1$ and $N_2$ can only assume certain discrete values. But $S$ is a continuous function of the extensive parameters, so it is also defined for vectors $(N_1,N_2)$ that are not physically realizable. Is it possible for the state of maximum entropy to be one that is unphysical? If it is possible, does it even matter?

EDIT

After some thinking, I believe I understand this now. Macroscopic equilibrium variables are "averages" over microscopic interactions, so we might very well have a case where such an average is not an integer.
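
To convince myself, here is a minimal numerical sketch (a toy two-species mixing entropy, not a model taken from Callen): for a reaction $1\rightleftharpoons 2$ with $N_1+N_2=N_\text{tot}$ fixed and odd, the continuous formalism puts the maximum at the non-integer point $N_1=N_\text{tot}/2$, yet the entropy there exceeds that of the best integer state only negligibly.

```python
# Toy check: mixing entropy S(N1) = ln C(Ntot, N1) for the reaction 1 <-> 2
# with N1 + N2 = Ntot fixed. The continuous maximum sits at N1 = Ntot/2,
# which is unphysical (non-integer) when Ntot is odd.
from math import lgamma

def S(n1, ntot):
    """ln C(ntot, n1), extended to real n1 via the log-gamma function."""
    return lgamma(ntot + 1) - lgamma(n1 + 1) - lgamma(ntot - n1 + 1)

ntot = 10**6 + 1                                 # odd: maximizer is non-integer
s_cont = S(ntot / 2, ntot)                       # unphysical continuous maximum
s_int = max(S(ntot // 2, ntot), S(ntot // 2 + 1, ntot))  # best physical state
print((s_cont - s_int) / s_cont)                 # relative gap, ~1e-12 here
```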

$\endgroup$
    $\begingroup$ $N_1$ and $N_2$ are continuous variables in "Part 1" of Callen $\endgroup$ Commented Nov 16 at 14:07
  • $\begingroup$ @hyportnex, I understand that they are assumed to be continuous for the sake of the formalism. My question is more about the implications of this fact since they are not continuous in real life. $\endgroup$ Commented Nov 16 at 15:53
  • $\begingroup$ Callen, like everybody else, assumes that matter is continuous: no particles, no statistics, etc. This works for macroscopic/mesoscopic amounts of matter, but it fails for very small amounts of matter, along with classical thermodynamics itself, whose concepts, such as temperature, volume, length, area, pressure, surface tension, etc., lose their meaning there. $\endgroup$ Commented Nov 16 at 16:10
  • $\begingroup$ @hyportnex, so say we initially have $N_1 = 10$ moles. Are we assuming that every real value of $N_1$ between 0 and 10 moles is "allowed", even if many of them are unphysical? Can this not lead to situations where entropy is maximized for mole numbers which are completely unphysical? Or is the error just so small that it doesn't matter? $\endgroup$ Commented Nov 16 at 16:55
  • $\begingroup$ Said very crudely, this is all some manifestation of the law of large numbers, or even better, the law of the iterated logarithm; see any book on probability theory, such as Feller. If you have $N$ more or less independent events, then their sum will fluctuate on the order of $\sqrt{N}$, and the "maximum" fluctuation is of the order $\sqrt{N \ln\ln N}$, implying that the relative error is about $N^{-\tfrac{1}{2}}$; more specifically, when $N\approx 10^{12}$ the relative error is $\approx 10^{-6}$, good enough considering the accuracy of any macroscopic thermodynamic variable... (A quick simulation of this scaling is sketched after these comments.) $\endgroup$ Commented Nov 16 at 18:11
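
A quick way to check the $N^{-1/2}$ scaling invoked in the last comment is a direct simulation. The sketch below (plain Python, with sample sizes chosen only so it runs quickly) estimates the standard deviation of the mean of $N$ independent $0/1$ events and compares it with the predicted $0.5\,N^{-1/2}$.

```python
# Empirical check: the mean of N independent 0/1 events fluctuates
# like N**-0.5 (predicted standard deviation of the mean: 0.5/sqrt(N)).
import random

trials = 300
for N in (10**2, 10**3, 10**4):
    means = [sum(random.getrandbits(1) for _ in range(N)) / N
             for _ in range(trials)]
    mu = sum(means) / trials
    sd = (sum((m - mu) ** 2 for m in means) / trials) ** 0.5
    print(N, round(sd, 5), round(0.5 / N ** 0.5, 5))
```

Extrapolating the same law to $N\approx 10^{12}$ reproduces the quoted relative error of about $10^{-6}$.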

3 Answers

$\begingroup$

The rigorous mathematics of classical thermodynamics only works in the thermodynamic limit, which, amongst other requirements, requires that $N\to\infty$.

This is also the limit in which all the finite differences can be replaced by derivatives.

The truth of this is even more apparent in statistical thermodynamics, but it is already quite self-evident in classical thermodynamics. It also means that the conceptually correct way to obtain the Sackur–Tetrode entropy formula from purely classical thermodynamics is to write $$S(U,V,N)=N\,s\!\left(\frac UN,\frac VN\right),\qquad s(u,v)\equiv S(u,v,1),$$ when carrying out the energy–volume (per particle) integration.

Conceptually, we are replacing every system under consideration by an infinitely large system and then cookie-cutting out the actual size of our system. The price to pay is that we have to make sure the boundary terms do not matter, and they really only do not matter if the boundary is always a tiny correction to the system size; this in turn only happens if all three dimensions of the system are large compared to the boundary bounding them.
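
As a toy illustration of the finite-differences-to-derivatives claim (my own sketch, not a formula from any particular text): compare the exact single-particle difference of $\ln N!$ with the derivative of its Stirling approximation $N\ln N - N$, and note that the per-particle Stirling error also dies off.

```python
# Finite differences -> derivatives as N -> infinity (lgamma(N+1) = ln N!).
from math import lgamma, log

for N in (10, 10**3, 10**6):
    exact_diff = lgamma(N + 2) - lgamma(N + 1)  # S(N+1) - S(N) = ln(N+1)
    derivative = log(N)                         # d/dN of N*log(N) - N
    stirling_gap = (lgamma(N + 1) - (N * log(N) - N)) / N  # per-particle error
    print(N, exact_diff - derivative, stirling_gap)        # both shrink to 0
```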

$\endgroup$
  • $\begingroup$ Thank you for your answer. I am having some trouble understanding this. Do you happen to have any sources that discuss this in terms of the classical theory? I have understood that in the classical theory we assume that variables such as $N$ are continuous. But I am having some trouble understanding exactly what that even means mathematically in this context. Usually a variable is not continuous -- a function is. $\endgroup$ Commented Nov 17 at 12:34
  • $\begingroup$ As an addition to the above comment: when we say that "$N$ is continuous", do we simply mean that we permit "continuous changes" in our theory? Since we have an agglomeration of at least some $10^{23}$ particles, I suppose a change of $\pm 1$ particle would "look" continuous macroscopically. $\endgroup$ Commented Nov 17 at 12:58
  • $\begingroup$ I never said that $N$ is continuous. I said that we take $N\to\infty$, so that $u=\frac UN$ and $v=\frac VN$ can change continuously, and then we substitute the correct $N$ in the last step. $\endgroup$ Commented Nov 17 at 13:17
$\begingroup$

I'll try to complement the already excellent answer by @naturallyInconsistent with a more comprehensive discussion from the mathematical point of view, and a purely thermodynamic argument from the physical perspective.

Let me start with a math preamble that could help to clarify the question itself.

The definition of a continuous function in terms of neighborhoods, which is the most general (topological) definition of continuity (see Wikipedia), implies that "a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous."

Therefore, the real question should not be about the continuity of the entropy as a function of the (integer) number of molecules, but about the meaning of extending its domain to non-integer values of $N$.

It is here that the observation made by naturallyInconsistent plays the key role, feeding some important physics into the mathematical description:

  • The assumption (postulate) by Callen of the additivity of the energy and the entropy can only be valid provided surface effects can be neglected. For ordinary matter, this is possible only for macroscopic systems, where the number of molecules is not far from the order of magnitude of Avogadro's number ($6.022\times 10^{23}$). Notice that, within classical thermodynamics, the requirement of macroscopic systems does not come from the need to neglect fluctuations, but from the need to ensure the additivity of the energy.
  • Intuitively, once $N$ is of the order of Avogadro's number, the minimum (unit) variation of $N$ becomes a negligible relative variation. Things can be put in a more precise mathematical form by exploiting additivity, as in the formula given by naturallyInconsistent. In particular, a formula like $$ S(U,V,N)=V s(u,n), $$ where $s=S/V$, $u=U/V$ and $n=N/V$, even if for a fixed $V$ it is defined only on a disconnected set of values of $u$ and $n$, can be extended to intervals of real values of $u$ and $n$ by varying $V$, as in the sketch after this list.
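
To make the second bullet concrete, here is a sketch using a crude ideal-gas-like counting entropy (all constants dropped; purely illustrative, not a formula from Callen): pick a target real density $n$ and energy density $u$, realize them exactly with an integer $N$ by setting $V=N/n$ and $U=uV$, and watch $s=S/V$ settle to a definite value as $N$ grows.

```python
# Extension by varying V: s = S/V at a deliberately non-integer density n.
# Toy entropy S = N ln V + (3N/2) ln U - ln N! - ln Gamma(3N/2 + 1)
# (ideal-gas-like counting estimate with all constants dropped).
from math import lgamma, log

def S(U, V, N):
    return N * log(V) + 1.5 * N * log(U) - lgamma(N + 1) - lgamma(1.5 * N + 1)

n, u = 0.731, 2.4                    # target densities, both non-integer
for N in (10, 10**3, 10**6, 10**9):
    V = N / n                        # physically realizable: integer N, real V
    print(N, S(u * V, V, N) / V)     # s = S/V converges as N grows
```

The printed values approach a fixed number, with corrections of order $\ln N/N$: the extended function is the common limit of physically realizable states.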
$\endgroup$
  • $\begingroup$ I am still parsing this answer, but I think I understand the gist. Since the minimum unit variation of $N$ is so extremely small compared to the number of particles we have, even though $N\in\{0,1,2,\dots,X\}$ is discrete in reality, it is okay to view it as living on the interval $[0,X]$. Since our resolution is macroscopic, varying $N$ by integer steps looks the same to us as if it were varied continuously? $\endgroup$ Commented Nov 18 at 16:58
  • $\begingroup$ @Anna, you got the point. $\endgroup$ Commented Nov 18 at 17:00
$\begingroup$

Every branch and aspect of science starts out by making approximations and simplifications in order to make progress. Elementary particle physics, with things like the fine-structure constant and $g-2$ measurements, reaches very high precision (of order parts in $10^{12}$), but the influence of a change of $N$ by $1$, when $N$ is of the order of Avogadro's number, is smaller still by a huge factor.

What I am leading up to is that thermodynamics starts out by making the idealisation that the physical parameters are continuous, not discrete. Under this idealisation one makes all sorts of derivations and predictions. The precision available is parts in $10^{23}$ for systems of ordinary size, and parts in $10^{12}$ for things of micron dimensions.

If we now come along and say, "OK, but I would now like to discuss the situation of discreteness: how should I proceed?", then we have to go back to basics and start to develop an agreed language. We will employ quantum physics and statistical methods. Thermodynamics then serves as a useful way to structure the concepts and to offer predictions that apply with extremely high precision in the thermodynamic limit, and with reasonable precision outside that limit.

The phrase "the state of maximum entropy", in ordinary scientific usage, refers to a state that is available to the system. That's usually a state of an integer number of particles, but you can only explore a maximisation of entropy under changes in the particle number in cases where the number can change, and in such cases there will be thermal fluctuations in the number of particles. After taking a time-average, the average number of particles in such a system need not be an integer.

$\endgroup$
  • $\begingroup$ So is the thinking that, since our scale is at least of the order of $10^{23}$ particles, we can think of $N$ as being able to change continuously even though it strictly changes discretely, since each change is so small compared to the number of particles we have? If we drew a number line representing the number of particles $N$ in a system, removed one particle at a time, and marked $N$ after each removal, the marks would approximate a continuous line very well given our "resolution". Is this the kind of thinking we are employing here? $\endgroup$ Commented Nov 17 at 14:20
  • $\begingroup$ Yes, that is right in the case where $N$ is constrained to a fixed value (the closed system). But if $N$ is not constrained then you have the case where an average value of $N$ need not be an integer. $\endgroup$ Commented Nov 17 at 14:55
  • $\begingroup$ Ah, I think I understand now! For example, if a bunch of molecules are leaving and entering our system in some regular way, then our mole numbers (a "macroscopic average") might be non-integer? $\endgroup$ Commented Nov 17 at 18:09
