Today there is real statistical mechanics: an ensemble is defined and an average is calculated.
Microcanonical & Canonical Ensembles
It has been demonstrated that there is a huge number of microstates. It is possible to connect thermodynamics with the complexity of the microscopic world. Below are definitions.
Microstate
A microstate is a particular state of a system specified at the atomic level. This could be described by the many-body wavefunction. A system is something that fluctuates between different microstates over time. An example is the gas from last time; there is immense complexity. Fix the variables <math>T, V,</math> and <math>N</math>. With only these variables known, the microstate the system is in at the many-body wavefunction level is unknown.
Consider a solid. It is possible to access different configurational states, and two are shown below. Diffusion results in changes in state over time; X-ray images may not be clear because atoms diffuse and the system accesses different microstates. There may be vibrational excitations, and electronic excitations correspond to the excitation of electrons to different levels. These excitations specify what state the solid is in, and any combination of excitations specifies the microstate of a system.
[Figure: Two configurational states of a solid]
Summary
A particular state of a system specified at the atomic level (the many-body wavefunction level, <math>\Psi_{\text{many-body}}</math>). The system fluctuates between different microstates over time.
- There is immense complexity in a gas
- Solid:
  - Configurational states
  - Vibrational excitations
  - Electronic excitations
- There is usually a combination of these excitations, which results in immense complexity
- Excitations specify the microstate of the system
Why Ensembles?
A goal is to find <math>P_v</math>, the probability that a system is in state <math>v</math>. Thermodynamic variables are time averages. For example, the average energy is found by summing over states, using the Schrödinger equation for the energy of each state and multiplying by its probability: <math>\overline{E} = \sum_v E_v P_v</math>.
To facilitate averages, ensembles are introduced. An ensemble is a collection of a very large number of macroscopically identical systems. Look at the whole ensemble and see what states the systems could be in. Below is a diagram of systems and ensembles: each box represents a system, and the collection of systems is an ensemble. Each box could represent a class, and <math>v</math> could represent the sleep state. Look at the properties of the ensemble. There are <math>A</math> macroscopically identical systems, and eventually <math>A</math> is taken to <math>\infty</math>. Each system evolves over time.
[Figure: Ensemble of systems]
Deriving <math>P_v</math>
The probability <math>P_v</math> is defined for different kinds of boundary conditions. When looking at the probability that students in a class are asleep, it is possible to take an instantaneous snapshot. The term <math>a_v</math> is the occupation number, defined as the number of systems that are in state <math>v</math> at the time of the snapshot. The fraction of systems in state <math>v</math> is <math>P_v</math>. This is one approximation to get the probability, and it could be a bad approximation.
<math>P_v \approx \frac{a_v}{A}</math>
There are additional definitions of <math>P_v</math>: it is equal to the probability of finding a system in state <math>v</math> at time <math>t</math>, or, identically, the fraction of time spent by the system in state <math>v</math>. In the sleep example, it is equal to the fraction of time that anyone is in the sleep state.
The time average corresponds to watching one class and seeing for what fraction of time someone in the class is asleep. The ensemble average corresponds to looking at a set of identical classes and seeing how many classes have at least one student asleep. There is a correspondence between the ensemble average and the time average. Boundary conditions are needed. Take a picture of a large number of systems, look at every one, and average.
Summary
Recap:
<math>\overline{E} = \sum_v E_v P_v</math>
To facilitate averages, we introduce "ensembles" that we average over
* Averaging over many bodies rather than averaging over time
* Example: student = system, v = sleep state
Ensemble of systems:
* 'A' (a very large number) macroscopically identical systems
* Each system evolves over time
Probability:
* Take an instantaneous snapshot
* Define <math>a_v</math> = number of systems that are in state v at the time of the snapshot
* Fraction of systems in state v is <math>\frac{a_v}{A} \simeq P_v</math>
* <math>P_v</math> = probability to find a system in state v at time t
* <math>P_v</math> = fraction of time spent in state v
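To make the two averages concrete, below is a minimal Python sketch; the two-state "sleep" system and its 30% sleep probability are invented for illustration. It compares an instantaneous snapshot of many systems with a long time average of one system.

```python
import random

# Hypothetical toy model: at any instant, each student (system) is
# asleep (state 1) with probability 0.3, awake (state 0) otherwise.
P_SLEEP = 0.3

def observe():
    # One observation of one system: sample its current state
    return 1 if random.random() < P_SLEEP else 0

# Ensemble average: an instantaneous snapshot of A systems
A = 100_000
a_v = sum(observe() for _ in range(A))   # occupation number of the sleep state
P_v_ensemble = a_v / A

# Time average: one system observed over T time steps
T = 100_000
P_v_time = sum(observe() for _ in range(T)) / T

print(P_v_ensemble, P_v_time)            # both approach 0.3
```

Both estimates converge to the same <math>P_v</math> as <math>A</math> and the observation time grow, which is the point of replacing a time average with an ensemble average.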
Microcanonical Ensemble
The boundary conditions of all systems of the microcanonical ensemble are the same. The variables <math>N, V,</math> and <math>E</math> cannot fluctuate. Each system can only fluctuate between states with fixed energy <math>E</math>.
[Figure: Microcanonical ensemble]
It is possible to get the degeneracy from the Schrödinger equation: count all the states that share the energy <math>E</math>.
<math>\hat{H}\Psi = E\Psi \rightarrow \underbrace{(\Psi_1, \Psi_2, \ldots, \Psi_\Omega)}_{\Omega(E)}</math>
Consider the example of the hydrogen atom. Below are the energy proportionality and the degeneracy when <math>n=1</math> and <math>n=2</math>.
<math>E_n \propto -\frac{1}{n^2}</math>
<math>\Omega(n=1) = 1</math>
<math>\Omega(n=2) = 4</math>
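As a quick check of the degeneracy count, here is a short Python sketch, assuming the standard hydrogen quantum numbers without spin (which is what the counts above use): for each <math>n</math>, the states are labeled by <math>l = 0, \ldots, n-1</math> and <math>m = -l, \ldots, l</math>, giving <math>\Omega(n) = n^2</math>.

```python
# Degeneracy of hydrogen level n, ignoring spin: sum over l of (2l + 1) = n^2
def hydrogen_degeneracy(n: int) -> int:
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3):
    print(n, hydrogen_degeneracy(n))   # 1 -> 1, 2 -> 4, 3 -> 9
```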
What is <math>P_v</math> in the microcanonical ensemble?
All states should be equally probable with variables <math>N, V,</math> and <math>E</math> fixed. The term <math>P_v</math> is the probability of being in any state of energy <math>E</math> for a system, and it should be equal to a constant. An expression is below. Each state can be accessed, and none is favored over another. This is related to the principle of equal a priori probability: there is no information suggesting that states should be accessed with different probabilities.
<math>P_v = \frac{1}{\Omega(E)}</math>
Example
Consider an example of a gas in a box. In the first configuration the gas fills the box; in the second, all the atoms are in one corner. Adding over configurations gives the complete degeneracy. The value of <math>\Omega_1(E)</math> is large; there is enormous degeneracy.
[Figure: Microcanonical ensemble II - gas filling the box versus gas in one corner]
<math>\Omega_1(E) \gg \Omega_2(E)</math>
Consider a poker hand. There is a lot of equivalence among bad hands; these are dealt most of the time and correspond to <math>\Omega_1(E)</math>. The royal flush corresponds to <math>\Omega_2(E)</math>. Each particular hand is equally probable, but there are many fewer ways to get a royal flush. The boundary conditions are the same. In an isolated system, in which <math>N, V,</math> and <math>E</math> are fixed, it is equally probable to be in any of its <math>\Omega(E)</math> possible quantum states.
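The counting is easy to make explicit. Below is a small Python sketch using standard poker counts (a fact not in the notes: there are exactly 4 royal flushes among the <math>\binom{52}{5}</math> five-card hands):

```python
from math import comb

total_hands = comb(52, 5)    # 2,598,960 equally probable five-card hands
royal_flushes = 4            # one royal flush per suit

# Every individual hand is equally probable (equal a priori probability),
# but the royal-flush "configuration" has vastly fewer microstates.
print(total_hands, royal_flushes / total_hands)   # 2598960 ~1.5e-06
```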
Summary
The variables <math>N, V,</math> and <math>E</math> are fixed
- Each system can only fluctuate between states with fixed energy E (as from the Schrödinger equation)
  * <math>\hat{H}\Psi = E\Psi \rightarrow (\Psi_1, \Psi_2, \ldots, \Psi_\Omega)</math>
  * <math>\Omega(E)</math>: all states are equally probable and are given equal weight
Hydrogen atom
* <math>E_n \propto -\frac{1}{n^2}</math>
* <math>\Omega(n=1) = 1</math>
* <math>\Omega(n=2) = 4</math>
Probability of being in any E state for a system
- Employ the principle of equal a priori probabilities
* <math>P_v = \text{constant}</math>
* <math>P_v = \frac{1}{\Omega(E)}</math>
Systems
- All states are equally probable
- Some configurations are observed more often because of a large degeneracy number
- There are just many more ways to get the configuration on the left (like a crap hand) than the one on the right (like a straight flush)
  * <math>\Omega_1(E) \gg \Omega_2(E)</math>
An isolated system (<math>N, V, E</math> fixed) is equally probable to be in any of its <math>\Omega(E)</math> possible quantum states.
Canonical Ensemble
There is a different set of boundary conditions in the canonical ensemble. The walls or boundaries of each system are heat conducting. Each of the <math>A</math> members of the ensemble finds itself in the heat bath formed by the other <math>A-1</math> members. Each system can fluctuate between different microstates, but an energy far from the average is unlikely. In the picture below, the total energy of the ensemble is fixed, while the energy of a particular system is not fixed and can fluctuate.
[Figure: Canonical ensemble]
Take another snapshot. The interest is in the distribution of occupation numbers <math>\{a_v\} = \overline{a}</math>, where <math>a_v</math> is equal to the number of systems in state <math>v</math>.
Below are a table of microstates, energies, and occupation numbers, and a graph of occurrence versus energy. In the graph, equilibrium has been reached, but all states can be accessed; it is possible to access states some distance from the average energy. The total energy <math>\epsilon</math> is fixed and is equal to the integral of the curve. As the number of systems increases, the curve becomes sharper.
State: <math>1, 2, 3, \ldots, \nu</math>
Energy: <math>E_1, E_2, E_3, \ldots, E_\nu</math>
Occupation: <math>a_1, a_2, a_3, \ldots, a_\nu</math>
[Figure: Occurrence versus energy]
Constraints
Below are the constraints. The first fixes the sum of the occupation numbers; the second is possible because the entire ensemble is isolated.
<math>\sum_v a_v = A</math>
<math>\sum_v a_v E_v = \epsilon</math>
The term <math>P_v</math> is the probability of finding the system in state <math>v</math>. It is possible to use the snapshot probability, but many distributions satisfy the boundary conditions. There is a better way to find <math>P_v</math>, the relation below: it corresponds to the average distribution. This is associated with a crucial insight.
<math>P_v = \frac{\overline{a_v}}{A}</math>
Crucial Insight
An assumption is that the entire canonical ensemble is isolated. No energy can escape, and the energy <math>\epsilon</math> is constant. Every distribution of occupation numbers <math>\overline{a}</math> that satisfies the boundary conditions is equally probable. It is possible to write a many-body wavefunction for the whole ensemble because its total energy is fixed. The principle of equal a priori probabilities is applied: look at all the distributions that satisfy the boundary conditions, and give each distribution of occupation numbers equal weight.
Summary
The variables N, V, T are fixed
* There are heat-conducting boundaries of each system
* Each of the A (= large number) members finds itself in a heat bath, formed by the (A - 1) other members
* Take a snapshot; get the distribution <math>\{a_v\} = \overline{a}</math> (<math>a_v</math> = number of systems in state v)
Constraints
The total energy <math>\epsilon</math> is fixed:
<math>\sum_v a_v = A</math>
<math>\sum_v a_v E_v = \epsilon</math> (isolated!)
Probability
<math>P_v \simeq \frac{a_v}{A}</math> is an approximation.
- Better to use <math>P_v = \frac{\overline{a_v}}{A}</math>, the averaged distribution
There is an assumption that the whole canonical ensemble is isolated and that the energy <math>\epsilon</math> is constant. Every distribution of <math>\overline{a}</math> that satisfies the boundary conditions is equally probable.
- We are applying the principle of equal a priori probabilities, and each distribution of occupation numbers must be given equal weight.
Some Math
Consider every possible distribution consistent with the boundary conditions, and for each distribution consider every possible permutation. The term <math>w(\overline{a})</math> is equal to the number of ways to obtain a distribution <math>\overline{a}</math>, where <math>a_v</math> is the number of systems in state <math>v</math>. A bad hand in poker corresponds to a distribution with a large <math>w</math>. Use the multinomial coefficient.
<math>\{a_i\} = \overline{a}</math>
<math>w(\overline{a}) = \frac{A!}{a_1! \, a_2! \, a_3! \cdots a_v!} = \frac{A!}{\prod_v a_v!}</math>
<math>a_1 = \text{number of systems in state } 1</math>
<math>a_\nu = \text{number of systems in state } \nu</math>
Below are expressions of the probability of being in a certain state. The term <math>a_v</math> is averaged over all possible distributions. Every distribution is given equal weight, and the one with the most permutations is the most favored.
<math>P_v = \frac{\overline{a_v}}{A}</math> (applying the principle of equal a priori probabilities)
<math>P_v = \frac{1}{A} \cdot \frac{\sum_{\overline{a}} w(\overline{a}) \, a_v(\overline{a})}{\sum_{\overline{a}} w(\overline{a})}</math>
Example
Below is an example of four systems in an ensemble, each of which can be in state 1 or state 2, with energies <math>E_1</math> and <math>E_2</math> and occupation numbers <math>a_1</math> and <math>a_2</math>. The term <math>P_1</math> is the probability of any system being in state 1.
Distribution | <math>a_1</math> | <math>a_2</math>
A | 0 | 4
B | 1 | 3
C | 2 | 2
D | 3 | 1
E | 4 | 0
The number of permutations of each distribution and the resulting probability:
<math>w(A) = \frac{4!}{0! \, 4!} = 1</math>
<math>w(B) = \frac{4!}{1! \, 3!} = 4</math>
Similarly, <math>w(C) = 6</math>, <math>w(D) = 4</math>, and <math>w(E) = 1</math>.
<math>P_1 = \frac{1}{A} \left( \frac{\sum_{\overline{a}} w(\overline{a}) \, a_1(\overline{a})}{\sum_{\overline{a}} w(\overline{a})} \right) = \frac{1}{4} \left( \frac{1 \cdot 0 + 4 \cdot 1 + 6 \cdot 2 + 4 \cdot 3 + 1 \cdot 4}{1 + 4 + 6 + 4 + 1} \right)</math>
<math>P_1 = \frac{1}{4} \cdot \frac{32}{16} = \frac{1}{2}</math>
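The arithmetic above can be verified with a few lines of Python (a sketch of this specific four-system, two-state example):

```python
from math import factorial

A = 4
dists = [(a1, A - a1) for a1 in range(A + 1)]   # distributions A..E: (0,4)..(4,0)

def w(dist):
    # Multinomial count: w(a) = A! / (a_1! a_2!)
    out = factorial(A)
    for a in dist:
        out //= factorial(a)
    return out

weights = [w(d) for d in dists]                 # [1, 4, 6, 4, 1]
avg_a1 = sum(wt * d[0] for wt, d in zip(weights, dists)) / sum(weights)
print(weights, avg_a1 / A)                      # [1, 4, 6, 4, 1] 0.5
```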
Distribution of w(a)
The term <math>w(\overline{a})</math> is the number of permutations for a particular distribution. As the number of systems <math>A</math> increases, the distribution becomes more peaked.
[Figures: <math>w(\overline{a})</math> versus <math>\overline{a}</math>, and the same plot for large <math>A</math>, where the peak is much sharper]
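The sharpening is easy to see numerically. A minimal sketch, assuming two equally likely states so that <math>w(a_1) = \binom{A}{a_1}</math>: as <math>A</math> grows, almost all of the <math>2^A</math> total permutations belong to distributions with <math>a_1/A</math> within a few percent of <math>1/2</math>.

```python
from math import comb

# Fraction of all permutations carried by distributions within 5% of a_1/A = 0.5
for A in (10, 100, 1000):
    total = 2 ** A
    near_peak = sum(comb(A, k) for k in range(A + 1) if abs(k / A - 0.5) <= 0.05)
    print(A, near_peak / total)   # ~0.25, ~0.73, ~0.999: the peak dominates
```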
Consider the probability. Because <math>w(\overline{a})</math> is so peaked, the sums are dominated by the most probable distribution <math>\overline{a}^*</math>:
<math>P_\nu = \frac{1}{A} \cdot \frac{\sum_{\overline{a}} w(\overline{a}) \, a_\nu(\overline{a})}{\sum_{\overline{a}} w(\overline{a})}</math>
<math>P_\nu \approx \frac{1}{A} \cdot \frac{w(\overline{a}^*) \, a_\nu^*}{w(\overline{a}^*)}</math>
<math>P_\nu \approx \frac{a_\nu^*}{A}</math> (Equation 1)
Look at the distribution that maximizes <math>w(\overline{a})</math>, the permutation number. To get <math>a_\nu^*</math>, maximize <math>w(\overline{a})</math> subject to the constraints below.
<math>\sum_\nu a_\nu - A = 0</math>
<math>\sum_\nu a_\nu E_\nu - \epsilon = 0</math>
Use Lagrange multipliers, and maximize <math>\ln w(\overline{a})</math> in order to be able to use Stirling's approximation.
<math>\frac{\partial}{\partial a_\nu} \left( \ln w(\overline{a}) - \alpha \sum_k a_k - \beta \sum_k a_k E_k \right) = 0</math>
<math>w(\overline{a}) = \frac{A!}{\prod_k a_k!}</math>
<math>\ln w(\overline{a}) = \ln A! - \ln \prod_k a_k! = \ln A! - \sum_k \ln a_k!</math>
Use Stirling's approximation as <math>A</math> and the occupation numbers <math>a_k</math> go to infinity:
<math>\sum_k \ln a_k! \approx \sum_k \left( a_k \ln a_k - a_k \right) = \sum_k a_k \ln a_k - A</math>
<math>\frac{\partial}{\partial a_\nu} \left( \ln A! - \sum_k a_k \ln a_k + A - \alpha \sum_k a_k - \beta \sum_k a_k E_k \right) = 0</math>
Differentiating term by term with respect to <math>a_\nu</math>:
<math>\frac{\partial}{\partial a_\nu} \left( \ln A! - \sum_k a_k \ln a_k + A \right) = -\ln a_\nu - 1, \quad \frac{\partial}{\partial a_\nu} \left( \alpha \sum_k a_k \right) = \alpha, \quad \frac{\partial}{\partial a_\nu} \left( \beta \sum_k a_k E_k \right) = \beta E_\nu</math>
<math>-\ln a_\nu - 1 - \alpha - \beta E_\nu = 0</math>
The term <math>a_\nu^*</math> is the occupation number that maximizes the expression: <math>a_\nu^* = e^{-\alpha'} \cdot e^{-\beta E_\nu}</math>, where <math>\alpha' = \alpha + 1</math>. Use the constraints to determine the Lagrange multipliers and determine the probability.
<math>\sum_\nu a_\nu^* = \sum_\nu e^{-\alpha'} \cdot e^{-\beta E_\nu} = A</math>
<math>e^{-\alpha'} \sum_\nu e^{-\beta E_\nu} = A</math>
<math>e^{\alpha'} = \frac{1}{A} \sum_\nu e^{-\beta E_\nu}</math>
The probability of being in a certain state <math>\nu</math> can be calculated, but it is still in terms of the second Lagrange multiplier. Plugging back into Equation 1:
<math>P_\nu = \frac{a_\nu^*}{A} = \frac{1}{A} \cdot \frac{A \, e^{-\beta E_\nu}}{\sum_\nu e^{-\beta E_\nu}} = \frac{e^{-\beta E_\nu}}{\sum_\nu e^{-\beta E_\nu}}</math>
Partition Function
The denominator is the partition function, <math>Q = \sum_\nu e^{-\beta E_\nu}</math>. It tells us how many states are accessible to the system. Determine <math>\beta</math>, a measure of how many states are thermally accessible, and look at how the partition function connects to macroscopic thermodynamic variables. Find <math>\beta</math> and find <math>\overline{E}</math>.
<math>\overline{E} = \sum_\nu E_\nu P_\nu</math>
<math>\overline{E} = \frac{\sum_\nu E_\nu e^{-\beta E_\nu}}{\sum_\nu e^{-\beta E_\nu}}</math>
Consider the average pressure. The pressure for one microstate is <math>p_\nu</math>.
<math>p_\nu = \frac{-\partial E_\nu}{\partial V}</math>
<math>\overline{p} = \sum_\nu P_\nu \, p_\nu</math>
<math>\overline{p} = \frac{-\sum_\nu \left( \frac{\partial E_\nu}{\partial V} \right) e^{-\beta E_\nu}}{\sum_\nu e^{-\beta E_\nu}}</math>
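To tie the pieces together, here is a minimal Python sketch (the three energy levels and <math>\beta = 1</math> are invented for illustration) computing the partition function, the Boltzmann probabilities, and the average energy:

```python
import math

beta = 1.0                 # assumed inverse-temperature value for the example
E = [0.0, 1.0, 2.0]        # hypothetical energy levels (in units of 1/beta)

Q = sum(math.exp(-beta * e) for e in E)            # partition function
P = [math.exp(-beta * e) / Q for e in E]           # P_nu = exp(-beta E_nu) / Q
E_avg = sum(p * e for p, e in zip(P, E))           # average energy

print(Q)       # ~1.503
print(P)       # ~[0.665, 0.245, 0.090]
print(E_avg)   # ~0.425
```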
Summary
In the case of a canonical ensemble, the energy of the entire ensemble is fixed. Every distribution satisfying the constraints is equally probable, and there is degeneracy: the probability is a function of how many ways there are to obtain the distribution. The distribution with the most permutations is the most probable, and the graph of <math>w(\overline{a})</math> can become very peaked. Once this is known, do a maximization of <math>w(\overline{a})</math> using Lagrange multipliers and the two constraints. The term <math>a_\nu^*</math> is the distribution that maximizes the expression; it is what is most often found. Going back to the probability yields an expression, part of which is given a name: the partition function. The term <math>\beta</math> is a measure of how many states are thermally accessible.
- Consider every possible distribution <math>\{a_i\} = \overline{a}</math> (consistent with the boundary conditions)
- For each distribution, consider every possible permutation
- The number of ways to obtain <math>\overline{a}</math> is <math>w(\overline{a}) = \frac{A!}{a_1! \, a_2! \, a_3! \cdots a_v!} = \frac{A!}{\prod_v a_v!}</math>, where <math>a_v</math> is the number of systems in state <math>v</math>.
Probability
- <math>P_v = \frac{\overline{a_v}}{A} = \frac{1}{A} \cdot \frac{\sum_{\overline{a}} w(\overline{a}) \, a_v(\overline{a})}{\sum_{\overline{a}} w(\overline{a})}</math>, averaging <math>a_v</math> over all possible distributions
- <math>w(\overline{a})</math> is very peaked around a specific distribution; increase <math>A</math> and <math>w(\overline{a})</math> becomes more peaked
- To get <math>a_v^*</math>, maximize <math>w(\overline{a})</math> subject to the constraints
- Find the partition function, <math>Q</math>