
There is real statistical mechanics today: an ensemble is defined and an average is calculated.

Microcanonical & Canonical Ensembles

It has been demonstrated that a macroscopic system has a huge number of microstates. It is possible to connect thermodynamics with this microscopic complexity. Definitions are below.

Microstate

A microstate is a particular state of a system specified at the atomic level; it can be described by the many-body wavefunction. Over time, a system fluctuates between different microstates. An example is the gas from last time, which has immense complexity. Fix the variables <math>T, V,</math> and <math>N</math>. With only these variables known, there is no way to tell which microstate the system is in at the many-body wavefunction level.

<p>
</p>

Consider a solid. It can access different configurational states, and two are shown below. Diffusion results in changes of state over time; x-ray images may be unclear because atoms diffuse and the system accesses different microstates. There may also be vibrational excitations, and electronic excitations correspond to the excitation of electrons to different levels. These excitations specify what state the solid is in: any combination of excitations specifies the microstate of the system.

<br>

<center>
(Figure: Two configurational states)
</center>

<br>

Summary

<p>
</p>

A particular state of a system specified at the atomic level (the many-body wavefunction level, <math>\Psi_{\mbox{many-body}}</math>)

  • The system over time fluctuates between different microstates
  • There is immense complexity in a gas
  • Solid
    • Configurational states
    • Vibrational excitations
    • Electronic excitations
      • There is usually a combination of these excitations, which results in immense complexity
      • Excitations specify the microstate of the system

Why Ensembles?

<p>
</p>

A goal is to find <math>P_v</math>, the probability that a system is in state <math>v</math>. Thermodynamic variables are time averages: solve the Schrödinger equation to find the energy of each state, multiply by the probability of that state, and sum over states, <math>E = \sum_v E_v P_v</math>.

<p>
</p>

To facilitate averages, ensembles are introduced. An ensemble is a collection of macroscopically identical systems, each of which is very large. Look at the whole collection to see what states a system could be in. Below is a diagram of systems and ensembles: each box represents a system, and the collection of boxes is the ensemble. (Each box could represent a class, and <math>v</math> could represent the sleep state.) Look at the properties of the ensemble. There are <math>A</math> macroscopically identical systems, and eventually <math>A</math> is taken to <math>\infty</math>. Each system evolves over time.

<center>

<br>

(Figure: Ensemble of systems)

<br>

</center>

Deriving <math>P_v</math>

The probability <math>P_v</math> is defined for different kinds of boundary conditions. When looking at the probability that students in a class are asleep, it is possible to take an instantaneous snapshot. The term <math>a_v</math> is the occupation number, defined as the number of systems in state <math>v</math> at the time of the snapshot. The fraction of systems in state <math>v</math> approximates <math>P_v</math>. This is one way to get the probability, and it could be a bad approximation.

<center>

<br>

<math>P_v \approx \frac{a_v}{A}</math>

<br>

</center>
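A minimal sketch of this snapshot estimate in Python (the two-state "classroom" system and its exact probabilities are invented for illustration): as the ensemble size <math>A</math> grows, the fraction <math>a_v/A</math> converges to <math>P_v</math>.

<pre>
import random

# Hypothetical two-state system (the classroom example): exact P_v assumed.
P_exact = {"awake": 0.9, "asleep": 0.1}

for A in (10, 1000, 100000):                    # ensemble sizes
    # Instantaneous snapshot: the state of each of the A systems.
    snapshot = random.choices(list(P_exact), weights=list(P_exact.values()), k=A)
    a_v = snapshot.count("asleep")              # occupation number of state v
    print(f"A = {A:6d}:  a_v/A = {a_v / A:.4f}   (exact P_v = 0.1)")
</pre>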

There are additional definitions of <math>P_v</math>: it is the probability of finding a system in state <math>v</math> at time <math>t</math>, or identically, the fraction of time spent by the system in state <math>v</math>. In the sleep example, it is the fraction of time that anyone is in the sleep state.

<p>
</p>

The time average corresponds to watching one class and asking for what fraction of time someone in the class is asleep. The ensemble average corresponds to looking at a set of identical classes and counting how many classes have at least one student asleep. There is a correspondence between the ensemble average and the time average. Boundary conditions are needed. Take a picture of a large number of systems, look at every one, and average.

<p></p>

Summary

<p>
</p>

Recap:
<center>

<br>

<math>E = \sum_v E_v P_v</math>

<br>

</center>

To facilitate averages, we introduce "ensembles" that we average over

  • Averaging over many bodies rather than averaging over time
  • Example: student = system, v = sleep state

<p>
</p>

Ensemble of systems:

<p>
</p>

  • 'A' (a very large number) macroscopically identical systems
  • Each system evolves over time

<p></p>

Probability:

  • Take an instantaneous snapshot
  • Define <math>a_v</math> = #of systems that are in state <math>v</math> at the time of snapshot
  • Fraction of systems in state <math>v</math> is <math>\frac{a_v}{A} \simeq P_v</math>

    • Probability to find a system in state <math>v</math> at time <math>t</math>
    • Fraction of time spent in state <math>v</math>

Microcanonical Ensemble

The boundary conditions of all systems of the microcanonical ensemble are the same. The variables <math>N, V,</math> and <math>E</math> cannot fluctuate. Each system can only fluctuate between states with fixed energy, <math>E</math>.

<center>

<br>

(Figure: Microcanonical ensemble)

<br>

</center>

It is possible to get the degeneracy from the Schrödinger equation.

<center>

<br>

<math>\hat H \Psi = E \underbrace{ (\Psi_1, \Psi_2, ..., \Psi_{\Omega}) }_{\Omega(E)}</math>

<br>

</center>

Consider the example of the hydrogen atom. Below are the energy proportionality and the degeneracy when <math>n=1</math> and <math>n=2</math>.

<center>

<br>

<math>E \propto \frac{1}{n^2}</math>

<br>

<math>\Omega (n=1) = 1</math>

<br>

<math>\Omega (n=2) = 4</math>

<br>

</center>
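The counting behind these degeneracies can be checked by enumerating quantum numbers; a minimal sketch in Python (spin is ignored, as above):

<pre>
# Hydrogen level n: states are labeled by (l, m) with l = 0..n-1, m = -l..l,
# so Omega(n) = sum over l of (2l + 1) = n**2 (spin ignored).
def omega(n):
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3):
    print(f"Omega(n={n}) = {omega(n)}")    # 1, 4, 9
</pre>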

What is <math>P_v</math> in microcanonical ensemble?

All states should be equally probable with variables <math>N, V,</math> and <math>E</math> fixed. The term <math>P_v</math> is the probability of the system being in any one state with energy <math>E</math>, and it should be equal to a constant. An expression is below. Each state can be accessed, and none is favored over another. This is the principle of equal a priori probabilities: there is no information suggesting that states should be accessed with different probabilities.

<center>

<br>

<math>P_v = \frac{1}{\Omega (E)}</math>

<br>

</center>

Example

Consider an example of a gas in a box. In the first box the atoms fill the volume; in the second box, all the atoms are in one corner. Add up the states of each kind to get the complete degeneracy. The value of <math>\Omega_1 (E)</math> is large; there is enormous degeneracy.

<center>

<br>

(Figure: Microcanonical ensemble II)

<br>

<math>\Omega_1(E) \gg \Omega_2(E)</math>

<br>

</center>

Consider a poker hand. There are many equivalent bad hands; these are dealt most of the time and correspond to <math>\Omega_1(E)</math>. The royal flush corresponds to <math>\Omega_2(E)</math>. Each particular hand is equally probable, but there are many fewer ways to get a royal flush. The boundary conditions are the same. In an isolated system, in which <math>N, V,</math> and <math>E</math> are fixed, it is equally probable to be in any of its <math>\Omega (E)</math> possible quantum states.
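The card analogy can be made quantitative; a minimal sketch in Python (standard 52-card deck assumed, every specific 5-card hand equally probable):

<pre>
from math import comb

total_hands = comb(52, 5)       # number of equally probable 5-card hands
royal_flushes = 4               # one royal flush per suit

print(total_hands)                        # 2598960
print(royal_flushes / total_hands)        # ~1.5e-06: rare purely by counting
</pre>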

<p>
</p>

Summary

<p>
</p>

The variables <math>N, V,</math> and <math>E</math> are fixed

  • Each system can only fluctuate between states with fixed energy E (as obtained from the Schrödinger equation)
  • <math>\hat H \Psi = E \Psi \rightarrow E: [\Psi_1, \Psi_2, ..., \Psi_{\Omega}]</math>
  • <math>\Omega(E)</math> is the degeneracy
  • All states are equally probable, and are given equal weight.

<p>
</p>

Hydrogen atom

  • <math>E \propto \frac{1}{n^2}</math>
    • <math>\Omega (n=1) =1</math>
    • <math>\Omega (n=2) =4</math>

    <p>
    </p>

    Probability of being in any E state for a system

    • employs the principle of equal a priori probabilities
    • <math>P_v = \mbox{constant}</math>
    • <math>P_v = \frac{1}{\Omega(E)}</math>

<p>
</p>

Systems

  • All states are equally probable
  • Some configurations are accessed more often because of a large degeneracy number
    • There are just many more ways to get the configuration on the left (like a crap hand) than the one on the right (like a royal flush)
    • <math>\Omega_1 (E) \gg \Omega_2 (E)</math>

<p>
</p>

An isolated system (<math>N, V, E</math> = fixed) is equally probable to be in any of its <math>\Omega (E)</math> possible quantum states.

Canonical Ensemble

There is a different set of boundary conditions in the canonical ensemble: the walls of each system conduct heat. Each of the <math>A</math> members of the ensemble finds itself in the heat bath formed by the other <math>A-1</math> members. Each system can fluctuate between different microstates, but an energy far from the average is unlikely. In the picture below, the total energy of the ensemble is fixed, while the energy of a particular system is not fixed and can fluctuate.

<center>

<br>

(Figure: Canonical ensemble)

<br>

</center>

Take another snapshot. There is interest in the distribution. The term <math>\overline{a}</math> is the distribution of occupation numbers, where <math>a_v</math> is equal to the number of systems in state <math>v</math>.

<center>

<br>

<math>\{ a_v \} = \overline{a}</math>

<br>

</center>

Below are a table of microstates, energies, and occurrences, and a graph. In the graph, equilibrium has occurred, but all states can be accessed; it is possible to access states some distance from the average energy. The total energy, <math>\epsilon</math>, is fixed and is equal to the integral of the curve. As the number of systems increases, the curve becomes sharper.

<center>

<br>

<table padding=5>

<tr>
<td><math>\mbox{microstate}</math></td>
<td><math>1</math></td>
<td><math>2</math></td>
<td><math>3</math></td>
<td><math>\cdots</math></td>
<td><math>\nu</math></td>
</tr>

<tr>
<td><math>\mbox{energy}</math></td>
<td><math>E_1</math></td>
<td><math>E_2</math></td>
<td><math>E_3</math></td>
<td><math>\cdots</math></td>
<td><math>E_{\nu}</math></td>
</tr>

<tr>
<td><math>\mbox{occurrence}</math></td>
<td><math>a_1</math></td>
<td><math>a_2</math></td>
<td><math>a_3</math></td>
<td><math>\cdots</math></td>
<td><math>a_{\nu}</math></td>
</tr>

</table>

<br>

(Figure: Occurrence versus energy)

<br>

</center>

Constraints

Below are the constraints. The first is the sum of the occupation numbers. The second constraint holds because the whole ensemble is isolated.

<center>

<br>

<math>\sum_v a_v = A</math>

<br>

<math>\sum_v a_v E_v = \epsilon</math>

<br>

</center>

The term <math>P_v</math> is the probability of finding the system in state <math>v</math>. It is possible to use the snapshot probability, but there are many distributions that satisfy the boundary conditions. There is a better way to find <math>P_v</math>; a relation is below. It corresponds to the average distribution. This is associated with a crucial insight.

<center>

<br>

<math>P_v = \frac{\overline{a_v}}{A}</math>

<br>

</center>

Crucial Insight

An assumption is that the entire canonical ensemble is isolated: no energy can escape, and the total energy <math>\epsilon</math> is constant. It is possible to write a many-body wavefunction for the whole ensemble because its energy is fixed. Applying the principle of equal a priori probabilities, every distribution of <math>\overline{a}</math> that satisfies the boundary conditions is equally probable: each distribution of occupation numbers must be given equal weight.

<p>
</p>

Summary

<p>
</p>

The variables <math>N, V, T</math> are fixed

  • There are heat-conducting boundaries on each system
  • Each of the <math>A</math> (= large number) members finds itself in a heat bath, formed by the <math>(A - 1)</math> other members
  • Take a snapshot; get a distribution <math>\{ a_v \} = \overline{a}</math> (<math>a_v</math> = # of systems in state <math>v</math>)

<p>
</p>

Constraints

  • The total energy, <math>\epsilon</math>, is fixed
  • <math>\sum_v a_v = A</math>
  • <math>\sum_v a_v E_v = \epsilon</math> (isolated!)

Probability

  • <math>P_v \simeq \frac{a_v}{A}</math> is an approximation.
    • Better to use <math>P_v = \frac{\overline{a_v}}{A}</math>, the averaged distribution
  • There is an assumption that the whole canonical ensemble is isolated and that the energy <math>\epsilon</math> is constant. Every distribution of <math>\overline{a}</math> that satisfies the boundary conditions is equally probable.
    • We are applying the principle of equal a priori probabilities, and each distribution of occupation numbers must be given equal weights.

Some Math

Consider every possible distribution consistent with the boundary conditions, and for each distribution consider every possible permutation. The term <math>w (\overline{a})</math> is equal to the number of ways to obtain a distribution <math>\overline{a}</math>, where <math>a_v</math> is the number of systems in state <math>v</math>. A bad hand in poker corresponds to a distribution with a large number of permutations. Use the multinomial coefficient.

<center>

<br>

<math>\{ a_i \} = \overline{a}</math>

<br>

<math>w (\overline{a}) = \frac{ A! }{ a_1! a_2! a_3! ..... a_v! }</math>

<br>

<math>w (\overline{a}) = \frac{ A! }{ \Pi_v a_v! }</math>

<table padding=5>

<tr>
<td><math>a_1</math></td>
<td><math>\mbox{Number of systems in state 1}</math></td>
</tr>

<tr>
<td><math>a_{\nu}</math></td>
<td><math>\mbox{Number of systems in state } \nu</math></td>
</tr>

</table>

</center>

<br>
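A minimal sketch of this multinomial count in Python (the example distributions are hypothetical):

<pre>
from math import factorial

def w(a):
    """w(a) = A! / (a_1! a_2! ... a_v!), with A = sum of the occupation numbers."""
    denom = 1
    for a_v in a:
        denom *= factorial(a_v)
    return factorial(sum(a)) // denom

print(w((4, 0)))   # 1: only one way to put all four systems in state 1
print(w((2, 2)))   # 6: many more permutations for the even distribution
</pre>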

Below are expressions of the probability to be in a certain state. The term <math>a_v</math> is averaged over all possible distributions. Every distribution is given equal weight, and the one with the most permutations is the most favored.

<center>

<br>

<math>P_v = \frac{\overline{a_v}}{A}</math>

<br>

<math>P_v = \frac{1}{A} \frac{ \sum_{\overline a} w (\overline{a}) a_v (\overline{a}) }{ \sum_{\overline a} w (\overline{a}) }</math>

</center>

Example

Below is an example of four systems in an ensemble. The term <math>P_1</math> is the probability of any system to be in state <math>1</math>.

<center>

<br>

<table padding=5>

<tr>
<td><math>\mbox{state}</math></td>
<td><math>1</math></td>
<td><math>2</math></td>
</tr>

<tr>
<td><math>\mbox{energy}</math></td>
<td><math>E_1</math></td>
<td><math>E_2</math></td>
</tr>

<tr>
<td><math>\mbox{occupation}</math></td>
<td><math>a_1</math></td>
<td><math>a_2</math></td>
</tr>

</table>

<br>

<table padding=5>

<tr>
<td><math>\mbox{distribution}</math></td>
<td><math>a_1</math></td>
<td><math>a_2</math></td>
</tr>

<tr>
<td><math>A</math></td>
<td><math>0</math></td>
<td><math>4</math></td>
</tr>

<tr>
<td><math>B</math></td>
<td><math>1</math></td>
<td><math>3</math></td>
</tr>

<tr>
<td><math>C</math></td>
<td><math>2</math></td>
<td><math>2</math></td>
</tr>

<tr>
<td><math>D</math></td>
<td><math>3</math></td>
<td><math>1</math></td>
</tr>

<tr>
<td><math>E</math></td>
<td><math>4</math></td>
<td><math>0</math></td>
</tr>

</table>

<br>

<math>\mbox{Distribution A}</math>

<br>

<math>w(A) = \frac{4!}{0!4!}</math>

<br>

<math>w(A) = 1</math>

<br>

<math>\mbox{Distribution B}</math>

<br>

<math>w(B) = \frac{4!}{1!3!}</math>

<br>

<math>w(B) = 4</math>

<br>

<math>P_1 = \frac{1}{4} \left ( \frac{1 \cdot 0 + 4 \cdot 1 + 6 \cdot 2 + 4 \cdot 3 + 1 \cdot 4}{1 + 4 + 6 + 4 + 1} \right )</math>

<br>

<math>P_1 = \frac{1}{2}</math>

</center>
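The example can be verified by direct enumeration; a minimal sketch in Python:

<pre>
from math import factorial

A = 4
# The five distributions (a_1, a_2) with a_1 + a_2 = A.
distributions = [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]

def w(a):                                    # permutations of a distribution
    return factorial(A) // (factorial(a[0]) * factorial(a[1]))

weights = [w(a) for a in distributions]      # [1, 4, 6, 4, 1]
a1_bar = sum(wt * a[0] for wt, a in zip(weights, distributions)) / sum(weights)
print(a1_bar / A)                            # P_1 = 0.5
</pre>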

Distribution of w(a)

<p>
</p>

The term <math>w(a)</math> is the number of permutations for a particular distribution. As the number of systems increases, or as <math>A</math> increases, the distribution becomes more peaked.

<center>

<br>

(Figures: w(a) versus a, for small and for large A)

<br>

</center>

Consider the probability.

<center>

<br>

<math>P_{\nu} = \frac{1}{A} \frac{ \sum_{\overline a} w (\overline{a}) a_{\nu} (\overline{a}) }{ \sum_{\overline a} w (\overline{a}) }</math>

<br>

<math>P_{\nu} \approx \frac{ \frac{1}{A} w ( \overline{a}^*) a_{\nu}^* }{ w ( \overline{a}^*) }</math>

<br>

<math>P_{\nu} \approx \frac{a_{\nu}^*}{A}</math> (Equation 1)

<br>

</center>

Look at the distribution that maximizes <math>w ( \overline{a} )</math>, the permutation number. To get <math>a_{\nu}^*</math>, maximize <math>w ( \overline{a} )</math> subject to the constraints below.

<center>

<br>

<math>\sum_{\nu} a_{\nu} - A = 0</math>

<br>

<math>\sum_{\nu} a_{\nu} E_{\nu} - \epsilon = 0</math>

<br>

</center>

Use Lagrange multipliers, and maximize <math>\ln w ( \overline{a} )</math> in order to be able to use Stirling's approximation.

<center>

<br>

<math>\frac{\partial}{\partial a_{\nu}} \left ( \ln w ( \overline{a} ) - \alpha \sum_k a_k - \beta \sum_k a_k E_k \right ) = 0</math>

<br>

<math>w ( \overline{a} ) = \frac{A!}{\Pi_k a_k!}</math>

<br>

<math>\ln w ( \overline{a} ) = \ln A! - \ln \Pi_k a_k! = \ln A! - \sum_k \ln a_k!</math>

<br>

</center>

Use Stirling's approximation as A and the occupation number, <math>a_k</math>, go to infinity.

<center>

<br>

<math>\sum_k \ln a_k! \approx \sum_k \left( a_k \ln a_k - a_k \right ) = \sum_k a_k \ln a_k - \sum_k a_k = \sum_k a_k \ln a_k - A </math>

<br>

<math>\frac{\partial}{\partial a_{\nu}} \left ( \ln A! - \sum_k a_k \ln a_k + A - \alpha \sum_k a_k - \beta \sum_k a_k E_k \right ) = 0</math>

<br>

<math>\left ( \frac{\partial}{\partial a_{\nu}} \sum_k a_k \ln a_k = \ln a_{\nu} + 1 \mbox{ , } \frac{\partial}{\partial a_{\nu}} \sum_k a_k = 1 \mbox{ , } \frac{\partial}{\partial a_{\nu}} \sum_k a_k E_k = E_{\nu} \right )</math>

<br>

<math>- \ln a_{\nu} - 1 - \alpha - \beta E_{\nu} = 0</math>

<br>

</center>
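A quick numerical check of Stirling's approximation, <math>\ln a! \approx a \ln a - a</math>; a minimal sketch in Python:

<pre>
from math import lgamma, log

# Stirling: ln(a!) ~ a ln a - a, increasingly accurate as a grows.
for a in (10, 100, 10000):
    exact = lgamma(a + 1)                 # lgamma(a + 1) = ln(a!)
    approx = a * log(a) - a
    print(f"a = {a:6d}:  relative error = {(exact - approx) / exact:.1e}")
</pre>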

The term <math>a_{\nu}^*</math> is the occupation number that maximizes <math>w(\overline{a})</math>: <math>a_{\nu}^* = e^{-\alpha '} \cdot e^{-\beta E_{\nu}}</math>, where <math>\alpha ' = \alpha + 1</math>. Use the constraints to determine the Lagrange multipliers and then the probability.

<center>

<br>

<math>a_{\nu}^* = e^{-\alpha '} \cdot e^{-\beta E_{\nu}}</math>

<br>

<math>\sum_{\nu} a_{\nu}^* = A</math>

<br>

<math>\sum_{\nu} e^{-\alpha '} \cdot e^{-\beta E_{\nu}} = A</math>

<br>

<math>e^{\alpha '} = \frac{1}{A} \sum_{\nu} e^{-\beta E_{\nu}}</math>

<br>

</center>

The probability of being in a certain state <math>\nu</math> can be calculated, still in terms of the second Lagrange multiplier. Plugging back into Equation 1:

<center>

<br>

<math>P_{\nu} = \frac{a_{\nu}^*}{A} = \frac{A}{\sum_{\nu} e^{-\beta E_{\nu}}} \cdot \frac{e^{-\beta E_{\nu}}}{A} = \frac{e^{-\beta E_{\nu}}}{\sum_{\nu} e^{-\beta E_{\nu}}}</math>

<br>

</center>
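A minimal numerical sketch of this result in Python (the three level energies and the value of <math>\beta</math> are invented for illustration):

<pre>
from math import exp

beta = 1.0                      # second Lagrange multiplier
E = [0.0, 1.0, 2.0]             # hypothetical state energies E_nu

norm = sum(exp(-beta * E_nu) for E_nu in E)        # the denominator
P = [exp(-beta * E_nu) / norm for E_nu in E]

print(P)           # probabilities decrease exponentially with energy
print(sum(P))      # normalized: 1.0
</pre>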

Partition Function

The denominator is the partition function, <math>Q = \sum_{\nu} e^{-\beta E_{\nu}}</math>. It tells us how many states are accessible to the system. The term <math>\beta</math>, a measure of the number of thermally accessible states, must still be determined. Look at how the partition function connects to macroscopic thermodynamic variables: find <math>\beta</math> and find <math>\overline{E}</math>.

<center>

<br>

<math>\overline{E} = \sum_{\nu} P_{\nu} E_{\nu}</math>

<br>

<math>\overline{E} = \frac{\sum_{\nu} E_{\nu} e^{-\beta E_{\nu}}}{Q}</math>

<br>

</center>
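Continuing the numerical sketch from above (same invented levels and <math>\beta</math>), the average energy is the Boltzmann-weighted sum:

<pre>
from math import exp

beta, E = 1.0, [0.0, 1.0, 2.0]         # same hypothetical levels as before

Q = sum(exp(-beta * E_nu) for E_nu in E)                   # partition function
E_bar = sum(E_nu * exp(-beta * E_nu) for E_nu in E) / Q    # average energy
print(E_bar)                           # ~0.425 for these levels at beta = 1
</pre>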

Consider the average pressure. The pressure of one microstate is <math>p_{\nu}</math>.

<center>

<br>

<math>p_{\nu} = -\frac{\partial E_{\nu}}{\partial V}</math>

<br>

<math>\overline{p} = \sum_{\nu} P_{\nu} p_{\nu}</math>

<br>

<math>\overline{p} = \frac{ -\sum_{\nu} \left ( \frac{\partial E_{\nu}}{\partial V} \right ) e^{-\beta E_{\nu}} }{Q}</math>

<br>

</center>
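A minimal sketch of the pressure average in Python for an assumed set of levels: for a particle in a cubic box of volume <math>V</math>, <math>E_{\nu} \propto V^{-2/3}</math>, so <math>p_{\nu} = -\partial E_{\nu} / \partial V = \frac{2}{3} E_{\nu} / V</math> (the constant <math>c</math>, <math>V</math>, and <math>\beta</math> are invented for illustration):

<pre>
from math import exp

c, V, beta = 1.0, 1.0, 1.0
levels = range(1, 50)             # hypothetical quantum numbers; high levels
                                  # contribute negligibly at this beta

E = {n: c * n**2 * V**(-2.0 / 3.0) for n in levels}   # levels of a cubic box
Q = sum(exp(-beta * E[n]) for n in levels)
# p_nu = -dE_nu/dV = (2/3) E_nu / V for E_nu ~ V^(-2/3)
p_bar = sum((2.0 / 3.0) * E[n] / V * exp(-beta * E[n]) for n in levels) / Q
print(p_bar)                      # the Boltzmann-weighted average pressure
</pre>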

Summary

In the case of a canonical ensemble, the energy of the entire ensemble is fixed. Each state of the whole ensemble is equally probable, and there is degeneracy: the probability of a distribution is a function of how many ways there are to obtain it. The distribution with the most permutations is the most probable, and the graph of <math>w(\overline{a})</math> can become very peaked. Once this is known, maximize <math>w(\overline{a})</math> using Lagrange multipliers and the two constraints. The term <math>a_{\nu}^*</math> is the distribution that maximizes that expression; it is what is most often found. Going back to the probability gives an expression, part of which is given a name: the partition function. The term <math>\beta</math> is a measure of how many states are thermally accessible.

<br>

<math>\overline{a}</math>

  • Consider every possible distribution <math>\{ a_i \} = \overline{a}</math> (consistent with boundary conditions)
    • For each distribution, consider every possible permutation
    • The number of ways to obtain <math>\overline{a}</math> is <math>w (\overline{a}) = \frac{ A! }{ a_1! a_2! a_3! ..... a_v! } = \frac{ A! }{ \Pi_v a_v! }</math>, where <math>a_v</math> is the number of systems in state <math>v</math>.

<br>

Probability

  • <math>P_v = \frac{\overline{a_v}}{A} = \frac{1}{A} \frac{ \sum_{\overline a} w (\overline{a}) a_v (\overline{a}) }{ \sum_{\overline a} w (\overline{a}) }</math>
  • Averaging <math>a_v</math> over all possible distributions.

<br>

<math>w(\overline{a})</math>

  • <math>w(\overline{a})</math> is very peaked around a specific distribution
  • Increase <math>A</math> and <math>w(\overline{a})</math> becomes more peaked

<br>

<math>a_v^*</math>

  • To get <math>a_v^*</math>, maximize <math>w (\overline{a})</math> subject to constraints.
  • Find the partition function, <math>Q</math>