  The upshot is that the coefficients $\psi(x)$ get transformed to $\psi(-x)$ by the parity operator.  As you may have guessed by now, we will sometimes abuse notation and write
  \[\hat{\mathcal{P}} \psi(x) = \psi(-x).\]
  * The //**Laplacian operator**// $\hat{\nabla}^{2}$:  Although we have not really discussed three-dimensional systems yet, the one-dimensional position basis $\ket{x}$ is generalized to a three-dimensional position basis $\ket{\vec{r}}$, where the basis vectors are now labelled by three-dimensional position vectors $\vec{r}$.  In this basis, a general vector can be decomposed as
  \[\ket{\psi} = \int \psi(\vec{r}) \ket{\vec{r}}\,\D V,\]
  where the components $\psi(\vec{r})$ are now a scalar function of the position vector and $\D V$ is the three-dimensional volume element $\D V = \D x\D y\D z$.  The Laplacian operator then acts as
& = \sum_k \braket{f_j}{e_k}\braket{e_k}{\psi}.
\end{align*}
Again, this is a formula we have seen before, now derived as an example of the Dirac notaty.

We can also perform the Dirac notaty for a continuous basis.  For example, in the position basis the identity can be decomposed as
\[\hat{I} = \int_{-\infty}^{+\infty} \D x \, \proj{x},\]
and in the momentum basis as
\[\hat{I} = \int_{-\infty}^{+\infty} \D p \, \proj{p}.\]

The Dirac notaty is useful because the identity operator does nothing, but decomposing it in a basis does //everything//.  If you are stuck on one of the homework problems, it is probably because you need to insert one or more identity operators, decompose them in some basis, move the inner product terms around, and then recognize the identity decomposed in a basis and remove it.
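
As a quick illustration (a worked sketch added here, not taken verbatim from the notes above), inserting the momentum-basis decomposition of the identity into the inner product $\braket{x}{\psi}$ gives
\begin{align*}
\psi(x) = \braket{x}{\psi} & = \sand{x}{\hat{I}}{\psi} \\
& = \bra{x} \left ( \int_{-\infty}^{+\infty} \D p \, \proj{p} \right ) \ket{\psi} \\
& = \int_{-\infty}^{+\infty} \D p \, \braket{x}{p} \braket{p}{\psi},
\end{align*}
which expresses the position wavefunction in terms of the momentum-basis components $\braket{p}{\psi}$.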
  
Note that when the vector is obvious from context, e.g. if we are working on a problem that uses the same vector $\ket{\psi}$ throughout, then the expectation value is often denoted $\Expect{\hat{A}}$.

To see why this is called the expectation value, let's compute it for the position operator $\hat{x}$.
\begin{align*}
\sand{\psi}{\hat{x}}{\psi} & = \int_{-\infty}^{+\infty} \D x' \, \psi^*(x') \bra{x'} \int_{-\infty}^{+\infty} \D x \, x \psi(x) \ket{x} \\
& = \int_{-\infty}^{+\infty} \D x' \, \int_{-\infty}^{+\infty} \D x \, x \psi^*(x')\psi(x) \braket{x'}{x} \\
& = \int_{-\infty}^{+\infty} \D x' \, \int_{-\infty}^{+\infty} \D x \, x \psi^*(x')\psi(x) \delta(x-x') \\
& = \int_{-\infty}^{+\infty} \D x \, x\psi^*(x)\psi(x) \\
& = \int_{-\infty}^{+\infty} x\Abs{\psi(x)}^2 \, \D x.
\end{align*}
Since $p(x) = \Abs{\psi(x)}^2$ is the probability density for position, this is the expectation value of $x$ in the sense of probability theory.

We will see later in the course that other physical quantities are also represented by linear operators.  If $\hat{A}$ represents such a physical quantity, then $\sand{\psi}{\hat{A}}{\psi}$ is its expectation value in the sense of probability theory.
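
The integral $\int_{-\infty}^{+\infty} x\Abs{\psi(x)}^2 \, \D x$ can be checked numerically.  The following is a minimal sketch (not part of the course materials); the wavefunction ''psi'' and the parameters ''x0'' and ''sigma'' are illustrative choices, namely a normalized Gaussian wave packet whose expectation value should come out to $x_0$.

```python
import math

# Illustrative parameters (not from the notes): centre and width of the packet.
x0, sigma = 1.5, 0.7

def psi(x):
    # Real Gaussian wavefunction, normalized so that ∫ |psi(x)|^2 dx = 1.
    return (1.0 / (2 * math.pi * sigma**2))**0.25 * math.exp(-(x - x0)**2 / (4 * sigma**2))

def expectation_x(xmin=-10.0, xmax=10.0, n=20000):
    # Midpoint-rule approximation of <x> = ∫ x |psi(x)|^2 dx.
    dx = (xmax - xmin) / n
    total = 0.0
    for k in range(n):
        x = xmin + (k + 0.5) * dx
        total += x * abs(psi(x))**2 * dx
    return total

print(expectation_x())  # close to x0 = 1.5
```

The result agrees with the probabilistic reading: for a Gaussian probability density $\Abs{\psi(x)}^2$ centred at $x_0$, the mean is $x_0$.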

{{:question-mark.png?direct&50|}}
====== In Class Activities ======

  - Let $\ket{e_1}, \ket{e_2}, \cdots$ be an orthonormal basis.  Prove that
  \[\sum_j \proj{e_j}\]
  is the identity operator.\\ \\ 
  HINT: You have to prove that $\left ( \sum_j \proj{e_j} \right ) \ket{\psi} = \ket{\psi}$ for all vectors $\ket{\psi}$.  Since $\ket{e_1}, \ket{e_2}, \cdots$ is an orthonormal basis, every vector can be decomposed as $\ket{\psi} = \sum_j b_j \ket{e_j}$ for some coefficients $b_j$.
  - Show that $[\hat{x},\hat{p}] = i\hbar \hat{I}$, where $\hat{p} = -i\hbar \hat{\frac{\D}{\D x}}$.\\ \\ 
  HINT: You need to show that $[\hat{x},\hat{p}]\ket{\psi} = i\hbar \ket{\psi}$ for all vectors $\ket{\psi}$.  You may use sloppy notation, i.e.
  \[\hat{x} \psi (x) = x\psi(x), \qquad \hat{p} \psi(x) = -i\hbar \frac{\D \psi}{\D x}.\]
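
If you want to sanity-check the commutation relation numerically before proving it, the sketch below (not part of the activity, and not a substitute for the proof) represents $\hat{p}$ by a central finite difference and tests $[\hat{x},\hat{p}]\psi = i\hbar\psi$ at a few sample points.  The choices $\hbar = 1$ and the Gaussian ''psi'' are illustrative.

```python
import math

hbar = 1.0   # illustrative units
h = 1e-5     # finite-difference step

def psi(x):
    # Sample (unnormalized) real wavefunction.
    return math.exp(-x**2)

def p_op(f):
    # Momentum operator p = -i*hbar d/dx, approximated by a central difference.
    return lambda x: -1j * hbar * (f(x + h) - f(x - h)) / (2 * h)

def commutator_on_psi(x):
    # ([x, p] psi)(x) = x * (p psi)(x) - (p (x psi))(x), using sloppy notation.
    return x * p_op(psi)(x) - p_op(lambda t: t * psi(t))(x)

for x0 in (-1.0, 0.3, 2.0):
    # Should agree with i*hbar*psi(x0) up to finite-difference error.
    print(x0, commutator_on_psi(x0), 1j * hbar * psi(x0))
```

Seeing the two columns agree is evidence, not proof; the activity still asks you to derive the identity for arbitrary $\ket{\psi}$.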