Just Browsing Lectures
CONTENTS
"UBC Physics 108 [Spring 2004]"
Course
"DC Circuits"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"Current & Resistance"
by Jess H. Brewer
on 20030209:
Drude Theory
Ski Slope Analogy:
Idealized skis: frictionless. Idealized skiers: indestructible morons.
Idealized collisions: instantaneous, perfectly inelastic.
Drift velocity v_{d} = acceleration (eE/m) times the
mean time τ between scattering collisions:
v_{d} = (eE/m) τ.
Problem: how can τ be
independent of the "steepness of the slope"?
Short excursion into Quantum Mechanics and Fermi-Dirac
statistics: electrons are fermions (half-integer spin)
so no two electrons can be in the same state. Thus the lowest
energy states are all full and the last electrons to go into a metal
occupy states with huge kinetic energies (for an electron) comparable
to 1 eV or 10,000 K. Only the electrons at this "Fermi level" can
change their states, so only they count in conduction. So our ideal
skiers actually have rocket-propelled skis and are randomly slamming
into trees (and each other) at orbital velocities (we will neglect the
problems of air friction); the tiny accumulated drift downhill is
imperceptible but it accounts for all conduction.
J = "flux" of charge = current per unit perpendicular area
(show a "slab" of drifting charge) so
J = n e v_{d}, where n is the number of
charge carriers per unit volume. (For Cu, n is about
10^{29} m^{-3}.)
Ohm's Law
J = σ E,
defining the conductivity
σ = n e^{2} τ/m, measured in
Siemens per metre (S/m) if you like SI units (1 S = 1 A/V). I don't.
For Cu, σ is around
10^{8} S/m. Putting this together with n, e and
m_{e} = 9 x 10^{-31} kg, we get
τ ~ 10^{-13} s.
At v_{F} ~ 10^{6} m/s this implies a
mean free path
λ ~ 10^{-7} m. Compare lattice spacing ~ 10^{-10} m.
The drift velocity v_{d} is only ~ 10^{-3} m/s.
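These order-of-magnitude numbers are easy to check. A minimal sketch in Python (the rounded values for e, m_{e}, n and σ are the ones quoted above; treat the outputs as orders of magnitude only):

```python
# Order-of-magnitude check of the Drude numbers quoted above.
# Values are the rough lecture values, not precise constants.
e = 1.6e-19      # electron charge (C)
m_e = 9e-31      # electron mass (kg)
n = 1e29         # carrier density for Cu (m^-3)
sigma = 1e8      # conductivity of Cu (S/m), order of magnitude

# tau = sigma m / (n e^2), from sigma = n e^2 tau / m
tau = sigma * m_e / (n * e**2)
v_F = 1e6                 # Fermi velocity (m/s)
mfp = v_F * tau           # mean free path

print(f"tau ~ {tau:.1e} s")              # ~ 10^-13 to 10^-14 s
print(f"mean free path ~ {mfp:.1e} m")   # ~ 10^-7 to 10^-8 m
```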
In semiconductors, n is a factor of 10^{7}
smaller and v_{d} is a factor of 10^{7} larger,
almost as big as v_{F}! So in some very pure
semiconductors transport is almost "ballistic", especially when the
size of the device is less than the mean free path λ.
Briefly discuss superconductors.
The inverse of the conductivity is the resistivity
ρ = 1/σ, measured in Ohm-metres
(1 Ohm = 1 V/A).
Use a cartoon of a cylindrical resistor of length L and
cross-sectional area A to explain how this works, giving
R = ρ L/A
and the familiar V = I R.
"RC Circuits"
by Jess H. Brewer
on 20030211:
RC is a Time Constant
Consider that R is measured in Ohms = Volts/Amp = Volt-seconds/Coulomb whereas C is measured in Coulombs/Volt; thus RC is measured in seconds. This simple dimensional analysis should make you suspect that there is a time constant that grows with both C (charge capacity) and R (resistance to the flow of charge).
Discharging a Capacitor Through a Resistor
Start with an open circuit with a charged capacitor connected to a resistor; now close the switch. What happens? Well, basically, the charge is going to bleed off the capacitor through the resistor. The bigger the capacitor, the more charge there is to bleed off (for a given initial voltage drop across the capacitor) and the bigger the resistor, the slower it bleeds. The voltage on the capacitor is proportional to its remaining charge, and the voltage drop across the resistance (proportional to the current, which is the rate of change of the charge on the capacitor) is exactly balanced by the voltage on the capacitor (because potential is single-valued, see Kirchhoff's Rule #2). So you pretty much know just what to expect; it is just like a projectile slowing down under viscous drag: exponential decay with a mean lifetime τ = RC. Now do it mathematically: solve the differential equation for Q(t).
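The differential equation dQ/dt = -Q/RC is easy to check numerically. A quick sketch in Python (the component values are illustrative, not from the lecture), comparing a crude Euler integration against the exponential solution:

```python
import math

# Integrate dQ/dt = -Q/(RC) by Euler steps and compare with Q0 e^{-t/RC}.
R, C, Q0 = 1000.0, 1e-6, 1e-3    # illustrative: 1 kOhm, 1 uF, 1 mC
tau = R * C                       # the time constant, in seconds
dt = tau / 10000                  # small step for decent accuracy
Q, t = Q0, 0.0
while t < 3 * tau:                # run for three time constants
    Q += -Q / tau * dt
    t += dt
exact = Q0 * math.exp(-t / tau)
print(Q, exact)                   # both ~ Q0 e^-3; they agree closely
```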
Charging Up a Capacitor Through a Resistor with a Battery
Start with the capacitor uncharged and then apply a constant voltage through a resistor. What happens?
"RC with AC"
by Jess H. Brewer
on 20030214:
. . . or we might have a Preview of AC circuits:
Driving an RC Circuit with an AC Voltage
Imaginary Exponents
"Diffraction"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Waves Bend Around Corners!"
by Jess H. Brewer
on 20030325:
Huygens' Principle
"All points on a wavefront can be considered as
point sources for the production of
spherical secondary wavelets.
At a later time, the new position of the wavefront will be
the surface of tangency to these secondary wavelets."
In other words, waves bend around corners!
Less obvious is the fact that a wave also
interferes with itself
even if there is a continuous
distribution of sources.
Diffraction Pattern from a Slit of Finite Width
To see this in the simplest case,
take waves coming through a single slit of width a
and divide up a into N equal "pseudo-slits"
a distance d = a/N apart.
We can then use the formula above and let N → ∞
to get (after several tricky steps) the result
I = I_{0} sin^{2}α / α^{2}
where
α = (π a sin θ / λ).
Some features of this result:
- At the central maximum
(θ = α = 0)
one sees the full I_{0}.
This can be seen from l'Hospital's rule
on (sin x)/x as x goes to zero.
- The intensity goes to zero at any nonzero α
for which sin α = 0.
This occurs when α is any integer multiple of π.
The first minimum of the diffraction pattern occurs when
α = π, which in turn implies
a sin θ_{1} = λ.
- The secondary maxima of the diffraction pattern
can be found by setting the derivative of I with respect to α
equal to zero (condition
for an extremum). The resultant formula contains both a term
that goes to zero at the minima (zeroes) and another term
that reduces to
α = tan α.
This transcendental equation can easily be solved by
plotting both x and tan x on the same graph
and looking for intersections. Don't go looking for an analytical
solution.
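For the curious, the transcendental equation α = tan α is also easy to solve numerically. A sketch in Python, bisecting on the branch just below 3π/2 where the first secondary maximum lives:

```python
import math

# Solve alpha = tan(alpha) for the first secondary maximum by bisection.
# Bracket the root between pi and just under 3*pi/2, where tan(alpha)
# climbs from 0 toward +infinity and must cross the line y = alpha once.
f = lambda a: math.tan(a) - a
lo, hi = math.pi + 0.1, 1.5 * math.pi - 0.001
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:   # sign change in [lo, mid]: root is there
        hi = mid
    else:                     # otherwise the root is in [mid, hi]
        lo = mid
print(round(lo, 4))           # ~ 4.4934, i.e. about 1.43*pi
```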
That's enough for one day.
"Waves Bend Around Corners!"
by Jess H. Brewer
on 20030402:
Huygens' Principle
"All points on a wavefront can be considered as
point sources for the production of
spherical secondary wavelets.
At a later time, the new position of the wavefront will be
the surface of tangency to these secondary wavelets."
In other words, waves bend around corners!
Less obvious is the fact that a wave also interferes with itself
even if there is a continuous distribution of sources.
Diffraction Pattern from a Slit of Finite Width
To see this in the simplest case, take waves coming through a single
slit of width a and divide up a into N
equal "pseudo-slits" a distance d = a/N apart.
We can then use the formula above and let N → ∞
to get (after several tricky steps) the result
I = I_{0} sin^{2}α / α^{2}
where
α = (π a sin θ / λ).
Some features of this result:
- At the central maximum
(θ = α = 0)
one sees the full I_{0}. This can be seen from
l'Hospital's rule on (sin x)/x as x goes to zero.
- The intensity goes to zero at any nonzero α
for which sin α = 0. This occurs when α is any
integer multiple of π.
The first minimum of the diffraction pattern occurs when
α = π, which in turn implies
a sin θ_{1} = λ.
- The secondary maxima of the diffraction pattern
can be found by setting the derivative of I with respect to α
equal to zero (condition
for an extremum). The resultant formula contains both a term
that goes to zero at the minima (zeroes) and another term
that reduces to
α = tan α.
This transcendental equation can easily be solved by
plotting both x and tan x on the same graph
and looking for intersections. Don't go looking for an analytical
solution.
That's enough for one day.
"Resolution"
by Jess H. Brewer
on 20030404:
Circular Aperture
Think of it as a "square slit with the corners lopped off".
This makes it "effectively narrower" which makes the diffraction
pattern wider. The numerical result (to be memorized, sorry!)
is
a sin θ_{1} = 1.22 λ.
Babinet's Principle
An obstacle is as good as a hole.
Picture light shining through a large aperture and consider the
region "straight ahead" of the aperture (i.e. neglect the
fuzzy areas around the edges of the region in shadow).
If you place a small obstacle in the middle of the aperture,
you are subtracting the amplitude contributions of the rays that
would have arrived at the final screen from where the obstacle is now.
Now take away the obstacle and instead block off the whole aperture
except for a hole of the same shape as the former
obstacle (and in the same place). Now only the rays that
were formerly being blocked are allowed through.
Since amplitudes are squared to get the intensity, a "negative"
amplitude is just as good as a "positive" one. Thus the two
situations give the same diffraction pattern on the final
screen. There will be a bright spot directly behind the
obstacle, just as you would expect for the hole.
Detector Arrays
Interference is reversible. Just run the rays backward.
Thus a telescope (for instance) "sees" diffractive rings around a
distant star; the width ("fuzziness") of the star
(the size of the dot it makes on the final optical detector
of the telescope) is larger for a smaller telescope diameter.
A telescope whose resolution is limited by this effect is called
"diffraction limited" and is considered a pretty good telescope
if it is a big one.
The width of a diffraction pattern is defined as
the angular distance from the central maximum to the first minimum.
Rayleigh's Criterion
Two objects can be resolved if
the central maximum of one falls on the first minimum of the other.
"Dispersion"
by Jess H. Brewer
on 20030407:
Dispersion
The wavelength (colour) dependence of the interference pattern from a
grating determines how useful it will be for resolving
sharp "lines" (light of specific wavelengths) in a mixed spectrum.
The rate of change of the angle of the m^{th}
principal maximum with respect to the wavelength is called the
dispersion D_{m} of the grating.
This is easily shown to have the value
D_{m} = dθ_{m}/dλ = m/(d cos θ_{m}).
Note that there is a different dispersion for each principal maximum.
Which m values will give bigger dispersions?
Why does this "improvement" eventually have diminishing returns?
Resolving Power
A separate question is: how close together
(Δλ)
can two colours be and still be resolved
by the grating?
Well, the two lines will just be resolved when
the m^{th} order principal maximum of one
falls on top of the first minimum beyond the m^{th}
order principal maximum of the other.
By requiring the path length difference between
adjacent slits to differ (for the two colours) by
λ/N
(where N is the number of slits) we ensure that the
phasor diagram for the second colour will just close
(giving a minimum) when that of the first colour
is a principal maximum. This gives a resolving power
R_{m} = λ/Δλ = m N.
"Electrostatics"
Topic
Found 6 Lectures on Mon 07 Oct 2024.
"The Electric Field"
by Jess H. Brewer
on 20030123:
Electric Dipoles
Show attempted 3-D visualization. Calculate torque in terms of dipole moment p = q d.
Line of Charge
Calculate field a distance r away from [the centre of? VOTE!] a finite line charge of length L. Treat limiting cases r >> L and r << L.
"Gauss' Law I"
by Jess H. Brewer
on 20030126:
What is Coulomb's Law "saying" about the flux of
"electric field lines"? (Work backwards to Gauss' Law.)
Extend to more complex isotropic charge distributions.
. . .
"Gauss' Law II"
by Jess H. Brewer
on 20030127:
Conductors
 Why the electric field has to be zero inside a conductor . . .
 Charges in cavities in conductors . . .
 Field and charge at the surface of a conductor . . .
Gauss' Law with all the Constants
 Spherically Symmetric Charge Distributions
 Cylindrically Symmetric Charge Distributions
 Planar Symmetric Charge Distributions
"Potential"
by Jess H. Brewer
on 20030130:
Electrostatic Potential
Notation: I will use V here instead of φ
["phi"] (chosen in class)
because HTML still has no Greek letters except "µ".
In principle, it's easier to find
E from V than vice versa,
because it's a lot easier to integrate up a scalar function
than a vector one! (And derivatives are easy, right?)
However, in practice (at the level of P108) we are not
going to be evaluating arbitrary, asymmetric charge distributions,
but only the simple symmetric shapes and combinations thereof
(using the principle of additive superposition). In these cases
Gauss' Law allows us to find E easily and
find V by simple integrations; so that's mostly what we do.
Examples
- Potential of a point charge
(or any spherically symmetric charge distribution):
note convention of letting V be zero
at infinite r.
- Potential difference between
two concentric spheres: easy & obvious.
- Potential of a cylinder:
impossible to choose V = 0 at infinite radius.
This is because you can't actually have an infinite
line of charge without having an infinite charge.
- Potential difference between
two concentric cylinders: integral of dr/r
from a to b is ln(b/a).
- Potential of a plane: V = E d, if we
take V = 0 at the plane.
- Potential difference between
two parallel planes: E d again.
"Hammers, Doors, Capacitance & Dielectrics"
by Jess H. Brewer
on 20030201:
Hammers & Doors: Vector Calculus
Strictly optional (alternate, more compact notation).
When you first encountered Algebra it gave you new
powers: now you could calculate stuff that was
"magic" before. (Recall Clarke's Law.)
This is what I call The Hammer of Math:
"When all you have is a hammer, everything looks like a nail."
This year (or maybe earlier) you have discovered that algebra
is also a Door: the door to another whole world of Math,
the world of Calculus. Now you are exploring a new, different
Hammer of Math: the hammer of calculus, with which you can
drive a whole new class of nails!
This cycle never ends, unless you give up and quit. Every year you
will have a new Door of Math opened by the Hammer you mastered the
year before. Next year you will probably go through the Door of
Vector Calculus to find elegant and powerful Hammers for
the nails of vector fields. I am not supposed to tell you about
this, because it's supposed to be too hard. So I won't hold you
responsible for this topic, but I gave you a handout on it (and will
discuss it a little in class) because you deserve a glimpse of the
road ahead. Think of it as a travel brochure that shows only the
nice beaches and night clubs.
Topo maps and equipotentials: meaning of the
gradient operator.
Capacitors and Capacitance
(Textbook's Ch. 30)
Capacitance C is a measure of a capacitor's
capacity to hold charge
(for a given voltage between the plates).
Thus Q = C V or
V = (1/C) Q.
Units: a Farad (F) is one Coulomb per Volt.
Start with simplest (and most common) example,
the parallel plate capacitor: this case defines the terms
of reference clearly and is in fact a good approximation to most
actual capacitors. Know the formula by heart and
be able to derive it yourself from first principles!
The capacitance of a parallel plate capacitor of area A
with the plates separated by d is given by
C_{pp} = ε_{0} A/d.
The capacitance of a capacitor consisting of
concentric spherical shells of radii a and b
is given by
C_{sph} = 4 π ε_{0} [(1/a) - (1/b)]^{-1}.
Capacitance of the Earth: treat the Earth as a
conducting sphere of radius R_{E} =
6.37 x 10^{6} m. If the "other plate" is a concentric
spherical conducting shell at infinite radius, what will be the
potential difference between the "plates" when a charge of
Q is moved from the shell at infinite radius
to the Earth's surface? Answer: 710 µF
(later on I will show you a capacitor you can hold in the palm of
your hand that has a thousand times the capacitance of the Earth!)
Note: this is not the same thing as you calculated
in the 3rd homework assignment.
Pass around a 1 F capacitor: more than 1000 times as big as the
Earth!
The capacitance of a capacitor consisting of
concentric cylindrical shells of radii a and b
and equal length L is given by
C_{cyl} = 2 π ε_{0} L/ln(b/a).
Note that in each case C = ε_{0} x (numerical constant)
x (distance).
Check that this makes dimensional sense.
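A quick numerical check of two of these formulas in Python (the plate dimensions are made up for illustration; the Earth numbers are the ones quoted above):

```python
import math

eps0 = 8.85e-12            # permittivity of free space (F/m)

# Parallel plate: C = eps0 A / d, e.g. 1 m^2 plates 1 mm apart.
C_pp = eps0 * 1.0 / 1e-3

# Isolated sphere (the other "plate" at infinite radius):
# C = 4 pi eps0 R, from C_sph with b -> infinity.
R_E = 6.37e6               # Earth's radius (m)
C_earth = 4 * math.pi * eps0 * R_E

print(f"C_pp    = {C_pp:.2e} F")     # a few nanofarads
print(f"C_earth = {C_earth:.2e} F")  # ~ 7.1e-4 F, i.e. about 710 uF
```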
"Electrostatic Springs and Energy Storage"
by Jess H. Brewer
on 20030204:
Capacitor as an Electrostatic "Spring"
If you like you can think of 1/C as a sort of
"electrical spring constant": if you move Q away from its
equilibrium value (zero) you get a "linear restoring voltage".
Arrays of Capacitors
An arbitrary network of capacitors can always be replaced by a single
equivalent capacitor.
An array of capacitors in parallel has an equivalent capacitance
equal to the sum of their separate capacitances. [Explain.]
An array of capacitors in series has an equivalent
inverse capacitance equal to the sum of their separate inverse capacitances. [Explain.]
Dielectric Materialism (Ch. 29)
Basically just replace ε_{0}
by ε = κ ε_{0}
(where κ is the
dielectric constant, a pure number always ≥ 1)
and everything takes care of itself.
Thus C always gets bigger (by a factor of κ)
when there is a dielectric in between the plates. [Explain.]
Electrostatic Energy Storage
Recall the question at the beginning: why isn't a big capacitor a good
replacement for a battery? Because the voltage decreases with the
remaining charge! This has other implications as well....
The energy required to put a charge Q on a capacitor C
is not just VQ! The first bit of charge goes on at zero
voltage (no work) and the voltage (work per unit charge added)
increases linearly with Q as the charge piles up:
V = (1/C) Q. Thus dU = (1/C)
Q dQ. Integrating yields U = (1/2C)
Q^{2} or U = (1/2)C V^{2}.
For a parallel plate capacitor, V = E d and
C = ε_{0} A/d.
Thus
U = (1/2) ε_{0} A E^{2} d. But A d is the
volume of the interior of the capacitor (the only place where
the electric field is nonzero). Thus if u is defined to be
the energy density per unit volume, then we have
u = (1/2) ε_{0} E^{2}. "It turns out" that this prescription is
completely general! Wherever there is an electric field,
completely general! Wherever there is an electric field,
energy is stored at a density u given by the formula above.
It is now getting really tempting to think of E as something
"real", not just a mathematical abstraction.
"Elementary Particles"
Topic
Found 1 Lectures on Mon 07 Oct 2024.
"Elementary Particles"
by Jess H. Brewer
on 20030408:
"EXAM"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"First Midterm"
by Jess H. Brewer
on 20030206:
"Second Midterm"
by Jess H. Brewer
on 20030313:
"Faraday & Inductance"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Magic!"
by Jess H. Brewer
on 20030305:
"Deriving" Faraday's Law:
Consider a metal bar of length L moving sideways through a
uniform perpendicular magnetic field B:
assuming positive charges can move, use Hall effect to find
resultant voltage between ends of the bar:
V_{Hall} = v B L
where v = dx/dt is the speed of the bar.
Now close the loop with a wire that goes outside the field
region and let current flow. The direction of the current
will be such as to make its own field either parallel or antiparallel
to the original B depending on whether the loop is
moving into or out of the field region.
(This is the essence of Lenz's Law.)
Now reformulate in terms of the net magnetic flux
Φ_{M} = B L x
through the loop: V = B L dx/dt = -dΦ_{M}/dt
(Faraday's Law).
This also works for an arbitrary shaped loop, in which case V
is the integral of E around the closed path (loop)
enclosing the area through which the magnetic flux is changing.
Implausible Generalizations:
- What if you leave the loop still and move the magnet instead?
This is just a shift of reference frame, so the Hall voltage shouldn't
suddenly disappear. But where does the "induced EMF" come from
now? If you use Faraday's formulation it doesn't matter why
the flux is changing. Hmmm....
- What if you are making B with
another coil which you turn on/off?
(The basic idea of a transformer!)
Again, Faraday's Law gives the right answer "magically".
- What if you poke a long straight solenoid through the loop
and turn it on? No field acts anywhere around the loop,
and yet it magically "knows" that the flux through it has changed!
Faraday's Law is more general than my derivation!
Remember the Order:
- What is Φ_{M}?
- Rate of change of Φ_{M} = induced EMF.
- Induced EMF causes (if it can) a current to flow
(through resistance R, if any).
- That current (if it can flow) will (would) make its own
magnetic flux to counteract the original flux change.
(Lenz's Law)
- If real currents flow, magnetic forces result.
These are a result of the induced current; include them as
the last step in this description.
Some Nice Demos
- Magnet falls slowly through copper tube due to induced
eddy currents that dissipate gravitational energy as
Ohmic heating.
- Show pulse in ammeter as flux from magnet "cuts into"
small coil.
- Do the same thing with big floppy coil.
- With student volunteers, flip the big floppy coil
in the Earth's field and see the pulse in the ammeter.
Now we are really "seeing" the Earth's field! Sort of.
- (Possible exam question: will we get more induced voltage
with the coil unfolded as big as possible with only one turn,
or with the same length of wire wrapped many times around a
smaller loop?)
"Inductance"
by Jess H. Brewer
on 20030305:
Inductance
Suppose the long straight solenoid has a cross-sectional area A.
Then the magnetic flux through it is Φ_{M} = N A B (since B is uniform
inside and each field line links all N turns). Thus Φ_{M} = N A µ_{0} n I = L I if we define the
inductance of the solenoid to be L = A µ_{0} N^{2}/ℓ, where ℓ is the
actual length of the solenoid. This can also be written
L = µ_{0} n^{2} ℓ A. Note that
ℓ A is the volume
of the inside of the solenoid (where the field is).
Similarly for the toroidal solenoid, if it has a rectangular cross section so that integrating B over that area is easy:
L_{toroid} = (µ_{0}/2π) N^{2} h
ln(b/a), where h is the height of the solenoid
and a & b are its inner & outer radii,
respectively.
Note that in each case L has the form µ_{0}
N^{2} x, where x is some length.
Thus if L is measured in Henries [1 Henry = 1 Weber per Amp,
where a Weber (1 Tesla metre^{2}) is the unit of magnetic
flux] then µ_{0} has units of Henries per metre.
Examples:
First let's ask how big a typical coil's inductance might be. If we
make a circular 1000-turn coil 1 cm in radius and 10 cm long (easy
enough to make in your kitchen) it would have an inductance of
about 4 mH [milliHenries]. Work it out.
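Here is the "work it out" in Python, using the coil dimensions quoted above and L = µ_{0} N^{2} A/ℓ:

```python
import math

mu0 = 4 * math.pi * 1e-7      # permeability of free space (H/m)

# The "kitchen coil" from the lecture: 1000 turns, 1 cm radius, 10 cm long.
N, radius, length = 1000, 0.01, 0.10
A = math.pi * radius**2       # cross-sectional area (m^2)
L = mu0 * N**2 * A / length   # solenoid inductance

print(f"L = {L*1e3:.1f} mH")  # ~ 3.9 mH, i.e. "about 4 mH"
```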
Now some demonstrations:
- Magnetically damped pendulum and eddy currents.
- Transformers and nail melter.
- Jumping rings and forces on induced currents.
Stored Energy:
Faraday's Law can be written V = -L dI/dt.
If we move a bit of charge dQ = I dt through the wire
against that EMF, we do electrical work dU_{L} =
-V dQ = L I dI. Integrating from I = 0 up to
the final current gives U_{L} = (1/2) L
I^{2}.
In a long solenoid, I = B/µ_{0}n and L is given above, so U_{L} =
(1/2) (A µ_{0} n^{2} ℓ)
(B/µ_{0}n)^{2}
= (1/2µ_{0}) B^{2} A ℓ. But A ℓ is just the volume of the interior of the
solenoid (where the field is), so the energy density per unit
volume stored in the solenoid is given by
u_{magn.} = (1/2µ_{0}) B^{2}.
Like the analogous result for the energy density stored in an
electric field, this result is completely general,
far more so than this example "derivation" justifies.
"Inductance in Circuits"
by Jess H. Brewer
on 20030305:
Work through an assortment of "simple" (meaning all elements in
series) circuits:
(V_{0} stands for a battery)
- RC and V_{0}RC (review):
Q(t) = Q_{0} e^{-t/RC} and
Q(t) = CV_{0}(1 - e^{-t/RC}), respectively.
- RL and V_{0}RL:
I(t) = I_{0} e^{-Rt/L} and
I(t) = (V_{0}/R)(1 - e^{-Rt/L}), respectively.
- LC:
try Q(t) = Q_{0} e^{Kt} and find that
K^{2} = -1/LC.
Slight problem having the square of a real number be negative.
Resolve simply by letting K be imaginary:
K = iω, where ω = (1/LC)^{1/2}.
Go on from there . . . .
For each case, first picture the Mechanical analogue
and then ask "What happens?" before launching into the
mathematics.
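The LC claim (that K^{2} = -1/LC forces an oscillatory solution) can be checked numerically: with ω = (1/LC)^{1/2}, Q(t) = Q_{0} cos ωt should satisfy L d^{2}Q/dt^{2} + Q/C = 0. A sketch in Python with illustrative component values, using a central difference for the second derivative:

```python
import math

# Verify that Q(t) = Q0 cos(omega t), omega = 1/sqrt(LC), satisfies
# the LC equation  L d2Q/dt2 + Q/C = 0  (central-difference check).
Lind, C, Q0 = 1e-3, 1e-6, 1.0          # 1 mH, 1 uF (illustrative values)
omega = 1.0 / math.sqrt(Lind * C)
Q = lambda t: Q0 * math.cos(omega * t)

t, h = 3e-5, 1e-8                      # sample time and difference step
d2Q = (Q(t + h) - 2 * Q(t) + Q(t - h)) / h**2
residual = Lind * d2Q + Q(t) / C

print(residual)                        # tiny compared with Q/C ~ 6e5
```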
"AC Circuits"
by Jess H. Brewer
on 20030310:
What happens when a series LCR circuit is driven by
a sinusoidal voltage at a given frequency? You have already seen
the answer in the 109 lab; now let's see if we can understand it
better.
The Mechanical Analogue
You have seen this several times before.
The SteadyState Solution
Using complex notation.
Current is what we really want to know. We'd like to write
V_{source} = I R_{eff}. Then
R_{eff} = R - i X_{C} + i X_{L}, where X_{C} = 1/ωC and
X_{L} = ωL
are the reactances of the capacitor and inductor,
respectively, and ω is the
driving frequency (in radians per second, don't forget!).
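In Python the steady-state phasor algebra is one line of complex arithmetic. A sketch with made-up component values (the sign convention matches R_{eff} above):

```python
import math

# Series LCR driven at angular frequency omega:
# impedance Z = R + i(X_L - X_C), X_L = omega*L, X_C = 1/(omega*C).
R, Lind, C = 100.0, 1e-3, 1e-6        # illustrative component values
omega = 2 * math.pi * 5000            # driving frequency, rad/s

Z = complex(R, omega * Lind - 1.0 / (omega * C))
V0 = 10.0                             # drive amplitude, volts
I0 = V0 / abs(Z)                      # current amplitude
phase = math.atan2(Z.imag, Z.real)    # phase of Z; current lags V by this

print(I0, phase)
```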
"Interference"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"Interference in Films & Slits"
by Jess H. Brewer
on 20030324:
Huygens' Principle
"Every point on an advancing wave front may be considered a source of
outgoing spherical waves." [paraphrased]
The concept of a "wave front" is a little vague, of course; you can
think of it as the "crests" of waves if you are visualizing waves
in water or on stretched strings, but in 3-dimensional waves a "crest"
corresponds to a locus of maximum (positive) amplitude of the wave.
In general any locus of fixed phase will do just as well, as
long as you use the same fixed phase
(plus 2π)
to define the adjacent "wave front".
Naturally we make no attempt to draw 3D spherical waves
on a flat page;
all the 2D pictures are meant only as "conceptual shorthand".
This will be even more abstract as we start drawing "phasor"
diagrams.
Be sure to review your trigonometry: we'll be using it!
Interference from TWO SLITS
(Young's experiment modernized)
The "near field" intensity pattern (where "rays" from the
two sources, meeting at a common point, are not even approximately
parallel) is difficult to calculate, though it is easy enough
to describe how the calculation could be done. We will stay
away from this region, far away, so that all the interfering
rays may be considered parallel. Then it gets easy!
Simplified sketch assuming incident waves hitting the barrier
in phase (i.e. normal incidence) shows an obvious
path length difference of
δ = d sin θ
between the waves heading out from the two slits at that angle.
If this path length difference is an integer multiple
of the wavelength λ we get constructive
interference. This defines the n^{th}
Principal Maximum (PM):
d sin θ_{n} = n λ.
Often we are looking at the position of interference maxima on a
distant screen and we want to describe the position
x of the n^{th} PM on the screen rather than
the angle θ_{n}
from the normal direction. We always define x = 0 to be
the position of the central maximum (CM), i.e.
θ = 0.
If the distance L from the slits to the screen is >> d
(the distance between the slits), as it almost always is, then
we can use the small angle approximations
sin θ ≈ tan θ ≈ θ,
so that θ_{n} ≈ n λ/d
and x_{n} = L tan θ_{n} ≈ L θ_{n},
giving x_{n} ≈ n L λ/d.
Be sure you can do calculations like these yourself.
Such problems are almost always on the final exam.
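A sample calculation of the kind referred to above, in Python (wavelength, slit separation and screen distance are made-up but typical values):

```python
# Fringe positions on a distant screen in the small-angle limit:
# x_n ~ n L lambda / d   (illustrative numbers, green light).
lam = 500e-9      # wavelength (m)
d = 0.1e-3        # slit separation (m)
Lscreen = 2.0     # slits-to-screen distance (m)

for n in range(4):
    x = n * Lscreen * lam / d
    print(f"PM n={n}: x = {x*1e3:.1f} mm")   # fringes 10 mm apart
```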
Time permitting, I will start on Multiple Slit Interference.
The handout covers this in detail; if I don't cover it today,
be sure to study the handout over the weekend!
"N-Slit Interference"
by Jess H. Brewer
on 20030325:
PHASORS ON STUN!
We now take you to a world beyond time and space, a world of pure
mathematics where what you see are wave amplitudes and phases
of different rays of a coherent wave with a given frequency and
wavelength, interfering to make a combined amplitude  the world of
The Phasor Zone. (Dewdewdewdew, dewdewdewdew,
DEWdiddlyew...)
In this abstract world each wave is seen as an amplitude
A_{i} pointing away from some origin at a
phase angle φ_{i}
in "phase space": a phasor. All the phasors representing
different wave amplitudes are "precessing" about the origin at a
common angular frequency ω
(the actual frequency of the waves) but their phase differences
do not change with time. Thus we can pick one wave arbitrarily to
have zero phase and "freeze frame" to show the angular orientations
(and lengths) of all the others relative to it.
Phasors are vectors (albeit in a weird space) and so if they
are to be added linearly we can construct a diagram for the resultant
by drawing all the amplitudes "tip-to-tail" as for any vector addition.
If there are any configurations that "close the polygon" (i.e.
bring the tip of the last phasor right back to the tail of the first)
then the net amplitude is zero and we have perfect destructive
interference!
For an idealized case of N equal-amplitude waves
out of phase with their neighbours by an angle φ,
we will get a minimum when
N φ = n(2π),
satisfying the above criterion.
This is the condition for the n^{th} minimum
of the N-slit interference pattern; we usually only care
about the first such minimum, which occurs where
N φ = 2π.
To see where in real space that first minimum occurs,
we have to go back to the origin of the phase differences
due to path length differences:
φ/2π = δ/λ = d sin θ/λ,
giving
sin θ_{first min.} = λ/(N d).
"N-Slit Interference"
by Jess H. Brewer
on 20030330:
PHASORS ON STUN!
We now take you to a world beyond time and space, a world of pure
mathematics where what you see are wave amplitudes and phases
of different rays of a coherent wave with a given frequency and
wavelength, interfering to make a combined amplitude  the world of
The Phasor Zone. (Dewdewdewdew, dewdewdewdew,
DEWdiddlyew...)
In this abstract world each wave is seen as an amplitude
A_{i} pointing away from some origin at a
phase angle φ_{i}
in "phase space": a phasor. All the phasors representing
different wave amplitudes are "precessing" about the origin at a
common angular frequency ω
(the actual frequency of the waves) but their phase differences
do not change with time. Thus we can pick one wave arbitrarily to
have zero phase and "freeze frame" to show the angular orientations
(and lengths) of all the others relative to it.
Phasors are vectors (albeit in a weird space) and so if they
are to be added linearly we can construct a diagram for the resultant
by drawing all the amplitudes "tip-to-tail" as for any vector addition.
If there are any configurations that "close the polygon" (i.e.
bring the tip of the last phasor right back to the tail of the first)
then the net amplitude is zero and we have perfect destructive
interference!
For an idealized case of N equal-amplitude waves
out of phase with their neighbours by an angle φ,
we will get a minimum when
N φ = n(2π),
satisfying the above criterion.
This is the condition for the n^{th} minimum
of the N-slit interference pattern; we usually only care
about the first such minimum, which occurs where
N φ = 2π.
To see where in real space that first minimum occurs,
we have to go back to the origin of the phase differences
due to path length differences:
φ/2π = δ/λ = d sin θ/λ,
giving
sin θ_{first min.} = λ/(N d).
Analytical Solution for N Slits
You need to actually remember, understand, and be able to use
everything above. The full derivation of the formula
for the intensity as a function of φ
is another matter. It is shown in gory detail on the
Phasors handout and I will go over it in class,
but you will not be expected to derive it, nor would there be
many occasions when you would need to use it, as long
as you can reason out the positions of principal maxima and
secondary maxima or minima using the qualitative arguments
above. Nevertheless, it is nice to see a real derivation;
you may even wish to run the result through Mathematica
(or some other software package) to make nice plots of
"interference patterns" for your own amusement and the
edification of your friends.
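The qualitative phasor argument above is also easy to verify by brute force: just add up N unit phasors e^{inφ} and square the magnitude of the sum. A sketch in Python (N = 5 is arbitrary):

```python
import cmath
import math

# Direct phasor sum for N equal-amplitude slits: the resultant
# amplitude is sum over n of e^{i n phi}; intensity is |amplitude|^2.
def intensity(N, phi):
    amp = sum(cmath.exp(1j * n * phi) for n in range(N))
    return abs(amp) ** 2

N = 5
print(intensity(N, 0.0))                # principal maximum: N^2 = 25.0
print(intensity(N, 2 * math.pi / N))    # first minimum: ~ 0 (polygon closes)
```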
"Maxwell's Equations"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Displacement Current"
by Jess H. Brewer
on 20030310:
Ampère's Law revisited
What happens when you run a current "through" an uncharged
capacitor? Apply Ampère's Law around the wire; now extend the
"open surface bounded by the closed loop" so that it passes through
the gap in the capacitor without cutting any currentcarrying wires.
(Imagine that you are making a big soap bubble with a hoop.) Does
the magnetic field around the loop suddenly disappear? I think not!
Maxwell proposed a "timevarying electric flux" term symmetric to the
changing magnetic flux in Faraday's Law to resolve this paradox.
Suddenly a time-varying electric field generates a
magnetic field, as well as the reverse.
"A Wave in Nothing"
by Jess H. Brewer
on 20030316:
"The Magnetic Field"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"I x B: the Lorentz Force"
by Jess H. Brewer
on 20030223:
The Magnetic Field:
why do we bother to define such a thing, rather than just looking at
the direct force law? Well, have you ever looked at
the direct force law? It's too hard!
Instead we postulate some preexisting magnetic field B
(Who knows where it came from? None of our business, for now.)
and ask, "How does it affect a moving charged particle?" The answer
(SWOP) is the Lorentz Force:
F =
Q(E + v x B)
where v is the vector velocity of the particle,
"x" denotes a cross product (review this!)
and we have thrown in the Coulomb force due to an electrostatic field
E just to make the equation complete.
Note that a bunch of charged particles flowing through a short
piece of wire (what we call a current element
I dl) is interchangeable with a single
moving charge Qv. Discuss units briefly.
Speaking of units, the Coulomb is defined as an
Ampere-second, and an Ampere is defined
as the current which, when flowing down each of two parallel wires
exactly 1 m apart, produces a force per unit length of
2 x 10^{-7} N/m between them.
No kidding, that's the official definition. I'm not making this up!
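You can sanity-check that definition: the force per unit length between two long parallel wires is F/L = µ_{0} I_{1} I_{2} / 2πd, so 1 A in each wire at 1 m spacing should give exactly 2 x 10^{-7} N/m. A quick sketch (using the classical defined value µ_{0} = 4π x 10^{-7} T·m/A):

```python
import math

MU0 = 4e-7 * math.pi  # T·m/A, the classical defined value

def force_per_length(i1, i2, d):
    """Force per unit length between two long parallel wires carrying
    currents i1 and i2 a distance d apart: mu0 * i1 * i2 / (2 pi d)."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# The defining case from the text: 1 A in each wire, 1 m apart.
f = force_per_length(1.0, 1.0, 1.0)   # 2e-7 N/m, as advertised
```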
Some Disturbing Examples
Consider two like charges at rest with respect to each other:
the force of each on the other is repulsive, right?
Now fly over them in a jet plane and look down on them: in your
frame they are moving parallel to each other; this gives a
small attractive force. If you fly by in the Enterprise
instead, you can go much faster; eventually the Coulomb repulsion
can be overcome by the magnetic part of the Lorentz force.
But surely the particles are either attracted or repelled, not both!
Wait! It gets worse!
Now try visualizing the forces between two charged particles
moving at right angles to each other. What happened to
Newton's Third Law?! This conundrum is only resolved by the
relativistic transformations of E and B.
(Stay tuned . . . . )
Realistic Numbers
In the lab one can "easily" make a field of 1 T (about 20,000 times
the Earth's field here). A wire 1 m long carrying 1 A of current
will "feel" a net sideways force of only 1 N. Motors etc.
need lots of "turns" of wire to make a decent sized torque.
Circulating Charges
If v is perpendicular to B,
we get a very familiar situation: the force on the particle is
always normal to its velocity, so it cannot change its speed;
and yet it is constantly accelerated. Ring a bell? Come on,
you know this: it's good ol' uniform circular motion!
Solve the familiar equations to get
p = Q B r
where p is the momentum and r is the radius of the
orbit. "It turns out" that this relation is relativistically
correct, but you needn't concern yourself with this now.
Playing with angular frequency and such reveals the nice feature
that the period of the orbit is independent of the speed
of the particle! This nice feature (which is not true
relativistically, but only at modest speeds) is what makes
cyclotrons possible. See TRIUMF.
"What B Do!"
by Jess H. Brewer
on 20030225:
More tricks with The Lorentz Force:
F = Q
(E + v x B)
Cyclotrons
Since (for E = 0)
F is always perpendicular
to both v and B,
the Lorentz force can never change the speed
(or kinetic energy) of the particle.
In a uniform B, the resultant motion
in the plane perpendicular to B
is uniform circular motion with a radius of curvature
r given by v^{2}/r = QvB/m
or p = mv = QBr. Since v = rω,
this means ω = QB/m, a constant angular frequency (and therefore
a constant orbital period) regardless of v! Faster particles
move in proportionally larger circles so that the time for a full
orbit stays the same (as long as v << c).
This is what makes cyclotrons possible.
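A small numerical sketch of p = QBr and ω = QB/m. The particle (a proton, with rounded SI values), the field, and the speeds are arbitrary choices, just to show that doubling v doubles r but leaves ω, and hence the period, untouched:

```python
def cyclotron(q, m, b, v):
    """Orbit radius r = m v / (q B) and angular frequency
    omega = q B / m for motion perpendicular to a uniform field B."""
    return m * v / (q * b), q * b / m

Q_P, M_P = 1.6e-19, 1.67e-27   # proton charge (C) and mass (kg), rounded
r_slow, w_slow = cyclotron(Q_P, M_P, 1.0, 1e5)
r_fast, w_fast = cyclotron(Q_P, M_P, 1.0, 2e5)
# r doubles with v; omega (and the orbital period 2*pi/omega) does not.
```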
Magnetic Mirrors
In general, a charged particle moves along a spiral
in a magnetic field. If it moves toward a region of higher field,
its momentum perpendicular to the field increases at the expense
of its momentum along the field; eventually the latter stops
and reverses direction: a magnetic mirror!
Examples: the Van Allen Belt, Tokamaks and Cosmic Rays from
the Universe's biggest accelerators.
Remember: the Lorentz force does no work!
(It's like a really smart Physicist. :)
Wien Filters
If v, B and E are all
mutually perpendicular, the particle will pass undeflected
if E = vB. This makes a nice velocity selector.
If you also measure the radius of curvature of the same particle's
path in B with no E, you know its momentum.
Putting these together gives you the ratio Q/m.
If you know Q (which was not so easy until Millikan's
"oil drop" experiment) then you know m.
This is the basis for conventional mass spectroscopy.
However, the cyclotron is an even better mass spectrometer.
Why?
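The recipe above fits in a couple of lines: the Wien filter gives v = E/B, and p = QBr then gives Q/m = E / (B^{2} r). A sketch (any real numbers would come from your own apparatus):

```python
def charge_to_mass(e_field, b_field, radius):
    """Q/m from a Wien filter plus a radius measurement:
    v = E/B (undeflected), and p = m v = Q B r gives
    Q/m = v / (B r) = E / (B^2 r)."""
    v = e_field / b_field          # speed selected by the Wien filter
    return v / (b_field * radius)  # charge-to-mass ratio

# Self-consistency check: E = 2 V/m, B = 1 T selects v = 2 m/s;
# a 4 m orbit radius then implies Q/m = 0.5 C/kg.
ratio = charge_to_mass(2.0, 1.0, 4.0)
```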
Hall effect
Transverse voltage due to moving charges
trying to curve in a magnetic field. Useful for determining both
the concentration (number per unit volume) and the magnitude
and sign of the charge on individual carriers in any material.
Rail Gun
Just a quick handwaving description. Go look on the Web for many
designs, if you're interested; but spooks will probably be watching
you thereafter.
"Do B, do B, do!"
by Jess H. Brewer
on 20030228:
Biot & Savart vs. Ampère: now that you know how to do integrals, you're expected to use them! (Dang! Ignorance is easier!)
Law of Biot & Savart
Long Straight Wire via Biot & Savart: Derivation shown for general case of a finite wire segment. From this one could fairly easily find the field produced by a loop made of several straight segments. Answer: field makes right-handed circles around the wire, magnitude (for a long wire) B(r) = µ_{0}I/2πr where r is the distance from the wire. (Simple answer from a very complicated derivation.)
Circular Current Loop via Biot & Savart: too hard to calculate the field anywhere except on the axis of the loop. There (by symmetry) the field can only point along the axis, in a direction given by the RHR: curl the fingers of your right hand around the loop in the direction the current flows, and your thumb will point in the direction of the resulting magnetic field. (Sort of like the loops of B around a line of I, except here B and I have traded places.) As usual, symmetry plays the crucial role: current elements on opposite sides of the loop cancel out each other's transverse field components, but the parallel (to the axis) components all add together. As for the electrostatic field due to a ring of charge, we get the same contribution to this non-canceling axial field from each element of the ring.
Ampère's Law
The integral of B_{//} dl around a closed loop (where B_{//} is the component of B along the path at each element dl) is equal to µ_{0} times the net current I_{encl} linking the loop (i.e. passing through it). (Used like Gauss' Law only with a path integral.)
Long Straight Wire via Ampère's Law: It's so easy!
Any Cylindrically Symmetric current distribution gives the same result outside the conductor; inside we get an increase of B with distance from the centre, reminiscent of Gauss' Law....
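For a uniform current density that statement can be made concrete: inside the wire the enclosed current grows as r^{2}, so B grows linearly with r; outside, all of I is enclosed and B falls as 1/r. A sketch (uniform current density assumed):

```python
import math

MU0 = 4e-7 * math.pi  # T·m/A

def b_field_wire(i, a, r):
    """Field at distance r from the axis of a wire of radius a
    carrying total current i spread uniformly over its cross-section,
    via Ampere's law: linear in r inside, 1/r outside."""
    if r < a:
        return MU0 * i * r / (2 * math.pi * a**2)
    return MU0 * i / (2 * math.pi * r)

# Halfway to the surface the field is half its surface value,
# and the two formulas agree at r = a.
```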
Circular Current Loop via Ampère's Law: Forget it! Ampère's Law is of no use unless you can find a path around which B is constant and parallel to the path. There is no such path here.
"Thermal Physics"
Topic
Found 6 Lectures on Mon 07 Oct 2024.
"First Class!"
by Jess H. Brewer
on 20030104:
WHAT IS PHYSICS ABOUT?
Until about the 16th Century, science was dominated by the Aristotelian
paradigm, caricatured as follows: "Get to know how things are."
That is, concentrate on the phenomena; for instance,
what happens when you touch a hot stove? If you asked an Aristotelian
why your finger gets uncomfortably hot, the answer would be,
"Because that's the way it works, stupid." We are still pretty
Aristotelian in our hearts today; the textbook reflects this 
it generally delivers a concise description of "how things are"
(usually as a concise formula in terms of defined quantites) and
then shows how to use that principle to calculate stuff; only later
(if ever) does it show why the world behaves that way.
Starting nominally with Galileo, "modern scientists" began to ask
questions that Aristotelians would have considered impertinent
and even arrogant, like, "Why does the heat flow the way
it does?" or "How heavy is an atom?" or "Why are there only three
Generations?"
[The last refers to leptons and quarks, not Star Trek.]
In my opinion, PHYSICS is about those impertinent questions.
It goes like this: we observe a PHENOMENON and gather empirical
information about it; then we MAKE UP A THEORY for why
this behaviour occurs, DERIVING it mathematically so we can check it
for consistency, extend it and finally use it to PREDICT hitherto
unobserved NEW phenomena as well as answering our original questions.
Then we can go do EXPERIMENTS to see if the predicted phenomena do
in fact occur. If not (usually), back to the drawing board.
But over time, this has given us a ladder to climb....
I am going to try to follow this sequence in my lectures, so that
PHYS 108 will have some of the flavour of actual science as you
will experience it if you become a Scientist (not just a Physicist).
Some of you won't like it. Sorry. As one of the lesser philosophers
of the 20th Century said, "You can't please everyone, so you have to
please yourself." And speaking of songs...
(musical introduction to Thermodynamics)
I mainly want everyone to understand that the approach I am taking
to introducing Thermal Physics is very unconventional,
and that the glib nonsense you were probably taught in high school
is not what I expect you to understand by entropy
or temperature.
"Entropy and Temperature"
by Jess H. Brewer
on 20030108:
PHENOMENON: If you touch a hot stove, you get burned.
QUESTION: How come?
HYPOTHESIS: Energy flows spontaneously from the hot stove
to the cooler skin because of random exchanges.
THEORY: Wait a minute, I have more questions! What does "hot" mean?
"Cooler"? What sort of "random exchanges"? In any case the socalled
"hypothesis" just begs the question of "Why?"  we need to start
earlier.
Revised QUESTION: What's "hot"?
HYPOTHESIS: It has something to do with randomness and energy.
THEORY: Let's make up the simplest possible definition of what we
mean by "random": the Fundamental Assumption of Statistical
Mechanics, namely
Every accessible fully specified state of the system
is a priori equally likely.
Whoa! This has some ringers in it. We need to define (as well as
we can) exactly what we mean by "accessible", "fully specified state"
(or for that matter "state"), "system" and "a priori".
Here we get to the details, for which I think it is appropriate
to say, "You had to be there!"
Some topics touched upon: Dirac notation ("|a>"), energy
conservation, parking lots, counting, binomial distributions,
the multiplicity function, entropy, microcanonical ensembles,
maximum likelihood, extrema and derivatives, temperature
and the Cuban economy.
Everything is on the Thermal Physics handout (Ch. 15 of the
Skeptic's Guide). I went from the beginning through the
definition of (the dimensionless form of) entropy and on to
the definition of inverse temperature as the criterion for the
most probable configuration, i.e. thermal equilibrium. This
stuff is essential, fundamental and important! I expect you to
know it well enough to reproduce the derivation on an exam.
(Not many things fall into this category.) Note that the derivative
of entropy with respect to energy is the inverse temperature;
thus when entropy is a dimensionless number,
temperature is measured in energy units.
"Hot, Hotter and Boltzmann"
by Jess H. Brewer
on 20030108:
 So What IS "Hot" Anyway?
"It takes two to tango." Put two systems in contact so they
can exchange energy (U). Now every accessible microstate
of the combined system is equally likely; but there are
more such states in some "configurations" (divisions of U
between the two systems) than others.
If U_{1} is the total energy in system 1 and
U_{2} is the total energy in system 2,
U_{1} + U_{2} = U is a conserved
constant and any increase in U_{1}
implies a corresponding decrease in U_{2}:
dU_{2} = -dU_{1}
Does such a change lead to more overall possibilities?
For a given configuration, the net multiplicity is the product
of the multiplicities of the individual systems, so the net entropy
is the sum of the entropies of the individual systems.
If the net entropy increases when we take dU_{1}
out of system 2 and move it into system 1, then this new
configuration is more likely, i.e. such an energy transfer
will happen spontaneously. This is exactly what we mean when we say
that system 2 is hotter than system 1! To turn this into a
formal definition of temperature we need some mathematics.
. . .
 A Model System:
N spins in a magnetic field.
Energy per spin = plus or minus µB
where µ is the magnetic moment of one spin.
Total energy U depends only on B and
the number n of spins up.
(The rest are down.)
U = (n - [N - n])µB = (2n - N)µB
Thus the multiplicity function is a binomial
distribution, which is approximately Gaussian
for large N.
Entropy (log of a Gaussian) is an inverted parabola.
Slope of entropy vs. energy goes through zero
and then goes negative. Therefore temperature
goes to infinity, then jumps discontinuously to minus infinity, and finally approaches zero from the negative side.
What does this mean?
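One way to see what it means is to compute the entropy of this spin model directly. The sketch below uses N = 100 spins (an arbitrary choice) and the binomial multiplicity g = C(N, n_up); the entropy peaks at the half-filled configuration and falls off on both sides, which is exactly the inverted parabola whose slope changes sign:

```python
import math

def entropy(n_up, n_total):
    """Dimensionless entropy ln(g), where the multiplicity g is the
    binomial coefficient C(N, n_up): the number of ways to choose
    which n_up of the N spins point up."""
    return math.log(math.comb(n_total, n_up))

N = 100
s = [entropy(n, N) for n in range(N + 1)]
# Maximum entropy at n_up = N/2 (U = 0); beyond it the slope of
# entropy vs energy is negative: "negative temperature".
```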
 Really Small Systems:
a single degree of freedom
- The orientation of a single spin in a magnetic field.
- The height of a single oxygen molecule in the atmosphere.
Put such an infinitesimal "fully specified state" in contact with
a large "heat reservoir" at temperature T. This is
called a Canonical Ensemble. What can we say about the
probability of finding the tiny "system" in that particular
fully specified state?
. . .
Mathematical derivation of the Boltzmann Distribution.
Note that the probability must be normalized.
"Particle in a Box"
by Jess H. Brewer
on 20030115:
Discussion of standing waves
(http://musr.physics.ubc.ca/~jess/hr/skept/Waves/node9.html),
quantization and
de Broglie's Principle:
λ = h/p
[For an introduction to Quantum Mechanics in the form of the script to
a comical play, see
The Dreams
Stuff is Made Of (Science 1, 2000).]
. . .
Discrete wavelengths, momenta and energies. Lowest possible energy
is not zero. As the box gets smaller, the energy goes up!
Handwaving reference to black holes, relativistic kinematics,
massenergy equivalence and how the energy of confinement
can get big enough to make a black hole out of even a photon
if it is confined to a small enough region (Planck length).
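The energy levels behind that argument follow from λ_n = 2L/n and E = p^{2}/2m with p = h/λ, giving E_n = n^{2} h^{2} / 8 m L^{2}. A sketch (the mass and box size below are placeholders, not physical examples):

```python
H = 6.626e-34  # Planck's constant, J·s

def box_energy(n, m, length):
    """Energy of level n for a particle of mass m in a 1-D box of
    width L: E_n = n^2 h^2 / (8 m L^2), from lambda_n = 2L/n
    and E = p^2 / 2m with p = h / lambda."""
    return n**2 * H**2 / (8 * m * length**2)

# The ground state (n = 1) is nonzero, and halving the box
# quadruples every level: confinement costs energy.
```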
" Kinetic Theory of Gases"
by Jess H. Brewer
on 20030115:
EQUIPARTITION OF ENERGY:
- Look at the Boltzmann distribution as a function of T
for fixed E.
- Now look at it as a function of E for fixed T:
what is the average energy if there is a uniform distribution
of possible energies?
- Handwave the factor of two to get the average energy per degree
of freedom (explain what is meant by a "degree of freedom").
IDEAL GAS:
- Mean energy per atom (from Equipartition Theorem).
- Mean energy in a gas of N noninteracting atoms at
temperature T.
- Average momentum from average energy gives average
pressure at temperature T. Voila! the Ideal
Gas Law!
"Momentum Space"
by Jess H. Brewer
on 20030117:
Allowed states are evenly spaced in momentum but not in
energy, which is what we want in our Boltzmann distribution.
Since p is proportional to E^{1/2}, we expect
ρ(E) to vary as 1/E^{1/2}. (Sketch.)
Moving to 3D picture, there is one allowed state (mode) per
unit "volume" in pspace. But if what we want is the
density of states per unit magnitude of the (vector) momentum,
there is a spherical shell of "radius" p and thickness
dp containing a uniform "density" of allowed momenta whose
magnitudes are within dp of p. This shell has a
"volume" proportional to p^{2} and so the density of
allowed states per unit magnitude of p increases as
p^{2}. This changes everything!
. . .
The details are on the Momentum Space handout. You may feel
this is going too far for a First Year course, and I have considerable
sympathy for that point of view. I simply wanted you to have some
idea why the Maxwellian energy and speed distributions have those
"extra" factors of E and
v^{2} in them (in addition to the Boltzmann factor
itself, which makes perfect sense). The textbook (perhaps wisely)
simply gives the result, which is too Aristotelian for us, right?
Rest assured that I will not ask you to reproduce any of these
manipulations on any exam. At most, I will ask a short
question to test whether you understand that one must account
not only for the probability of a given state being occupied
in thermal equilibrium (the Boltzmann factor) but also
how many such states there are per unit momentum
or energy (the density of states) when you want to find
a distribution.
"Waves"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Wave Review"
by Jess H. Brewer
on 20030318:
Today I want to spend some time reviewing the general properties
and behaviour of waves. Only a few of the topics will be new, but
for the rest of the course I am going to be relying on your deep and
intuitive understanding of how waves behave, so I will not just rely
on what you learned last term.
The Electromagnetic Spectrum
. . . from ~1 Hz seismic waves (wavelength ~10^{8} m)
to ~10^{20} Hz gamma rays (wavelength ~10^{-12} m).
We will, out of human biological chauvinism, pay most attention to
the visible spectrum between ~400 and ~800 nm in wavelength.
Simple Harmonic Motion (SHM) in Time and Space
. . . a review of sinusoidal travelling waves.
Solutions of the Wave Equation
. . . the linear Wave Equation has solutions that are
not sinusoidal. In fact, any wellbehaved function of
only u = x  ct, where c is the wave's
propagation velocity, will automatically satisfy the Wave Equation.
Same for u = x + ct, but this describes a wave
propagating in the negative x direction.
Which way is it going? To see the answer, pick a point of
well defined phase on the wave (for instance, where it crosses
the x axis) and then let t increase by a small amount
dt. This changes the phase; what would you need to do with
x to make the phase go back to its original value? If
adding dx to x would compensate for the
shift in t, then the wave must be moving in the positive
x direction. If you must subtract dx from
x to get this effect, it is moving in the negative
x direction. Be sure you understand this thoroughly.
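You can check that reasoning numerically with any function of u = x - ct. The sketch below uses a Gaussian pulse (the speed and time step are arbitrary): after time dt, the same wave value is found at x + c·dt, so adding dx = c·dt compensates for the shift in t and the pulse moves in the +x direction.

```python
import math

def wave(x, t, c):
    """A rightward-moving disturbance: any function of u = x - c t
    satisfies the Wave Equation.  Here: a Gaussian bump."""
    u = x - c * t
    return math.exp(-u * u)

c, dt = 2.0, 0.1
# The peak that sat at x = 0 at t = 0 is found at x = c*dt at t = dt.
```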
Actual Wave Functions: Plane and Spherical Waves
The standard "plane wave" propagating in the z direction
can be generalized to propagate in the k direction,
where k is called the wave vector.
It has the same magnitude as usual, k = 2π/λ,
but the scalar kz is replaced by the dot product
k·r
(where r is the vector position where we want to know
the wave's amplitude). Imagine the wave "crests" as plane sheets
stretching off to infinity in both directions perpendicular to
k, marching along in the k direction
at c. Obviously the plane wave is an idealization.
We won't use this formulation explicitly very often,
but it serves to remind us that the wave has a well
defined direction of propagation, which we habitually
express in the form of rays, a picture inherited from
Newton, who insisted that light was particles following
trajectories like little billiard balls, until Huygens showed
that it was indeed waves.
(We now know they were both right!)
"Reflection, Refraction & Interference"
by Jess H. Brewer
on 20030321:
Group vs. Phase Velocity
For a solution of "the" Wave Equation, they are the same thing:
if ω = ck, then
v_{g} = dω/dk = c = ω/k = v_{p}
But there are lots of other wave equations (a good example
being the Schroedinger Equation for "matter waves" which you will
encounter next year if you take Physics 200) which do not
have this simple linear relationship between the frequency and the
wavelength. We will not dwell on this in P108, but you should be
aware that actual information (or matter itself, in the
case of matter waves) moves at the group velocity
v_{g} = dω/dk,
not at the phase velocity v_{p} = ω/k.
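The distinction is easy to see numerically. The sketch below estimates v_g = dω/dk with a finite difference and compares it with v_p = ω/k for two made-up dispersion relations: a linear one (where they coincide) and a quadratic one of the Schroedinger type (where v_g = 2 v_p):

```python
def velocities(omega, k, dk=1e-6):
    """Phase velocity omega(k)/k and a centred finite-difference
    estimate of the group velocity d omega / dk."""
    v_phase = omega(k) / k
    v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)
    return v_phase, v_group

c = 3.0
vp_lin, vg_lin = velocities(lambda k: c * k, 2.0)    # "the" Wave Equation
vp_quad, vg_quad = velocities(lambda k: k * k, 2.0)  # omega ~ k^2
# Linear dispersion: v_g = v_p = c.  Quadratic: v_g = 2 v_p.
```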
Using "Rays"
Reflection: phase reverses
(Δφ = π, equivalent to an extra path length of λ/2)
at reflection from a denser medium
with a larger index of refraction
(like for a rope attached to the wall);
phase does not reverse at reflection from a less dense
medium with a smaller index of refraction (like for a rope with a
dangling end).
Refraction: "slow light" 
index of refraction n =
c_{vacuum}/c'_{medium}
(always 1 or greater).
Snell's Law: n sin θ = n' sin θ'.
Total Internal Reflection (Ltd.)
"Effective path length"
INTERFERENCE: "Beats" in Space
This applies only for waves with the same frequency.
Disturbances of a linear medium just add together.
Thus if one wave is consistently "up" when the other is "down"
(i.e. they are "180^{o} out of phase") then the
resultant amplitude at that position is zero.
This is called "destructive interference". If they are both "up"
(or "down") at the same time in the same place, that's
"constructive interference".
Thin Films
Assuming normal incidence, add together the "rays" reflected
from both surfaces of the film. Remember the phase change at any
reflections from denser media. Then add in the phase
difference
Δφ = 2π (ΔL/λ)
due to the path length difference ΔL,
and you have the net phase difference between the two reflected waves.
When this is an integer multiple of 2π you have constructive
interference. When it is an odd multiple of π, you have destructive
interference. That's really the whole story.
Examples: the "quarter wave plate" and the
soap film. Oil on water and the fish poem.
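That whole story fits in a few lines of arithmetic. The sketch below assumes normal incidence and asks you to count the phase-reversing reflections yourself; the soap-film case (one flip, path difference 2nt) reproduces the famous result that a very thin film reflects destructively, which is why the top of a draining soap film looks black:

```python
import math

def net_phase(path_diff, wavelength, flips):
    """Net phase difference between the two reflected rays:
    pi per phase-reversing reflection, plus 2*pi*(path diff)/lambda."""
    return flips * math.pi + 2 * math.pi * path_diff / wavelength

def interference(phase):
    """Constructive near even multiples of pi, destructive near odd."""
    return "constructive" if round(phase / math.pi) % 2 == 0 else "destructive"

# Soap film in air, normal incidence: one flip (top surface only),
# path difference 2 n t.  As t -> 0 the net phase -> pi: destructive.
```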
"Weird Science"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Weird Science"
by Jess H. Brewer
on 20030119:
"Vector Fields"
by Jess H. Brewer
on 20030122:
"The force that would be there"  Electric Field
Vector fields and visualization.
Simple problems with point charges. Superposition of
electric fields from different sources  just add 'em up (vectorially)!
Not so simple problems: continuous charge distributions.
Example: the electric field on axis due to a
ring of charge can only be calculated by
"brute force" integrating Coulomb's Law. Fortunately it is quite
easy, as long as we stay on the axis where transverse components
cancel by symmetry.
Slightly harder: the electric field on axis due to a
disc of charge is the sum of the fields from
all the little rings that make up the disc.
Always check that the result you calculate behaves as expected
(namely, Coulomb's Law) as you get so far away from the charged
object that it looks like a point charge.
"UBC Physics 473"
Course
"UBC Physics 107 [Fall 2004]"
Course
"Emergence of Mechanics"
Topic
Found 1 Lectures on Mon 07 Oct 2024.
"Mathematics as the Agent of Emergence"
by Jess H. Brewer
on 20031216:
"Introduction"
Topic
Found 1 Lectures on Mon 07 Oct 2024.
""
by junaid
on 20060601:
"Rigid Body Motion"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Moments of Inertia"
by Jess H. Brewer
on 20041006:
Definition of a Rigid Body: a system of particles in which
the distance between any two particles is held fixed.
Possible motions of a rigid body:
- Translational motion of the centre of mass (CM) as if
all the mass were located there and all external forces acted thereon.
- Rotational motion about an axis through the CM.
Of course one can always have rotation about some other axis
not through the CM, but then one has to take into account
the rotational motion of the CM as well. More on this later.
Inertial factors: we are used to m being a fixed,
scalar property of a particle, determining both how much F
it takes to produce any a and how much p
we get for a given v. In the same way, there is a
measure of rotational inertia called the moment of inertia
I_{A} about axis A that tells us how much
torque it takes to produce a given angular acceleration
and how much angular momentum we get for a given
angular velocity.
Show for an arbitrary axis through an arbitrary rigid body that the
moment of inertia about that axis is the sum (integral) of the
square of the perpendicular distance from the axis
times the element of mass at that distance.
Thus the moment of inertia of a hoop of mass M
and radius R about a perpendicular axis through its centre
is I_{CM} = MR^{2}.
The same goes for a cylindrical shell.
More examples on Friday.
"Rockin' Rolling"
by Jess H. Brewer
on 20041008:
What about the moment of inertia about some other axis?
Parallel Axis Theorem
If we know the moment of inertia I_{CM}
of an arbitrary rigid body about an axis through its CM,
it can easily be shown that the moment of inertia about
a different axis A parallel to the CM axis but a
perpendicular distance h away from it is given by
I_{A} = I_{CM}
+ M h^{2}.
Perpendicular Axis Theorem
(Applies only to thin flat plates.)
Pick any point on the plate; draw the z axis
through that point perpendicular to the plate
and the x and y axes in the plane of the plate.
The distance r of a given mass element
from the z axis obeys r^{2} =
x^{2} + y^{2}, where
x and y are its distances from the
y and x axes, respectively. Thus
I_{z} = I_{x}
+ I_{y}.
Some Examples
A thin Rod of length L
- about a perpendicular axis through its CM:
A simple integration over the distance away from the centre gives
I_{CM} =
(1/12)M L^{2}.
- about a perpendicular axis through its end:
We can either do an even simpler integral or use the Parallel
Axis Theorem to get I_{end} =
(1/3)M L^{2}.
A Rectangular Plate of length L_{x}
and width L_{y}
Here the integrals for I_{x} and I_{y}
are the same as for a thin rod: I_{x} =
(1/12)M L_{x}^{2} and I_{y} =
(1/12)M L_{y}^{2}.
Applying the Perpendicular Axis Theorem gives
I_{z} = (1/12)M
(L_{x}^{2} + L_{y}^{2}).
A uniform Disc
...is a collection of hoops, each with
its own radius r and its own mass (proportional to r).
Since moments of inertia add, we just sum (integrate) over all the
hoops to get (with only a few steps)
I_{CM} = (1/2)M R^{2}.
A Spherical Shell
...can also be built up out of hoops, each of which is centred on
the same perpendicular axis through its centre. The straightforward
integration gives
I_{CM} = (2/3)M R^{2}.
A Solid Sphere
...can be built up out of hoops or discs, each of which is again
centred on the same perpendicular axis through its centre.
The slightly more challenging integration gives
I_{CM} = (2/5)M R^{2}.
Each of these takes a little while to calculate, and the
only difference between them is the numerical factor out in front
of the M R^{2} or M L^{2}
or whatever.
Although you can do the calculation yourself using only simple
integrations and the two Theorems described above, this is one of
those few cases where it is a good idea to just
memorize the numerical factors that go with the different
common shapes, to save yourself time and energy on homework
and exams.
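Those memorizable factors, together with the Parallel Axis Theorem, fit in a small table. In the check at the bottom, moving the rod's axis from centre to end (h = L/2) must turn the 1/12 factor into 1/3; the mass and length are arbitrary:

```python
# Numerical factors k in I = k M R^2 (or k M L^2) for the shapes above.
FACTORS = {
    "hoop": 1.0,
    "cylindrical shell": 1.0,
    "disc": 1 / 2,
    "spherical shell": 2 / 3,
    "solid sphere": 2 / 5,
    "rod (centre)": 1 / 12,
    "rod (end)": 1 / 3,
}

def parallel_axis(i_cm, mass, h):
    """Parallel Axis Theorem: I_A = I_CM + M h^2."""
    return i_cm + mass * h * h

M, L = 2.0, 3.0  # arbitrary mass and length
i_end = parallel_axis(FACTORS["rod (centre)"] * M * L**2, M, L / 2)
# i_end equals (1/3) M L^2, the "rod (end)" entry.
```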
Kinetic Energy of Rotation
An easy derivation gives
K = (1/2)M V^{2} + (1/2)I_{CM} ω^{2}
where V is the velocity of the CM.
Rolling Motion of a Wheel
Demonstration of two apparently identical cylinders
rolling down an inclined plane: one reaches the bottom significantly
ahead of the other. Why?
(Several theories proposed; vote taken on which was wrong.)
Explanation:
In general, the angular motion is independent of the
translational motion. But in the case of rolling
without slipping, the position on the surface
is locked to the angle through which the wheel has
turned, and so likewise the speed parallel to the plane and the
angular velocity of rolling:
v = Rω.
Applying this to the net kinetic energy, which must equal the
gravitational potential energy lost as the wheel rolls downhill,
we find that the smaller the moment of inertia per unit mass,
the larger the velocity at the bottom of the slope.
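Carrying out that energy balance for a shape with I_CM = βMR^{2} rolling without slipping gives Mgh = (1/2)Mv^{2} + (1/2)βMR^{2}(v/R)^{2}, i.e. v = sqrt(2gh / (1 + β)). A sketch of the race (drop height chosen arbitrarily):

```python
import math

def rolling_speed(height, beta, g=9.8):
    """Speed at the bottom of a drop `height` for a shape with
    I_CM = beta*M*R^2, rolling without slipping:
    M g h = (1/2) M v^2 (1 + beta)  =>  v = sqrt(2 g h / (1 + beta))."""
    return math.sqrt(2 * g * height / (1 + beta))

# Identical-looking cylinders: the solid one (beta = 1/2) beats the
# hollow shell (beta = 1) because less of its energy goes into spinning.
v_solid = rolling_speed(1.0, 0.5)
v_shell = rolling_speed(1.0, 1.0)
```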
"UBC Physics 108 [Spring 2005]"
Course
"DC Circuits"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"Batteries, Resistors and DC Circuits"
by Jess H. Brewer
on 20050209:
We have explained why the "voltage drop" across a capacitor is
Q/C. Think of the capacitor as a rubber balloon which you can fill with a charged "fluid"; the more fluid you inject, the harder it tries to squirt it back out. The next circuit element to consider is the battery, whose voltage "drop" is +V_{o}. You can think of the battery as a reservoir full of charged fluid that is stored at higher elevation and therefore gives you constant pressure in the pipes. Which brings us to the next circuit element: the "pipe" through which the fluid must flow against friction; this is the resistor. The voltage drop for a resistor is IR. This can be understood in microscopic detail, but for now let's lean heavily on analogy:
Adding Resistances: in series (just add 'em up!) and in parallel (add inverses to get equivalent inverse).
Kirchhoff's Rules:
- Charge is conserved. Whatever current goes into a junction must also come out. Net charge on any isolated part of the circuit is zero.
- Potential is single valued. A trip around any closed loop must get you back to the same voltage you started with. ("Sum of the voltage drops equals zero.")
Now you understand both C and R. Let's put them together.
RC is a Time Constant
Consider that R is measured in Ohms = Volts/Amp = Volt-sec/Coul whereas C is measured in Coul/Volt; thus RC is measured in seconds. This simple dimensional analysis should make you suspect that there is a time constant that grows with both C (charge capacity) and R (resistance to the flow of charge).
Discharging a Capacitor Through a Resistor
Start with an open circuit with a charged capacitor connected to a resistor; now close the switch. What happens? Well, basically, the charge is going to bleed off the capacitor through the resistor. The bigger the capacitor, the more charge there is to bleed off (for a given initial voltage drop across the capacitor) and the bigger the resistor, the slower it bleeds. The voltage on the capacitor is proportional to its remaining charge, and the voltage drop across the resistance (proportional to the current, which is the rate of change of the charge on the capacitor) is exactly balanced by the voltage on the capacitor (because potential is single-valued, see Kirchhoff's Rule #2). So you pretty much know just what to expect; it is just like a projectile slowing down under viscous drag: exponential decay with a mean lifetime τ = RC. Now do it mathematically: solve the differential equation for Q(t).
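Here is that mathematical check done numerically: integrate dQ/dt = -Q/RC in small Euler steps and compare with the closed form Q(t) = Q_0 e^{-t/RC}. The component values are arbitrary (they give τ = RC = 1 ms):

```python
import math

def discharge(q0, r, c, t, dt=1e-6):
    """Euler-step the discharge equation dQ/dt = -Q/(R C)
    starting from charge q0, for a time t."""
    q = q0
    for _ in range(int(t / dt)):
        q -= q / (r * c) * dt
    return q

R, C, Q0 = 1e3, 1e-6, 1.0            # tau = R*C = 1 ms
q_num = discharge(Q0, R, C, 1e-3)    # one time constant later
q_exact = Q0 * math.exp(-1.0)        # closed form: Q0/e, about 0.368 Q0
```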
Charging Up a Capacitor Through a Resistor with a Battery
Start with the capacitor uncharged and then apply a constant voltage through a resistor. What happens?
"Resistance is Futile!"
by Jess H. Brewer
on 20050210:
Drude Theory
Ski Slope Analogy: Idealized skis: frictionless. Idealized skiers: indestructible morons. Idealized collisions: instantaneous, perfectly inelastic. Drift velocity v_{d} = acceleration (eE/m) x mean time between scattering collisions τ.
Problem: how can τ be independent of the "steepness of the slope"? Short excursion into Quantum Mechanics and Fermi-Dirac statistics: electrons are fermions (half-integer spin) so no two electrons can be in the same state. Thus the lowest energy states are all full and the last electrons to go into a metal occupy states with huge kinetic energies (for an electron) comparable to 1 eV or 10,000 K. Only the electrons at this "Fermi level" can change their states, so only they count in conduction. So our ideal skiers actually have rocket-propelled skis and are randomly slamming into trees (and each other) at orbital velocities (we will neglect the problems of air friction); the tiny accumulated drift downhill is imperceptible but it accounts for all conduction.
J = "flux" of charge = current per unit perpendicular area (show a "slab" of drifting charge) so J = n e v_{d}, where n is number of charge carriers per unit volume. (For Cu, n is about 10^{29} m^{3}.)
Ohm's Law
J = σE, defining the conductivity σ = n e^{2}τ/m, measured in Siemens per metre (S/m) if you like SI units (1 S = 1 A/V). I don't. For Cu, σ is around 10^{8} S/m. Putting this together with n, e and m_{e} = 9 x 10^{-31} kg, we get τ ~ 10^{-13} s. At v_{F} ~ 10^{6} m/s this implies a mean free path λ ~ 10^{-7} m. Compare lattice spacing ~ 10^{-10} m. The drift velocity v_{d} is only ~ 10^{-3} m/s.
In semiconductors, n is a factor of 10^{7} smaller and v_{d} is a factor of 10^{7} larger, almost as big as v_{F}! So in some very pure semiconductors transport is almost "ballistic", especially when the size of the device is less than the mean free path λ.
Briefly discuss superconductors.
The inverse of the conductivity σ is the resistivity ρ = 1/σ, measured in Ohm-metres (1 Ohm = 1 V/A).
Use a cartoon of a cylindrical resistor of length L and cross-sectional area A as a reminder of how this works, giving R = ρL/A and the familiar V = I R.
"RC with AC & Introduction to B"
by Jess H. Brewer
on 20050216:
See the PDF file on AC RC Circuits for all the details I covered in class (and then some).
Introduction to the Magnetic Field: Why do we bother to define such a thing, rather than just looking at the direct force law? Well, have you ever looked at the direct force law? It's too hard! Instead we postulate some preexisting magnetic field B (Who knows where it came from? None of our business, for now.) and ask, "How does it affect a moving charged particle?" The answer (SWOP) is the Lorentz Force:
F = Q(E + v x B)
where v is the vector velocity of the particle, "x" denotes a cross product (review this!) and we have thrown in the Coulomb force due to an electrostatic field E just to make the equation complete. Note that a bunch of charged particles flowing through a short piece of wire (what we call a current element I dl) is interchangeable with a single moving charge Qv.
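A minimal numerical sketch of the Lorentz force; the charge, velocity and field values below are invented for illustration:

```python
# Lorentz force F = Q(E + v x B) for a single moving charge.
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(Q, E, v, B):
    vxB = cross(v, B)
    return tuple(Q * (Ei + vxBi) for Ei, vxBi in zip(E, vxB))

# A proton (Q = 1.6e-19 C) moving along +x through a field along +z:
F = lorentz_force(1.6e-19, (0, 0, 0), (1e5, 0, 0), (0, 0, 1.0))
print(F)   # the force points along -y, since x-hat cross z-hat = -(y-hat)
```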
Some Disturbing Examples
Consider two like charges at rest with respect to each other: the force of each on the other is repulsive, right? Now fly over them in a jet plane and look down on them: in your frame they are moving parallel to each other; this gives a small attractive force. If you fly by in the Enterprise instead, you can go much faster; eventually the Coulomb repulsion can be overcome by the magnetic part of the Lorentz force. But surely the particles are either attracted or repelled, not both! Wait! It gets worse! Now try visualizing the forces between two charged particles moving at right angles to each other. What happened to Newton's Third Law?! This conundrum is only resolved by the relativistic transformations of E and B. (Stay tuned . . . . )
Realistic Numbers
In the lab one can "easily" make a field of 1 T (about 20,000 times the Earth's field here). A wire 1 m long carrying 1 A of current will "feel" a net sideways force of only 1 N. Motors etc. need lots of "turns" of wire to make a decent-sized torque.
"Diffraction"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"An Infinite Number of Slits?"
by Jess H. Brewer
on 20050326:
Features of Single Slit Diffraction
- At the central maximum (θ = α = 0) one sees the full I_{0}. (Use l'Hospital's rule on (sin x)/x as x goes to zero.)
- The intensity goes to zero at any nonzero α for which sin α = 0, i.e. α equal to any integer multiple of π. The first minimum of the diffraction pattern occurs when α = π, which in turn implies a sin θ_{1} = λ.
- The secondary maxima of the diffraction pattern can be found by setting the derivative of I with respect to α equal to zero (condition for an extremum), giving α = tan α.
This transcendental equation can be solved by plotting both x and tan x on the same graph and looking for intersections. Don't go looking for an analytical solution.
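If you'd rather let a computer find the intersection than read it off a graph, a simple bisection works; this sketch brackets the first root of α = tan α above π:

```python
import math

# Locate the first secondary maximum of single-slit diffraction by
# solving the transcendental equation alpha = tan(alpha) numerically
# (simple bisection; there is no analytic solution).
def f(a):
    return a - math.tan(a)

lo, hi = 3.2, 4.6   # bracket inside (pi, 3*pi/2), avoiding the tan pole
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

alpha1 = 0.5 * (lo + hi)
print(f"first secondary maximum at alpha = {alpha1:.4f}")   # ~ 4.4934
```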
Circular Apertures
We've been talking about "slits" as if all diffraction problems were one-dimensional. In reality, the most common type of aperture is circular: telescopes, laser cannons and the pupil of your eye. The following hand-waving logic is not a proof, but a plausibility argument: the narrower the slit, the wider the diffraction pattern. Picture a circular aperture as a square aperture with the "corners chopped off": on average, it is narrower than the original square whose side was equal to the circle's diameter. Thus you would expect it to produce a wider diffraction pattern. It does! The numerical difference is a factor of 1.22:
a sin θ_{1} = 1.22 λ.
"Dispersion"
by Jess H. Brewer
on 20050404:
Dispersion
The wavelength (colour) dependence of the interference pattern from a grating determines how useful it will be for resolving sharp "lines" (light of specific wavelengths) in a mixed spectrum. The rate of change of the angle of the m^{th} principal maximum with respect to the wavelength is called the dispersion D_{m} of the grating. This is easily shown to have the value D_{m} = dθ_{m}/dλ = m/(d cos θ_{m}).
Note that there is a different dispersion for each principal maximum. Which m values will give bigger dispersions? Why does this "improvement" eventually have diminishing returns?
Resolving Power
A separate question is: how close together (in wavelength, Δλ) can two colours be and still be resolved by the grating? Well, the two lines will just be resolved when the m^{th} order principal maximum of one falls on top of the first minimum beyond the m^{th} order principal maximum of the other.
By requiring the path length difference between adjacent slits to differ (for the two colours) by λ/N (where N is the number of slits) we ensure that the phasor diagram for the second colour will just close (giving a minimum) when that of the first colour is a principal maximum. This gives a resolving power
R_{m} = λ/Δλ = m N .
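A quick numerical check of both quantities, using a hypothetical grating (1000 lines/mm, 10,000 illuminated slits) and the sodium D line:

```python
import math

# Dispersion and resolving power of a diffraction grating.
# The grating parameters below are made-up but typical.
d = 1e-6            # slit spacing: a 1000 lines/mm grating [m]
N = 10000           # number of illuminated slits
lam = 589.0e-9      # sodium D line wavelength [m]
m = 1               # first order

theta_m = math.asin(m * lam / d)    # from d sin(theta_m) = m lam
D_m = m / (d * math.cos(theta_m))   # dispersion [rad/m]
R_m = m * N                         # resolving power lam / dlam
dlam = lam / R_m                    # smallest resolvable wavelength split [m]

print(f"theta_1 = {math.degrees(theta_m):.1f} deg")
print(f"D_1 = {D_m:.3e} rad/m, R_1 = {R_m}, dlam = {dlam*1e9:.4f} nm")
```

With these numbers the sodium doublet (split by about 0.6 nm) is resolved with room to spare.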
"Electrostatics"
Topic
Found 5 Lectures on Mon 07 Oct 2024.
"The Electric Field"
by Jess H. Brewer
on 20050119:
"The force that would be there"  Electric Field Vector fields and visualization.
Simple problems with point charges. Superposition of electric fields from different sources: just add 'em up (vectorially)!
Not so simple problems: continuous charge distributions.
Example: the electric field on axis due to a ring of charge can only be calculated by "brute force" integrating Coulomb's Law. Fortunately it is quite easy, as long as we stay on the axis where transverse components cancel by symmetry.
Slightly harder: the electric field on axis due to a disc of charge is the sum of the fields from all the little rings that make up the disc.
Always check that the result you calculate behaves as expected (namely, Coulomb's Law) as you get so far away from the charged object that it looks like a point charge.
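That far-away check can be automated. This sketch uses the standard closed form of the on-axis ring-of-charge integral, E(z) = k Q z/(z^2 + R^2)^{3/2}, with made-up values of Q and R, and compares it to Coulomb's law at z = 100 R:

```python
# On-axis field of a ring of charge, and a check that it approaches
# Coulomb's law k Q / z^2 far from the ring.
k = 8.99e9          # Coulomb constant [N m^2 / C^2]
Q = 1e-9            # total charge on the ring [C] (made-up number)
R = 0.05            # ring radius [m] (made-up number)

def E_ring(z):
    return k * Q * z / (z**2 + R**2) ** 1.5

def E_point(z):
    return k * Q / z**2

z_far = 100 * R     # far enough that the ring looks like a point charge
ratio = E_ring(z_far) / E_point(z_far)
print(f"E_ring/E_point at z = 100 R: {ratio:.6f}")   # very close to 1
```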
PDF or printer-friendly gzipped PostScript files
"Doing Electrostatics the Hard Way"
by Jess H. Brewer
on 20050124:
Calculate the torque on an electric dipole in a uniform external electric field. From that, calculate the potential energy of the same dipole in the same field, as a function of its orientation. Then move on to a hard (but not impossible) problem: the electric field due to a Finite Rod of Charge. See also the usual PDF and printer-friendly gzipped PostScript files.
"Electrostatic Potential"
by Jess H. Brewer
on 20050131:
Electrostatic Potential
Notation: I will use V here instead of φ ["phi"] (chosen in class) because HTML still has no Greek letters except "µ". I can get away with this on the computer because the math symbol V (for potential) is italicized while the abbreviation "V" (for "Volts") is not; so I can write "V = 4 V" without ambiguity.
In principle, it's easier to find E from V (using E = −∇V) than vice versa, because it's a lot easier to integrate up a scalar function than a vector one! (And derivatives are easy, right?) However, in practice (at the level of P108) we are not going to be evaluating arbitrary, asymmetric charge distributions, but only the simple symmetric shapes and combinations thereof (using the principle of additive superposition). In these cases Gauss' Law allows us to find E easily and then find V by simple integrations; so that's mostly what we do.
Examples
- Potential of a point charge Q (or outside any spherically symmetric charge distribution): V(r) = k_{E}Q r^{-1}. Note the convention of letting V be zero at infinite r.
- Potential difference between two concentric spheres: easy & obvious.
- Potential of a cylinder: impossible to choose a radius at which V = 0. This is because you can't actually have an infinite line of charge without having an infinite charge.
- Potential difference between two concentric cylinders: the integral of r^{-1}dr from a to b is ln(b/a).
- Potential of a plane: V = −E d, if we take V = 0 at the plane.
- Potential difference between two parallel planes: E d again.
(Note: to get the sign right, always check your result against common sense: for a unit [positive] test charge, "uphill" is from negatively charged regions to positively charged regions, so the potential should increase in that direction.)
"Capacitance"
by Jess H. Brewer
on 20050204:
Thinking of an isolated conductor as a capacitor is very irregular. We usually think of a capacitor as two conductors with equal and opposite charges, and calculate the capacitance "between" them. Consider for example two perpendicular wires that don't touch. Even if the wires are infinite, their mutual capacitance is finite. (This would be a real challenge to calculate!) I offer a $10 prize to anyone who correctly calculates the capacitance. An extra $5 for the charge distribution along each wire.
Dielectric Materialism (Ch. 29)
Basically just replace ε_{0} by ε = κε_{0} (where κ is the dielectric constant, a pure number always ≥ 1) and everything takes care of itself. Thus C always gets bigger (by a factor of κ) when there is a dielectric in between the plates. [Explain.] For isotropic, cylindrical and planar geometries, show how the potential is calculated from the electric field and how the capacitance is in turn calculated from that. See PDF or printer-friendly gzipped PostScript files.
"Electrostatic Springs and Energy Storage"
by Jess H. Brewer
on 20050205:
Capacitor as an Electrostatic "Spring"
If you like you can think of 1/C as a sort of "electrical spring constant": if you move Q away from its equilibrium value (zero) you get a "linear restoring voltage".
Arrays of Capacitors
An arbitrary network of capacitors can always be replaced by a single equivalent capacitor. An array of capacitors in parallel has an equivalent capacitance equal to the sum of their separate capacitances. [Explain.]
An array of capacitors in series has an equivalent inverse capacitance equal to the sum of their separate inverse capacitances. [Explain.]
Electrostatic Energy Storage
Recall the question at the beginning: why isn't a big capacitor a good replacement for a battery? Because the voltage decreases with the remaining charge! This has other implications as well....
The energy required to put a charge Q on a capacitor C is not just VQ! The first bit of charge goes on at zero voltage (no work) and the voltage (work per unit charge added) increases linearly with Q as the charge piles up: V = (1/C) Q. Thus dU = (1/C) Q dQ. Integrating yields U = (1/2C) Q^{2} or U = (1/2)C V^{2}.
For a parallel plate capacitor, V = E d and C = εA/d. Thus U = (1/2)εAE^{2} d. But A d is the volume of the interior of the capacitor (the only place where the electric field is nonzero). Thus if u is defined to be the energy density per unit volume, then we have u = (1/2)εE^{2}. "It turns out" that this prescription is completely general! Wherever there is an electric field, energy is stored at a density u given by the formula above.
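A sanity check that the two bookkeepings agree, using invented plate dimensions and voltage (vacuum between the plates):

```python
# Energy stored in a parallel-plate capacitor two ways: U = (1/2) C V^2
# and U = u * volume with u = (1/2) eps0 E^2. They must agree.
eps0 = 8.85e-12     # permittivity of free space [F/m]
A = 0.01            # plate area [m^2] (made-up)
gap = 1e-3          # plate separation [m] (made-up)
V = 100.0           # applied voltage [V] (made-up)

C = eps0 * A / gap
U_circuit = 0.5 * C * V**2          # from circuit quantities

E = V / gap                         # uniform field between the plates
u = 0.5 * eps0 * E**2               # energy density [J/m^3]
U_field = u * A * gap               # density times interior volume

print(f"U from C: {U_circuit:.3e} J, U from field energy: {U_field:.3e} J")
```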
It is now getting really tempting to think of E as something "real", not just a mathematical abstraction.
"Elementary Particles"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Small Stuff"
by Jess H. Brewer
on 20050408:
I gave the cartoon version in class; colorization of the images is not yet complete, but you can download the PDF or printer-friendly gzipped PostScript version and see if it has been updated.
"EXAM"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"First Midterm"
by Jess H. Brewer
on 20050209:
"Second Midterm"
by Jess H. Brewer
on 20050313:
"Final Exam"
by Jess H. Brewer
on 20050408:
"Faraday & Inductance"
Topic
Found 0 Lectures on Mon 07 Oct 2024.
"Gauss' Law"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Conservation, Flux & Symmetry"
by Jess H. Brewer
on 20050126:
Divisible Conservation
Without some quantity (real or imagined) that is (a) divisible into smaller parts and (b) conserved overall, Gauss' Law has no meaning. Some examples would be the amount of water in a river, the amount of energy in a closed system, the number of dollars in circulation (neglecting reissues, recalls, counterfeiting, lighting cigars with $20 bills, bills used as bookmarks and forgotten, and of course ignoring the real value of a dollar), the amount of light emitted by a source, the smell emitted by a rabbit or the electric "field lines" emitted by a positive charge.
In each case, any "source" or "sink" of the conserved "stuff" has to be accounted for explicitly; the "stuff" does not just disappear or appear out of nowhere without a good reason. But (in order to be interesting in this context) the "stuff" does have to be divisible, so that we don't just have one clump of it rattling around. So a quantized thing like the charge on an electron, while conserved, would not be a good candidate; there is a version of Gauss' Law for charge itself, but in Classical Electrodynamics we always deal with such large numbers of electrons etc. that we can get away with pretending that charge is an infinitely subdividable "fluid", just the way we do with water and air.
Flux
Next we need some sort of distributed "motion". If everything is static there is no need for Gauss' Law. The water flows, the energy is exchanged, the money changes hands in commerce, the rabbit smell drifts in the air, the light travels away from its source at c, and the electric field "lines" may be thought of (quite accurately, it turns out) as "rays" of zero-frequency light, also moving away from the positive-charge source (or falling into the "sink" of a negative charge) at the speed of light. This is a nontrivial thing to visualize, as the "stuff" can be everywhere in different amounts, moving in different directions at different speeds. We have to invent the notion of flux of "stuff": a space-filling vector field that has at each point in space both direction and magnitude (usually in units of "stuff" per unit time per unit perpendicular area). I use a "directed tennis racket" to illustrate how flux can be measured at a given position; this requires an additional new idea: the directed area element.
Gauss' Law
We are now ready to state Gauss' Law in its most primitive form: when some "stuff" leaves a region, there is that much less "stuff" in that region. Doh! Conversely, when some "stuff" enters a region, there is that much more "stuff" in that region. Well, duh! But that's all there is to it. We can use elegant mathematical language to express the same simple idea, but that doesn't make it any less simple. Remember that.
Spreading Out = Thinning Out
Here's the part any good hunting dog understands: if the "stuff" (in this case, rabbit smell, whose flux we can think of as "lines of rabbit") is coming from a localized source and spreading out into a larger region, then because the total flux of stuff out through any closed surface is constant (conserved), the flux has to get "thinner" (the smell has to get fainter) as you get further away from the rabbit. This allows a simple routine that consistently brings you closer and closer to the rabbit. [Describe.]
Symmetry
OK, this works. But can we use it to calculate electric fields due to collections of charges? We may be able to say lots of qualitative things just from Homer Simpson logic, but if we want precise, quantitative results we can do nothing with Gauss' Law unless we have symmetry on our side. If the electric field is different everywhere on the Gaussian surface in question, and passes through the surface at various angles in different places, we have not gained a thing by stating the laws of electrostatics in this form. We need to be able to define a simple surface of familiar geometry on which the electric field must be both constant and everywhere normal to the surface before Gauss' Law is going to do us a bit of good. But if we can find such a surface, that big scary-looking surface integral is just E times the total area A through which E passes at normal incidence. In that case we can write Gauss' Law for Electrostatics in the simple form
A D = Q_{enclosed}
where for additional simplicity we have defined D = ε_{0}E.
"Gauss' Law in Action"
by Jess H. Brewer
on 20050128:
Continue discussion of cylindrical symmetry. Then do planar symmetry. If time permits, begin discussion of conductors.
"Interference"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"Adding Amplitudes"
by Jess H. Brewer
on 20050322:
Linear Superposition (Adding Amplitudes)
The most remarkable feature of a "linear medium" (including vacuum, in the case of electromagnetic waves) is that the amplitude of one wave and that of another are independent of each other: waves can "pass through" each other without scattering; they just keep going and come out the other side as if the other wave hadn't been there! Moreover, while they are passing through the same region, their amplitudes simply add together, so that when their "crests" or "troughs" coincide, the net effect is a bigger wave, but when the "crests" of one wave coincide with the "troughs" of another, the net effect is a cancellation. This can lead to very complicated behaviour called "interference". Our goal is to find simple tricks to make it seem less complicated, but one should never lose sight of the beautiful, intricate patterns created by interference.
Standing Waves: The most familiar example to players of stringed instruments is probably the case of two waves of equal amplitude, wavelength and frequency propagating in opposite directions (which can be represented mathematically by giving either k or ω [but not both] opposite signs for the two waves). In this case we get a wave which no longer "travels" but simply "oscillates in place", with nodes where no motion ever occurs. The "particle in a box" example shares with the closed organ pipe and the guitar string the feature that there must be nodes at the ends of the box/pipe/string, a feature that forces quantization of modes even for classical waves.
Beats  Interference in Time: If two waves pass the same location in space (your ear, for instance) with slightly different frequencies then they drift slowly into and out of phase, resulting in a sound of the average frequency whose average amplitude (or its square, the intensity) oscillates at a frequency equal to the difference between the two original frequencies. This is a handy method for tuning guitar strings: as their frequencies of vibration get closer together, the beat frequency gets slower, until it disappears entirely when they are exactly in tune.
Interference in Space: This applies only for waves with the same frequency. Consider two waves of equal amplitude: If one wave is consistently "up" when the other is "down" (i.e. they are "180^{o} out of phase") then the resultant amplitude at that position is zero. This is called "destructive interference". If they are both "up" (or "down") at the same time in the same place, that's "constructive interference".
Thin Films: Assuming normal incidence, add together the "rays" reflected from both surfaces of the film. Remember the π phase change at any reflection from a denser medium. Then add in the phase difference δ = 2π(Δ/λ) due to the path length difference Δ, and you have the net phase difference between the two reflected waves. When this is an integer multiple of 2π you have constructive interference. When it is an odd multiple of π, you have destructive interference. That's really the whole story.
Examples: the "quarter wave plate" and the soap film. Oil on water and the fish poem.
"Two Slit Interference"
by Jess H. Brewer
on 20050322:
Huygens' Principle
[paraphrased] "Every point on an advancing wave front may be considered a source of outgoing spherical waves." The concept of a "wave front" is a little vague, of course; you can think of it as the "crests" of waves if you are visualizing waves in water or on stretched strings, but in 3 dimensional waves a "crest" corresponds to a locus of maximum (positive) amplitude of the wave. In general any locus of fixed phase will do just as well, as long as you use the same fixed phase (plus 2) to defined the adjacent "wave front". Naturally we make no attempt to draw 3D spherical waves on a flat page; all the 2D pictures are meant only as "conceptual shorthand". This will be even more abstract as we start drawing "phasor" diagrams. Be sure to review your trigonometry  we'll be using it!
Interference from TWO SLITS
(Young's experiment modernized) The "near field" intensity pattern (where "rays" from the two sources, meeting at a common point, are not even approximately parallel) is difficult to calculate, though it is easy enough to describe how the calculation could be done. We will stay away from this region; far away, in fact, so that all the interfering rays may be considered parallel. Then it gets easy!
A simplified sketch assuming incident waves hitting the barrier in phase (i.e. normal incidence) shows an obvious path length difference of Δ = d sin θ between the waves heading out from the two slits at that angle. If this path length difference is an integer multiple of the wavelength, we get constructive interference. This defines the n^{th} Principal Maximum (PM):
d sin θ_{n} = n λ
Often we are looking at the position of interference maxima on a distant screen and we want to describe the position x of the n^{th} PM on the screen rather than the angle θ_{n} from the normal direction. We always define x = 0 to be the position of the central maximum (CM), i.e. θ = 0. If the distance L from the slits to the screen is >> d (the distance between the slits), as it almost always is, then we can use the small angle approximations sin θ ≈ tan θ ≈ θ, so that θ_{n} ≈ n λ/d and x_{n} = L tan θ_{n} ≈ L θ_{n}, giving x_{n} ≈ n L λ/d. Be sure you can do calculations like these yourself. Such problems are almost always on the final exam.
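These are exactly the calculations to practice. Here is a sketch with typical classroom numbers (not from the lecture) comparing the exact fringe positions x_n = L tan(theta_n) with the small-angle shortcut x_n ~ n L lam / d:

```python
import math

# Two-slit fringe positions on a distant screen, exact vs. small-angle.
# The numbers below are typical classroom values, not from the lecture.
lam = 633e-9        # He-Ne laser wavelength [m]
d = 0.25e-3         # slit separation [m]
L = 2.0             # slits-to-screen distance [m]

def x_exact(n):
    theta = math.asin(n * lam / d)      # from d sin(theta_n) = n lam
    return L * math.tan(theta)

def x_small_angle(n):
    return n * L * lam / d

for n in (1, 2, 3):
    print(f"n={n}: exact {x_exact(n)*1e3:.4f} mm, "
          f"small-angle {x_small_angle(n)*1e3:.4f} mm")
```

Since L >> d here, the two columns agree to better than a micron.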
Time permitting, I will start on Multiple Slit Interference. The handout covers this in detail; if I don't cover it today, be sure to study the handout over the weekend!
"Multiple Slit Interference"
by Jess H. Brewer
on 20050326:
We now take you to a world beyond time and space, a world of pure mathematics where what you see are wave amplitudes and phases of different rays of a coherent wave with a given frequency and wavelength, interfering to make a combined amplitude: the world of The Phasor Zone. (Dewdewdewdew, dewdewdewdew, DEWdiddlyewdew...) In this abstract world each wave is seen as an amplitude A_{i} pointing away from some origin at a phase angle φ_{i} in "phase space": a phasor. All the phasors representing different wave amplitudes are "precessing" about the origin at a common angular frequency ω (the actual frequency of the waves) but their phase differences do not change with time. Thus we can pick one wave arbitrarily to have zero phase and "freeze frame" to show the angular orientations (and lengths) of all the others relative to it.
Phasors are vectors (albeit in a weird space) and so if they are to be added linearly we can construct a diagram for the resultant by drawing all the amplitudes "tiptotail" as for any vector addition. If there are any configurations that "close the polygon" (i.e. bring the tip of the last phasor right back to the tail of the first) then the net amplitude is zero and we have perfect destructive interference!
For an idealized case of N equal-amplitude waves out of phase with their neighbours by an angle δ, we will get a minimum when Nδ = n(2π), satisfying the above criterion. This is the condition for the n^{th} minimum of the N-slit interference pattern; we usually only care about the first such minimum, which occurs where Nδ = 2π.
To see where in real space that first minimum occurs, we have to go back to the origin of the phase differences due to path length differences: δ/2π = Δ/λ = d sin θ/λ, giving
d sin θ_{first min.} = λ/N .
Note that this looks a lot like the formula for principal maxima, but it describes the angular location of the first minimum. This offers a good object lesson: Never confuse a formula with its meaning! You may memorize all the formulae you like, but if you try to apply them without understanding their meanings, you are lost. Note also that the central maximum is narrower by a factor of N than the angular distance between principal maxima. This is why we build "diffraction gratings" with very large N....
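The phasor sum is easy to check with complex numbers, where each unit phasor is e^{i n δ}; this sketch confirms that N equal phasors give intensity N² at δ = 0 and close the polygon (perfect destructive interference) at δ = 2π/N:

```python
import cmath
import math

# Phasor (complex-amplitude) sum for N equal slits: slit number n
# contributes a unit phasor exp(i * n * delta).
def intensity(N, delta):
    amp = sum(cmath.exp(1j * n * delta) for n in range(N))
    return abs(amp) ** 2

N = 8
I0 = intensity(N, 0.0)                  # central maximum: N^2
I_min = intensity(N, 2 * math.pi / N)   # first minimum: polygon closes

print(f"I(0) = {I0:.1f} (= N^2), I(2 pi/N) = {I_min:.2e} (~ 0)")
```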
"Maxwell's Equations"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Maxwell's Equations"
by Jess H. Brewer
on 20050313:
Ampère's Law revisited
What happens when you run a current "through" an uncharged capacitor? Apply Ampère's Law around the wire; now extend the "open surface bounded by the closed loop" so that it passes through the gap in the capacitor without cutting any current-carrying wires. (Imagine that you are making a big soap bubble with a hoop.) Does the magnetic field around the loop suddenly disappear? I think not! Maxwell proposed a "time-varying electric flux" term, symmetric to the changing magnetic flux in Faraday's Law, to resolve this paradox. Suddenly a time-varying electric field generates a magnetic field, as well as the reverse.
So now we have Gauss' Law in two forms (integral over a closed surface vs. differential at any point in space) for E (or, better yet, for D = ε_{0}E) and for B (where it may seem trivial to express the fact that there don't seem to be any magnetic "charges" [monopoles], but in fact this is quite useful).
We have Faraday's Law also in two forms; we will only be using the integral form in this course, but you should be able to recognize the differential form.
And we have Maxwell's corrected version of Ampère's Law, which again we will be using here only in the integral form, but you should be able to recognize in either form.
These 4 Laws constitute Maxwell's Equations, which changed the world. To complete "everything you need to know about electromagnetism on one page" you should include the Lorentz Force Law (including the electric force) and the Equation of Continuity (which simply expresses the conservation of charge). That's it. Real simple "cheat sheet", eh?
From Ampère's Law applied to a specific geometry we have the first mixed time- and space-derivative equation. I will derive this today and then move on to the next equation, which comes from Faraday's Law.
"The Magnetic Field"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"I x B: the Lorentz Force"
by Jess H. Brewer
on 20050216:
UNITS
The Coulomb is defined as an Ampere-second, and an Ampere is defined as the current which, when flowing down each of two parallel wires exactly 1 m apart, produces a force per unit length of 2×10^{-7} N/m between them. No kidding, that's the official definition. I'm not making this up! To get the definition of a Tesla [T] we have to wait until the next Chapter on where magnetic fields come from, i.e. the Law of Biot & Savart.
Circulating Charges
If v is perpendicular to B, we get a very familiar situation: the force on the particle is always normal to its velocity, so it cannot change its speed; and yet it is constantly accelerated. Ring a bell? Come on, you know this: it's good ol' uniform circular motion! Solve the familiar equations (v^{2}/r = QvB/m and p = mv) to get p = Q B r
where p is the momentum and r is the radius of the orbit. "It turns out" that this relation is relativistically correct, but you needn't concern yourself with this now. Since v = rω, this means ω = QB/m, a constant angular frequency (and therefore a constant orbital period) regardless of v!
(This, unfortunately, is not relativistically correct.) Faster particles move in proportionally larger circles so that the time for a full orbit stays the same (as long as v << c). This is what makes cyclotrons possible. At TRIUMF, since v ~ c, we have to resort to an ingenious trick to compensate for relativity.
Wien Filters
If v, B and E are all mutually perpendicular, the particle will pass undeflected iff E = vB. This makes a nice velocity selector. If you also measure the radius of curvature of the same particle's path in B with no E, you know its momentum. Putting these together gives you the ratio Q/m. If you know Q (which was not so easy until Millikan's "oil drop" experiment) then you know m. This is the basis for conventional mass spectroscopy. However, the cyclotron is an even better mass spectrometer. Why?
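Putting numbers in (a proton in a 1 T field, with an invented nonrelativistic speed for the Wien-filter part):

```python
import math

# Cyclotron motion and the Wien filter, using p = Q B r and
# omega = Q B / m (nonrelativistic). A proton in a 1 T field.
Q = 1.6e-19         # proton charge [C]
m = 1.67e-27        # proton mass [kg]
B = 1.0             # magnetic field [T]

omega = Q * B / m                   # angular frequency, independent of v
f_cyc = omega / (2 * math.pi)       # orbital frequency [Hz]

v = 1e6             # sample nonrelativistic speed [m/s] (made-up)
r = m * v / (Q * B)                 # orbit radius, from p = Q B r

E_wien = v * B      # Wien filter: undeflected iff E = v B [V/m]

print(f"f = {f_cyc:.3e} Hz, r = {r*100:.2f} cm, E = {E_wien:.1e} V/m")
```

Note that f comes out around 15 MHz no matter what v is; only r and the Wien-filter E depend on the speed.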
"What B Do & Do B, Do!"
by Jess H. Brewer
on 20050223:
Biot & Savart vs. Ampère
Now that you know how to do integrals, you're expected to use them! (Dang! Ignorance is easier!)
Law of Biot & Savart
Circular Current Loop via Biot & Savart: it's too hard to calculate the field anywhere except on the axis of the loop. There (by symmetry) the field can only point along the axis, in a direction given by the RHR: curl the fingers of your right hand around the loop in the direction the current flows, and your thumb will point in the direction of the resulting magnetic field. (Sort of like the loops of B around a line of I, except here B and I have traded places.) As usual, symmetry plays the crucial role: current elements on opposite sides of the loop cancel out each other's transverse field components, but the parallel (to the axis) components all add together. As for the electrostatic field due to a ring of charge, we get the same contribution to this non-canceling axial field from each element of the ring.
"Link the Loop with Symmetry: AmpÃ¨re's Law"
by Jess H. Brewer
on 20050226:
Ampère's Law
The integral of B_{//} dl around a closed loop (where B_{//} is the component of B along the path at each element dl) is equal to µ_{0} times the net current I_{encl} linking the loop (i.e. passing through it). (Used like Gauss' Law, only with a path integral.)
Long Straight Wire via Ampère's Law: It's so easy!
Any Cylindrically Symmetric current distribution gives the same result outside the conductor; inside we get an increase of B with distance from the centre, reminiscent of Gauss' Law....
Circular Current Loop via Ampère's Law: Forget it! Ampère's Law is of no use unless you can find a path around which B is constant and parallel to the path. There is no such path here.
Torque on a Current Loop & the Magnetic Dipole Moment
It can be shown in detail that the torque on a rectangular loop of area A carrying a current I in a magnetic field B is given by the vector ("cross") product of µ with B, where µ = I A n and n is the unit vector normal to the plane of the loop (taken in the sense of the RHR for the current around the loop). Stated without proof (SWOP): the shape of the loop doesn't matter. By the same logic as for electric dipole moments in electric fields, the potential energy of the magnetic dipole in the magnetic field is minus the scalar ("dot") product of µ with B. This may be familiar from Thermal Physics.
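A numerical sketch of both results, with made-up loop and field values and the loop normal perpendicular to B (maximum torque, zero potential energy):

```python
# Magnetic dipole mu = I A n in a field B: torque = mu x B, U = -mu . B.
# The current, area and field values below are invented for illustration.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

I = 2.0                         # loop current [A]
area = 0.01                     # loop area [m^2]
n_hat = (1.0, 0.0, 0.0)         # loop normal (RHR), perpendicular to B
B = (0.0, 0.0, 0.5)             # field [T]

mu = tuple(I * area * c for c in n_hat)
torque = cross(mu, B)
U = -dot(mu, B)

print(f"mu = {mu}, torque = {torque} N m, U = {U} J")
```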
For details see PDF or printer-friendly gzipped PostScript files.
"Thermal Physics"
Topic
Found 6 Lectures on Mon 07 Oct 2024.
"First Class!"
by Jess H. Brewer
on 20050106:
I will be making an unconventional introduction to Thermal Physics, based on the microscopic approach of Statistical Mechanics which is usually withheld until later years, for reasons with which I disagree. You'll thank me later.
Much of the lecture was presented using Open Office, a free, Open Source replacement for Micro$oft Office. You are welcome to download the PDF file if you like; I will make the Open Office or PPT file available on request.
"Temperature"
by Jess H. Brewer
on 20050107:
Put two systems in thermal contact (meaning that they can exchange energy freely). What tends to happen is whatever increases the total number of possibilities  i.e. the multiplicity and therefore the entropy of the combined system.
We have to remember that the total energy U is conserved. Thus dU_{1} = −dU_{2}.
A maximum of the total entropy occurs where its rate of change with respect to U_{1} is zero.
Working this out in detail gives a definition of temperature.
We use this definition to examine the thermal behaviour of an unusual system: N spin 1/2 electrons in an applied magnetic field B. The exotic features of this system are due to its unusual feature of having a limit to the amount of energy U it can "hold". We correctly expect that the number of ways it can have that maximum energy (where all the spins are "up") is 1, so at the maximum U the entropy is zero; since it is nonzero at lower U, it must be decreasing with U for energies approaching the maximum. Thus the slope of entropy vs. energy starts positive, goes down through zero and then becomes negative. Since this is the inverse temperature, the temperature itself starts low, goes to infinity, flips to negative infinity and finally approaches zero from the negative side. What does this mean?!
Negative temperatures exist. It is easy to make them in the lab. They are hotter than positive temperatures (even hotter than infinite positive temperature!); the hottest temperature of all is "approaching zero from below". This weirdness is the result of our insistence that "hot" must mean "high temperature", requiring the definition of temperature as the inverse of the slope of entropy vs. energy. Live with it.
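The entropy-slope argument can be made concrete in a few lines. This is my own numerical sketch (N = 100 spins, energy in units where one spin flip costs 2; none of these numbers are from the lecture):

```python
import math

# N spin-1/2 moments in a field; with n spins excited, take U = 2n - N
# (units where mu*B = 1), multiplicity g = C(N, n), entropy S = ln g.
N = 100

def S(n):
    return math.log(math.comb(N, n))        # dimensionless entropy

def inv_T(n):
    # 1/T = dS/dU by central difference; dU = 2 per extra excited spin
    return (S(n + 1) - S(n - 1)) / 4.0

low, half, high = inv_T(30), inv_T(50), inv_T(70)
# low > 0 (ordinary positive T), half = 0 (infinite T), high < 0 (negative T)
```

The slope of entropy vs. energy does exactly what the lecture describes: positive at low energy, zero at the entropy maximum, negative beyond it.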
UNITS
Another silly convention we have to live with is the idea that temperature should have its own special units, "degrees" Kelvin or K. This is absurd. One look at the definition of temperature tells us that it is measured in energy units. Thus "K" is an energy unit: 1 K is equal to 1.3806505 x 10^{−23} J. OK, it's a very small energy unit, but the only reason it is not some nice round number (like 10^{−23} J) is that it was made up arbitrarily before anyone knew what temperature really was. Sorry. You'll just have to cope with it. By the way, that conversion factor is known as Boltzmann's constant, k_{B} = 1.3806505 x 10^{−23} J/K. When you talk to an engineer about entropy, you had better express it in units of k_{B} rather than as a pure number as I have defined it.
"The Boltzmann Distribution"
by Jess H. Brewer
on 20050110:
Big Reservoir and Little System
Suppose the two systems in thermal equilibrium are a huge, complex heat reservoir R and a tiny, incredibly simple system S that is so minute and trivial that we can realistically talk about which fully specified microstate "α" it is in. We are allowed (indeed, we are encouraged) to narrow our focus to just one degree of freedom of one particle (or whatever), such as the "system" consisting of the orientation of the spin of a single electron.
The energy contained in this small system S in state "α" is called ε_{α}.
We imagine that this energy was removed from the reservoir R, to make its energy U_{R} = U − ε_{α}, where U is the total energy of the combined systems (and was the energy of R before we tapped some off into S).
This process changes the entropy of R by an amount . . . well, this is more easily displayed in a PDF file or (if you want a more printer-friendly format) a gzipped PostScript file.
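The punch line of that derivation is the Boltzmann factor: the probability of microstate α is proportional to exp(−ε_{α}/T), with T in energy units as above. A minimal sketch, with a made-up two-level system as the "tiny system S":

```python
import math

def boltzmann_probs(energies, T):
    """Normalized Boltzmann probabilities; T in the same energy units as eps."""
    weights = [math.exp(-eps / T) for eps in energies]
    Z = sum(weights)                      # the partition function
    return [w / Z for w in weights]

# Hypothetical two-level system with energy gap 1 (in units of T):
p0, p1 = boltzmann_probs([0.0, 1.0], T=1.0)
# the population ratio p1/p0 is exp(-1), about 0.37
```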
"Ideal Gases: Energy and Pressure"
by Jess H. Brewer
on 20050113:
EQUIPARTITION OF ENERGY:
- Look at the Boltzmann distribution as a function of T for fixed E.
- Now look at it as a function of E for fixed T: what is the average energy if there is a uniform distribution of possible energies?
- Handwave the factor of two to get the average energy per degree of freedom (explain what is meant by a "degree of freedom").
PRESSURE:
A single particle bouncing around in a box with perfectly elastic specular collisions causes an average force on the walls of the box.
IDEAL GAS:
- Mean energy per atom (from Equipartition Theorem).
- Mean energy in a gas of N non-interacting atoms at temperature T.
- Average momentum from average energy gives average pressure at temperature T. Voilà: the Ideal Gas Law!
As usual, a more complete graphical summary is available in
PDF or gzipped PostScript format.
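The whole chain can be checked numerically. A sketch under the usual assumptions (3 translational degrees of freedom at (1/2)k_{B}T each; the mole-sized numbers are my own example):

```python
k_B = 1.380649e-23                  # Boltzmann's constant, J/K

def mean_energy_per_atom(T):
    return 1.5 * k_B * T            # equipartition: (1/2) k_B T per degree of freedom

def pressure(N, V, T):
    return N * k_B * T / V          # the Ideal Gas Law, P = N k_B T / V

# One mole in 22.4 litres at 273.15 K should give about one atmosphere:
P = pressure(6.022e23, 22.4e-3, 273.15)     # ~1.0e5 Pa
```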
"Particle in a Box"
by Jess H. Brewer
on 20050113:
Discussion of standing waves, quantization and de Broglie's Principle: λ = h/p
[For an introduction to Quantum Mechanics in the form of the script to a comical play, see The Dreams
Stuff is Made Of (Science 1, 2000).]
. . .
Discrete wavelengths, momenta and energies. Lowest possible energy
is not zero. As the box gets smaller, the energy goes up!
Handwaving reference to black holes, relativistic kinematics,
mass-energy equivalence and how the energy of confinement
can get big enough to make a black hole out of even a photon
if it is confined to a small enough region (Planck length).
For more details see PDF or printer-friendly gzipped PostScript files.
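The quantized energies can be turned into numbers with the standard particle-in-a-box result E_{n} = n^{2}h^{2}/8mL^{2} (which follows from λ_{n} = 2L/n and p = h/λ). The 1 nm box below is my own example:

```python
h = 6.626e-34                # Planck's constant (J s)
m_e = 9.11e-31               # electron mass (kg)

def E_n(n, L):
    # lambda_n = 2L/n  ->  p_n = h/lambda_n  ->  E_n = p_n^2 / 2m
    return n**2 * h**2 / (8 * m_e * L**2)

E1 = E_n(1, 1e-9)            # ground state of an electron in a 1 nm box, ~6e-20 J
# The lowest energy is not zero, and shrinking the box raises it:
ratio = E_n(1, 0.5e-9) / E1  # halving L quadruples the energy
```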
"Momentum Space"
by Jess H. Brewer
on 20050115:
Allowed states are evenly spaced in momentum but not in energy, which is what we want in our Boltzmann distribution. Since p ∝ √E, we expect ρ(E) ∝ 1/√E. (Sketch.)
Moving to 3D picture, there is one allowed state (mode) per
unit "volume" in p-space. But if what we want is the
density of states per unit magnitude of the (vector) momentum,
there is a spherical shell of "radius" p and thickness
dp containing a uniform "density" of allowed momenta whose
magnitudes are within dp of p. This shell has a
"volume" proportional to p^{2} and so the density of
allowed states per unit magnitude of p increases as
p^{2}. This changes everything!
. . .
The details are on the Momentum Space handout and in the PDF and printer-friendly gzipped PostScript files from the graphical presentation in class.
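The p^{2} growth of the density of states is easy to see by brute force. Here is my own Monte Carlo illustration (uniform sampling in a cube of p-space, arbitrary units; not from the handout):

```python
import random

random.seed(1)                      # reproducible
counts = [0, 0]                     # states near |p| = 0.2 and |p| = 0.4
for _ in range(200_000):
    px = random.uniform(-1, 1)
    py = random.uniform(-1, 1)
    pz = random.uniform(-1, 1)
    p = (px*px + py*py + pz*pz) ** 0.5
    if 0.18 < p < 0.22:
        counts[0] += 1
    elif 0.38 < p < 0.42:
        counts[1] += 1

# The shell at twice the momentum holds roughly four times as many states,
# because the shell "volume" grows as p^2:
ratio = counts[1] / counts[0]       # close to (0.4/0.2)^2 = 4
```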
You may feel
this is going too far for a First Year course, and I have considerable
sympathy for that point of view. I simply wanted you to have some
idea why the Maxwellian energy and speed distributions have those
"extra" factors of √E and
v^{2} in them (in addition to the Boltzmann factor
itself, which makes perfect sense). The textbook (perhaps wisely)
simply gives the result, which is too Aristotelian for us, right?
Rest assured that I will not ask you to reproduce any of these
manipulations on any exam. At most, I will ask a short
question to test whether you understand that one must account
not only for the probability of a given state being occupied
in thermal equilibrium (the Boltzmann factor) but also
how many such states there are per unit momentum
or energy (the density of states) when you want to find
a distribution.
"Waves"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Wave Review"
by Jess H. Brewer
on 20050315:
Today I want to spend some time reviewing the general properties and behaviour of waves. Only a few of the topics will be new, but for the rest of the course I am going to be relying on your deep and intuitive understanding of how waves behave, so I will not just rely on what you learned last term. . . . a review of sinusoidal travelling waves. . . . the linear Wave Equation has solutions that are not sinusoidal. In fact, any well-behaved function of only u = x − ct, where c is the wave's propagation velocity, will automatically satisfy the Wave Equation.
"Wave Review cont'd"
by Jess H. Brewer
on 20050318:
Today I will continue reviewing the general properties and behaviour of waves. See also the condensed version (displayed in class) as a PDF file. . . . from ~1 Hz seismic waves (wavelength ~10^{8} m) to ~10^{20} Hz gamma rays (wavelength ~10^{−12} m). We will, out of human biological chauvinism, pay most attention to the visible spectrum between ~400 and ~800 nm in wavelength. The standard "plane wave" propagating in the z direction can be generalized to propagate in the k direction, where k is called the wave vector. It has the same magnitude as usual, k = 2π/λ, but the scalar kz is replaced by the dot product k·r (where r is the vector position where we want to know the wave's amplitude). Imagine the wave "crests" as plane sheets stretching off to infinity in both directions perpendicular to k, marching along in the k direction at c. Obviously the plane wave is an idealization. We won't use this formulation explicitly very often, but it serves to remind us that the wave has a well-defined direction of propagation, which we habitually express in the form of rays, a picture inherited from Newton, who insisted that light was particles following trajectories like little billiard balls, until Huygens showed that it was indeed waves. (We now know they were both right!)
"Weird Science"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Weird Science: E&M"
by Jess H. Brewer
on 20050118:
Start with a brief comment on "weirdness" in Physics. Continue with a brief review of 3dimensional vectors. Make sure you can do all the operations "in your sleep", both analytically (using algebra and the left hemisphere of your brain, which is reputed to handle abstract symbolic logic) and graphically (using various physical analogues and the right hemisphere of your brain, which is said to govern intuition and spatial vision). Whatever tricks you use to remember the "right hand rule" convention for "cross products", be sure they are well practiced; we'll be using them a lot when we get to Magnetism!
Then on to our first topic in Electricity & Magnetism (E&M): the Coulomb force between electric charges.
A comparison of the gravitational force between masses with the electrostatic force between charges shows just two differences:
- There are no negative masses; gravity is always attractive, whereas there are both positive and negative charges, so that the electrostatic force can be either attractive (for unlike charges) or repulsive (for like charges).
- The electrostatic repulsion between two electrons (for example) is about 10^{43} times stronger than their mutual gravitational attraction. That's a lot!
Nevertheless, these two "force laws" are handled in exactly the same way and everything you learned about gravity is (at least mathematically) applicable to Classical Electrostatics. Some discussion ensues about just how much we really understand about Gravity. I will ignore General Relativity (because I don't understand it!) and assume we already know all about Gravity, from first term. Be sure it's true! As usual, details are available in either PDF or printer-friendly gzipped PostScript format.
"What Does It All Mean?"
Course
"Elementary Particles"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Small Stuff"
by Jess H. Brewer
on 20050617:
"Introduction"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Introduction"
by Jess H. Brewer
on 20050610:
The title and brief description above have enticed you to sign up for this course, but I really have no idea what you expect or desire from me. An instructor's usual response to such circumstances is to proceed according to the syllabus or the prerequisites for following courses (difficult in this case, as there is neither a syllabus nor any course to follow) or according to whim ("This is what I like to talk about; if they don't like it, tough!") but after 28 years of delivering lectures I've had my fill of playing Expert/Authority figure, and presumably you have no need for me in such a role. So we're going to do it differently. First I need to know a bit about you and your expectations/preferences. Who are you? How much do you already know about Physics? How seriously do you take Poetry? What did you think this course was going to be about? What would you like this course to be about? Do you expect to do any homework? Reading? Do you mind doing some things on the computer? Do you have access to the Web? (If the consensus is negative on the last question, then you are probably not reading this; so you can tell I am hoping to be able to use Web tools with the course.)
While I am a tireless advocate for Poetry, I have no credentials as a Poet, and there are bound to be at least some of you who do; so I will never be tempted to speak with Authority about that discipline: all my pronunciamentos will be understood to represent only my own opinion, and counter-opinions will be welcome. Just don't go all ad hominem on me, OK?
I do have some Physics credentials, however undeserved, and I have a few favourite topics I'd love to weave into this short week if I can. I'll list a few of them below and ask you to give me some feedback on which you'd like me to concentrate upon.
There's more, of course, but we'll build on your preferences and follow the discussion where it leads.
"Emergence"
by Jess H. Brewer
on 20050613:
Tacit Knowledge
Before we can "grow new language" we need to have a vocabulary of familiar "old" words to juxtapose in unfamiliar ways. In Physics everything starts from Classical (Newtonian) Mechanics, which in turn starts from the familiar equation F = m a, where F is the net force exerted on some body, m is its mass and a is the resulting acceleration. This isn't actually the way Newton expressed his "Second Law", but it will do. Most people are fairly familiar with this Law by the time they reach University, so it serves as an example of what Michael Polanyi would call "Tacit Knowledge": things we know so well they are "obvious" and/or "Common Sense".
Emergence
In The Skeptic's Guide chapter on Mechanics I show how F = m a can be "morphed" by mathematical identities into principles that appear to be different, like Conservation of Impulse and Momentum, Conservation of Work and Energy or Conservation of Torque and Angular Momentum. There is really nothing new in these principles, but we gain insight into the qualitative behaviour of Mechanics from the exercise. Thus new Common Sense emerges from the original language by a process analogous to metaphor in Poetry. In the same way, bizarre phenomena like superconductivity emerge in the behaviour of many crystals, even though every detail of the interactions between their components is completely understood. When the familiar is combined in new ways, the unfamiliar emerges, and it is often very unfamiliar. This seems to be characteristic not only of what Physicists do, but also of how Nature behaves!
Starting Points
The paradigms of Newtonian Mechanics are only part of the vocabulary we need to begin constructing the metaphors of Relativity and Quantum Mechanics. We also require a basic understanding of the attributes of Waves, like frequency, wavelength and amplitude. This will keep us busy today.
"Quantum Mechanics"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Particle in a Box"
by Jess H. Brewer
on 20050617:
In 1924, Prince Louis Victor Pierre Raymond duc de Broglie hypothesized in his 25-page dissertation that all particles are also waves, and vice versa, with their momentum p and their wavelength λ related by λ = h/p and p = h/λ.
This later won him a Nobel Prize. Nice thesis! Whatever we might "mean" by this, it has some dramatic consequences: imagine that we have confined a single particle to a onedimensional "box". Examples would be a bead on a frictionless wire, or an electron confined to a long carbon nanotube (or a DNA molecule); both of the latter two examples are currently being studied very enthusiastically as candidates for nanotechnology components, so they are not the usual frivolous Physics idealizations!
If the particle is really like a wave, then the wave must have nodes at the ends of the box, just like the standing waves on a guitar string or the sound waves on a closed organ pipe. This means there are discrete "allowed modes" with integer multiples of λ/2 fitting into the length L of the "box". Not all wavelengths are allowed in the box, only those satisfying this criterion; therefore, not all momenta are allowed for the particle bouncing back and forth between the ends of the box, only those corresponding to the discrete ("quantized") allowed wavelengths.
Since the kinetic energy of the particle increases as its momentum increases, the lowest allowed energy state is the one whose wavelength is twice the length of the box, and if the box shrinks, this "ground state" energy increases. Moreover, since the particle is bouncing back and forth off the ends of the box (like a ping pong ball between the table top and a descending paddle), the average force exerted by the particle on the walls of its confinement increases as the walls close in.
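A short numerical sketch of that squeeze (standard box formulas; the sizes are my own examples): the ground-state energy scales as 1/L^{2}, so the average outward force F = −dE/dL = 2E/L grows even faster as the walls close in.

```python
h = 6.626e-34                    # Planck's constant (J s)
m_e = 9.11e-31                   # electron mass (kg)

def E_ground(L):
    return h**2 / (8 * m_e * L**2)       # lowest allowed energy, lambda = 2L

def wall_force(L):
    return 2 * E_ground(L) / L           # F = -dE/dL for E proportional to 1/L^2

e_ratio = E_ground(0.5e-9) / E_ground(1e-9)      # 4: halving L quadruples E
f_ratio = wall_force(0.5e-9) / wall_force(1e-9)  # 8: the "push back" grows faster still
```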
Is this not a lovely metaphor? Like most people, every particle (because it is also a wave) cries, "Don't fence me in!" and will resist confinement with ever increasing vigour as the walls close in.
This resistance eventually goes beyond mere force. If you will allow me to state without adequate explanation that Einstein's famous equation "E = m c^{2}" means not only that any mass m represents a large amount of energy E, but also that energy stored up in a small region has an effective mass, with all the concomitant effects such as gravitational attraction for other masses, then you will see that as the confined particle's energy increases (due to tighter and tighter confinement) it begins to have a gravitational field. And if its energy increases enough it will act as a "black hole" for other objects within L of the box  including the walls of the box! At this length scale (called the Planck length) all bets are off  we do not understand physics at this level of "quantum gravity", although armies of Physicists are now working on it.
So the humblest particle, even the photon (which has no rest mass), will eventually dismantle its jail even if it has to deconstruct the very Laws of Physics to do so. A fine example for us all, I think, and an apt mascot for Amnesty International!
"Waves"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"This, That & Waves"
by Jess H. Brewer
on 20050617:
We spent quite a bit of time today in unrehearsed discussion on spontaneous topics like radiation hazards, cancer therapies and other stuff. I count such days among my favourites, but sometimes people who have paid good money to hear about specific topics feel shortchanged by such free discussions. I hope we can strike a balance that is satisfactory to all; if not, well, you can't please everyone, so you have to please yourself. :) Towards the end I did manage to get started talking about the implications of de Broglie's hypothesis that all particles are also waves, and vice versa, with their momentum p and their wavelength related by
λ = h/p and p = h/λ,
but the denouement had to wait for tomorrow.
"UBC Physics 401"
Course
"Conservation Laws"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Poynting Away!"
by Jess H. Brewer
on 20060113:
- Wave Equation in Materials: repeat the usual derivation by taking the curl of Faraday's law and comparing the time derivative of Ampere's law to get a wave equation with an extra term (thus inhomogeneous) involving the conductivity and the time derivative of E. Check for wave-like solutions, find success but only if the k vector is complex, meaning the fields oscillate but they also decay exponentially in any conducting medium. ("preview" of section 9.4 on p. 392)
- Conservation Laws (Ch. 8): reminder that the Continuity Equation is an expression of charge conservation, which we hold sacred. Commentary on other sacred symmetries [every conservation law is an expression of some symmetry principle!] that turned out to be wrong: Parity (P), CP, etc. Sacred laws (like all rules) are meant to be broken, it seems! But charge Q (and a few others like lepton number) are "still good so far".
- Poynting's Theorem in Materials: Sec. 8.1.2, slightly generalized to allow linearly polarizable or magnetic materials. Evidently (after a long derivation) S = E×H is the flux of electromagnetic energy per unit time per unit perpendicular area. What the . . . ?!
"Electromagnetic Pressure"
by Jess H. Brewer
on 20060116:
- Insolation: at 1 AU (i.e. at the Earth) the flux of energy in sunlight, <S>_{Sun}, is about 1350 W/m^{2}. A fraction (1 − r) of this is absorbed when it hits the Earth (r is the albedo [average reflectivity] of the Earth). When we calculate the total power intercepted by the Earth and compare the total power radiated by the Earth in the infrared, we must get approximate balance, otherwise we'd warm up or cool down until balance was achieved. This predicts a rather lower value for the mean power (about 450 Watts) radiated by 1 m^{2} at "room temperature". (Look up the Stefan-Boltzmann Constant.) Is there really "extra" energy being radiated into space by the Earth? If so, where does it come from?
- Radiation Pressure: if we "cheat" and make use of our knowledge that EM radiation comes in photons (massless particles that travel at c and have energy E = p c), we can derive a momentum per unit volume S/c^{2} and a radiation pressure S/c from the Poynting vector.
- Light Sail: Consider <S>_{Sun}/c at the Earth: a reflective "sail" reverses the momentum of the photons and thus gives twice this pressure, for a total of about 0.9x10^{−5} N/m^{2}, pretty puny! Compare this to the Sun's gravitational attraction for a mass m: F_{G} = m x (0.59x10^{−2} N/kg). Thus a light sail of area A will just levitate relative to the Sun if m/A = 1.5 gram/m^{2}. This is about 1/3 the mass per unit area of typical lightweight "superinsulation", so we can probably make Mylar thin enough to support its own weight by light pressure, but just barely.
- Radiometer: just a quick description and comment that it is not radiation pressure that turns the wheel, but a "rocket" effect from gas molecules leaving the warmer (black) surfaces at higher speed than they leave the cooler (white or shiny) surfaces. If it were radiation pressure, it would go the other way. Or, more likely, not at all. (<S> is pretty tiny!)
- Force Density: Setting the rate of change of the mechanical momentum of charged particles inside a volume (i.e. the net force on same) equal to the integral of the Lorentz force over that volume, eliminating charge density using Gauss' law and current density using Ampere's law, and then manipulating a lot of vector calculus identities and throwing in the other Maxwell equations for good measure, we eventually obtain a formula for the electromagnetic force density within the volume that is . . . well, as Griffiths puts it, ugly! One imagines Maxwell juggling the math and thinking, "Hmm, I must be on the wrong track here . . . " But he wasn't. Stay tuned for the Stress Tensor.
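The insolation and light-sail estimates above are easy to reproduce. A sketch with rounded constants (the albedo value is my own assumption):

```python
sigma = 5.670e-8             # Stefan-Boltzmann constant, W m^-2 K^-4
S_sun = 1350.0               # <S>_Sun at 1 AU, W/m^2
c = 3.0e8                    # speed of light, m/s
g_sun = 0.59e-2              # Sun's gravitational field at 1 AU, N/kg

# Insolation balance: absorbed sunlight, averaged over the whole sphere
# (factor 1/4), versus blackbody emission at "room temperature" (293 K):
r = 0.3                                  # rough Earth albedo (assumed)
absorbed = (1 - r) * S_sun / 4           # ~236 W/m^2
emitted_293K = sigma * 293.0**4          # ~420 W/m^2

# Light sail: a perfect reflector feels pressure 2<S>/c, and levitates
# against solar gravity when (2<S>/c) A = m g_sun:
P_rad = 2 * S_sun / c                    # ~0.9e-5 N/m^2
m_per_A = P_rad / g_sun                  # ~1.5e-3 kg/m^2 = 1.5 g/m^2
```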
"Coping with Stress"
by Jess H. Brewer
on 20060118:
I started with a little fatherly advice about the colloquial form of stress (the kind everyone advises you to avoid): only a fool believes you can actually avoid stress, unless you want to spend your life waiting calmly for death. To do anything you will have to embrace challenges, which are intrinsically stressful. What kills you is a combination of stress and helplessness, which I call despair. As Thoreau said (in Walden), "The mass of men lead lives of quiet desperation." Women too. But you don't need to be one of them. You have the advantage of a good mind and a decent education, and you should always place yourself in a position where you can solve the stressful problems in your life. As long as you can do something about it, stress can enrich your life. The desperate ones imagine themselves as helpless victims, and are statistically at 5 times higher risk of fatal disease than smokers. I can show you the details if you're interested, but that's not Physics. This also took up too much time, but who's counting? :)
On to stress as force per unit area: if the force is normal to the area element, we have pressure, but if it is parallel to the surface, we have (two components of) shear. This applies to all three choices of surface normal and to all three force directions for each, giving a Stress Tensor T_{ij} = dF_{i}/da_{j}, a 3x3 matrix with only 6 independent elements, since it must be symmetric (T_{ij} = T_{ji}).
Challenge Question: Why must the stress tensor be symmetric?
Getting back (finally) to E&M, we had derived an expression for the time rate of change of mechanical momentum density due to electromagnetic fields that had a term that reduces to [minus the time rate of change of the Poynting vector over c^{2}] (i.e. minus the time rate of change of electromagnetic momentum density, which we can take over to the other side of the equation to add all the momentum density together) plus a term in E and its spatial derivatives, a term in B and its spatial derivatives, and minus the gradient of the electromagnetic energy density. These three terms are what Griffiths calls the "ugly" part of f, so I'll designate it as f_{ugly}.
If only f_{ugly} were the divergence of something "??", we could convert the volume integral of f_{ugly} into a surface integral of "??". The problem is, usually a divergence is a scalar, but now it has to be a vector. So "??" isn't a vector; it has to be a tensor. This is clumsy to represent vectorially, but easy in component notation: we want to find a T_{ij} such that f_{j}^{ugly} = ∂_{i} T_{ij}. Can we arrange this? Griffiths (and all other textbooks that I have seen) simply offer the answer; I'd prefer to show how to deduce the desired form of T_{ij}, but I've run out of time today.
Stay tuned . . .
"Harnessing Stress"
by Jess H. Brewer
on 20060119:
I show that the tensor we want really is the T_{ij} described in textbooks: Maxwell's Stress Tensor, T_{ij} = ε_{0}(E_{i}E_{j} − δ_{ij}E^{2}/2) + (B_{i}B_{j} − δ_{ij}B^{2}/2)/μ_{0}.
So what is it good for? I advise everyone to study Example 8.2 (pp. 353-355) carefully! There is real magic in T_{ij}, because it "knows" what is going on inside a region just from its integral over any surface containing that region. Note in particular that you can choose different surfaces (as long as they contain the same region and no others with different charges) and it will give the same answer for the net EM force on the charges in that region.
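A small numerical sketch of the tensor itself (vacuum, SI units; the field value is my own example): for a uniform E along z, the tensor gives a tension ε_{0}E^{2}/2 along the field and an equal pressure transverse to it.

```python
import numpy as np

eps0 = 8.854e-12             # permittivity of free space (F/m)
mu0 = 4e-7 * np.pi           # permeability of free space (H/m)

def maxwell_stress(E, B):
    """Maxwell stress tensor T_ij for fields E, B (3-vectors, SI units)."""
    d = np.eye(3)
    return (eps0 * (np.outer(E, E) - d * E.dot(E) / 2)
            + (np.outer(B, B) - d * B.dot(B) / 2) / mu0)

T = maxwell_stress(np.array([0.0, 0.0, 1000.0]), np.zeros(3))
# T[2,2] = +eps0 E^2/2 (tension along E); T[0,0] = T[1,1] = -eps0 E^2/2
```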
The disadvantage of using T_{ij} is that it is intrinsically and irreducibly (I think) Cartesian. Any curvilinear coordinates have to be expressed in terms of (x, y, z) consistent with T_{ij}.
That completes our reformulation of the sacred principle of momentum conservation to take into account the effects of EM fields and the momentum they carry. What about the other sacred conservation principle?
Angular Momentum Conservation: If S/c^{2} is the electromagnetic momentum density per unit volume at some point in space, then the electromagnetic angular momentum density per unit volume at the same point, relative to some origin, is r x S/c^{2}. I use this to work out a slightly altered version of Example 8.4 on pp. 359-361 and explain why Feynman's "disc paradox" isn't. (A paradox, that is.)
I also encourage everyone to tackle Problem 8.12 "just for fun", so see for themselves exactly why people claim that the existence of a single magnetic monopole implies charge quantization. (Conundrum du jour: what if there are two magnetic monopoles of different sizes?)
"Electromagnetic Waves"
Topic
Found 12 Lectures on Mon 07 Oct 2024.
"Review of Waves"
by Jess H. Brewer
on 20060130:
"Basics of Electromagnetic Waves"
by Jess H. Brewer
on 20060201:
Complex Notation, Reality, Derivatives and Plane Waves
Euler's Theorem is a miracle of mathematics: e^{iθ} = cos θ + i sin θ
This allows us to write plane waves as ψ = ψ_{0} e^{i(k·x − ωt)}
for which the taking of derivatives becomes trivial: ∂ψ/∂t = −iω ψ and ∇ψ = ik ψ
which we can extend to vector EM waves by substituting E (or B) for ψ, giving ∂E/∂t = −iω E , ∇ · E = ik · E and ∇ x E = ik x E
Note however, that only the real part is physical! Once all the derivatives have been taken, before you calculate anything measurable, throw away the imaginary part. (This will be especially important when we start discussing energy density and momentum transport!)
Special Features of EM Plane Waves
Gauss' law for magnetic fields tells us that ∇ · B = 0. Applied to our EM plane wave this reads ik · B = 0, i.e. B ⊥ k. The magnetic field is always transverse (to the direction of propagation). In free space (ρ = 0) the same is true for E by the same argument. Conundrum du Jour: What if ρ ≠ 0? Is E still always ⊥ k?
This still would allow arbitrary relative magnitudes and orientations (in the plane ⊥ k) of E and B, but Faraday's law says ∇ x E = −∂B/∂t or ik x E = iωB
so if we divide through by ik and note that ω/k = c, we get n x E = cB
where n = k/k is a unit vector in the direction of propagation. This fixes not only the relative directions of n, E and B (all perpendicular) but also the relative magnitudes of E and B: E = c B.
The Truth About Spherical Waves
On Wed I claimed that ψ = (A/r) exp[i(kr − ωt)] was a solution of TWE (THE Wave Equation) in spherical coordinates. I demonstrated the truth of this statement today and then pointed out that it is invalid for transverse waves like EM waves! See Griffiths Problem 9.33 for "the simplest spherical EM wave". This is because (among other things) we can't make an oscillating monopole: the charge has to come and go from/to somewhere, which means that currents flow. In the end the simplest real radiator is the oscillating dipole. Stay tuned.
Linear Combinations of EM Plane Waves
Since TWE is linear, any linear combination of EM plane waves is also a solution, as long as the free parameters E_{0}, k and φ for each component obey the rules: (1) ω = c k; (2) B_{0} = E_{0}/c; and (3) E_{0}, B_{0} and k are all mutually perpendicular.
- Combinations with the same ω and φ but different k's give us the rich phenomena of INTERFERENCE.
- Combinations with the same propagation direction (call it z) give us the rule that any function of only ζ = z − ct is a solution of TWE (not just plane waves!). This can be shown easily by noting ∂ψ/∂z = (∂ψ/∂ζ)(∂ζ/∂z) = ∂ψ/∂ζ and ∂ψ/∂t = (∂ψ/∂ζ)(∂ζ/∂t) = −c∂ψ/∂ζ.
- Combinations with the same k but different (orthogonal) directions and magnitudes of E_{0} along with phases φ that differ by π/2 give us Elliptically Polarized waves. This is simple but worthy of pondering. Please do.
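The "any function of ζ = z − ct" claim is easy to verify numerically. My own finite-difference sketch, using a Gaussian pulse (deliberately not a sinusoid):

```python
import math

c = 2.0                                  # arbitrary propagation speed

def psi(z, t):
    return math.exp(-(z - c * t) ** 2)   # any smooth f(z - ct) should work

h = 1e-4                                 # finite-difference step
z0, t0 = 0.3, 0.1                        # arbitrary sample point
psi_zz = (psi(z0 + h, t0) - 2*psi(z0, t0) + psi(z0 - h, t0)) / h**2
psi_tt = (psi(z0, t0 + h) - 2*psi(z0, t0) + psi(z0, t0 - h)) / h**2

# psi_tt = c^2 psi_zz to finite-difference accuracy: the Wave Equation holds
```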
Next topic (after the Midterm): energy and momentum transport.
"Energy and Momentum Transport"
by Jess H. Brewer
on 20060203:
The Poynting vector: Using the definition S = E x H = E x B/μ and plugging in the above relationships between the two fields gives S = E^{2}/μv in the z direction. Energy Density: u_{EM} = (εE^{2} + B^{2}/μ)/2. Again plugging in the above relationships between the two fields gives u_{EM} = εE^{2} = B^{2}/μ (either will serve). Putting this together with the Poynting vector gives S = v u_{EM}, as expected.
Time-Averaged Energy Transport: Since both E and B are oscillatory, the time average of the square of either one is half its maximum (<sin^{2}ωt> = 1/2). The time average of S is called the intensity of the wave.
Momentum Density: Recall that S/v^{2} is the momentum per unit volume transported by the wave; the same holds for its time average.
Radiation Pressure: Similarly the pressure exerted on a perfectly absorbing surface by an EM wave is given by P = S/v.
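These relations hang together numerically. A quick vacuum check (the field amplitude is my own choice):

```python
import math

eps0 = 8.854e-12                         # F/m
mu0 = 4e-7 * math.pi                     # H/m
v = 1 / math.sqrt(eps0 * mu0)            # propagation speed, ~3e8 m/s in vacuum

E = 100.0                                # assumed field amplitude, V/m
B = E / v                                # the plane-wave relation B = E/v

u = 0.5 * (eps0 * E**2 + B**2 / mu0)     # energy density: both terms equal
S = E * B / mu0                          # Poynting magnitude, E perpendicular to B
P = S / v                                # radiation pressure on a perfect absorber
# S = v * u, and the momentum density is S / v^2
```

Note the tidy by-product: the pressure on a perfect absorber equals the energy density u.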
Reflection and Transmission
General review of First Year arguments used in analyzing thin film interference. Derivation of the familiar quarter wave plate criterion for nonreflective lens coatings, etc. We now set out to understand this more thoroughly.
"Reflections on Reflection"
by Jess H. Brewer
on 20060209:
We begin with the familiar case of NORMAL INCIDENCE: a plane wave propagating in the z direction crosses an xy plane interface from a medium with linear properties (μ_{1}, ε_{1}) into a medium with linear properties (μ_{2}, ε_{2}). Maxwell's equations ensure the following boundary conditions: E_{∥} is continuous. D_{⊥} = ε E_{⊥} changes by σ_{f} (if any).
B_{⊥} is continuous. H_{∥} = B_{∥}/μ changes by K_{f}×z (if any).
Here ∥ and ⊥ mean parallel or perpendicular to the interface surface. In the case of σ_{f} = K_{f} = 0 (no free charges or currents), all of the components listed above are continuous across the surface. Other universal rules are
εμ = 1/v^{2}, v = ω/k and B = E/v
where v is the propagation speed, k = kz is the wave vector (usually chosen to be in the z direction) and Ï‰ is the frequency, which is necessarily the same for all parts of the wave. In fact, Faraday's law ensures that vB = z×E in all cases. As we learned in 1st year, a reflection off a "denser" medium (one with a slower v and therefore a larger index of refraction n) always causes the reflected wave to be π out of phase with the incoming wave. That is, one of E or B must reverse direction on reflection from such a surface, but the other does not. In this case it is B that reverses, while E stays the same. This can be understood in terms of the mechanism for absorption and reradiation in a dielectric: the incoming E field periodically reverses local electric dipoles along its direction, so the dipoles reradiate E in that same direction. Conundrum du jour: if the mechanism were mainly working on magnetic dipoles (current loops), would it imply that E would reverse, while B stayed the same?
A homework question (9.14) explains why neither field can mix x and y components on reflection or transmission. Bear in mind that the boundary conditions refer to the net fields in any region; on the incoming side that means the sum of the incoming wave's fields and those of the reflected wave.
For normal incidence with no free charges or currents, the above boundary conditions require
E_{I} + E_{R} = E_{T} and E_{I} − E_{R} = β E_{T} where β ≡ μ_{1}v_{1}/μ_{2}v_{2}
and since the energy flux in any one of the waves has a magnitude S = E^{2}/μv, the reflection coefficient R ≡ S_{R}/S_{I} = [(1−β)/(1+β)]^{2}.
Energy conservation [check it!] then requires that the transmission coefficient T ≡ S_{T}/S_{I} = β[2/(1+β)]^{2}.
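These two results are easy to check numerically. A minimal sketch, assuming μ_{1} = μ_{2} so that β = n_{2}/n_{1} (the air-to-glass numbers are purely illustrative):

```python
import math

# Normal incidence from air (n1 ~ 1) into glass (n2 ~ 1.5), assuming
# mu1 = mu2 so that beta = mu1*v1/(mu2*v2) = v1/v2 = n2/n1.
n1, n2 = 1.0, 1.5
beta = n2 / n1

R = ((1 - beta) / (1 + beta)) ** 2      # reflection coefficient
T = beta * (2 / (1 + beta)) ** 2        # transmission coefficient (note the factor beta)

# Energy conservation: all incident flux is either reflected or transmitted.
assert abs(R + T - 1.0) < 1e-12
print(R)   # 0.04 -- the familiar ~4% reflection off glass
```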
Further musings: Is reflection achromatic (independent of wavelength or frequency)? Since there is no mention of either in the equations, the answer is yes. However, no medium remains linear at all frequencies, so the starting assumptions will eventually break down. As usual, we'll try to introduce such breakdowns gently, as "special cases" . . . What if we run it backwards? Simply interchange the subscripts on the media properties. You'll see that this just exchanges β for 1/β and the results for R and T are the same. So (another conundrum) how do one-way mirrors work?
"Oblique Incidence"
by Jess H. Brewer
on 20060221:
OBLIQUE INCIDENCE, NO FREE CHARGES OR CURRENTS: The same boundary conditions apply as for normal incidence, but the components now involve some trigonometry. First we must decide whether to discuss the TE (Transverse Electric) case in which the electric field is normal to the plane of incidence (we will take this to be the xz plane, where z is the normal to the plane of the interface, so the TE electric field is in the y direction) or the TM case (B ∥ y). The general case can always be built up from TE and TM. Griffiths chooses to derive the TM case, for good reason. Me too. In this case the two BC we need are E_{∥} continuous ⇒ E_{x}^{I} + E_{x}^{R} = E_{x}^{T} or E_{I} cos θ_{I} + E_{R} cos θ_{R} = E_{T} cos θ_{T} or (since θ_{R} = θ_{I}) E_{I} + E_{R} = α E_{T}, where α ≡ cos θ_{T}/cos θ_{I}, and H_{∥} continuous (with vB = E) ⇒ E_{I} − E_{R} = β E_{T}, where β ≡ μ_{1}v_{1}/μ_{2}v_{2} as before. Together these give Fresnel's equations for the TM case, E_{R}/E_{I} = (α − β)/(α + β) and E_{T}/E_{I} = 2/(α + β)
Note that α is a function only of θ_{I} and the ratio n_{1}/n_{2}, thanks to Snell's law. Note that as long as β > α, E_{R} has the opposite sign from E_{I}. That is to say, its phase is flipped by 180^{o}; since the direction of E uniquely determines the direction of B, that means B is not flipped. So there are restrictions on our rule about reflections off a denser medium, at least for the TM case. Large α corresponds to large θ_{I} (check it!) so this exception occurs at grazing incidence.
At smaller θ_{I} there is one particular angle called the Brewster angle [no relation] at which α = β and there is no reflected wave! This angle is a function only of the properties of the media; I won't write out the function here. It equals 45^{o} when the two media are virtually identical (you can easily show this), and lies farther from the normal for reflection off a denser medium: about 53^{o} for sunlight off water. There is no corresponding angle for the TE case, which is why glare (reflections of sunlight off mostly horizontal surfaces) is mostly polarized horizontally: the TM modes are not reflected at angles near the Brewster angle. This is why we wear polarized sunglasses while skiing or fishing: to remove the surviving horizontally polarized reflected glare while letting through any vertically polarized unreflected light. Without such visual aids, "sight fishing" is almost impossible!
Following Griffiths I will not derive the TE case for you; but I won't make you do it yourself (Problem 9.16); I'll just give you the answer:
E_{R}/E_{I} = (1 − αβ)/(1 + αβ) and E_{T}/E_{I} = 2/(1 + αβ)
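A short numerical sketch of both sets of Fresnel equations, again assuming μ_{1} = μ_{2} so β = n_{2}/n_{1} (the indices and the function names `r_TM`, `r_TE` are illustrative, not from the lecture):

```python
import math

# Fresnel amplitude ratios for TM and TE, with alpha = cos(theta_T)/cos(theta_I)
# from Snell's law n1 sin(theta_I) = n2 sin(theta_T), and beta = n2/n1 (mu1 = mu2 assumed).
n1, n2 = 1.0, 1.5
beta = n2 / n1

def alpha(theta_I):
    sin_T = n1 * math.sin(theta_I) / n2      # Snell's law
    return math.sqrt(1 - sin_T**2) / math.cos(theta_I)

def r_TM(theta_I):
    a = alpha(theta_I)
    return (a - beta) / (a + beta)

def r_TE(theta_I):
    a = alpha(theta_I)
    return (1 - a * beta) / (1 + a * beta)

# At the Brewster angle (tan theta_B = n2/n1 when mu1 = mu2) the TM reflection vanishes...
theta_B = math.atan(n2 / n1)
assert abs(r_TM(theta_B)) < 1e-12
# ...but the TE reflection does not: reflected glare is polarized.
assert abs(r_TE(theta_B)) > 0.1
```

At θ_{I} = 0 both formulas reduce to the normal-incidence result (1 − β)/(1 + β), a handy consistency check.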
"EM Waves in Conductors"
by Jess H. Brewer
on 20060221:
Dispersal of Charge
If we combine Ohm's law [J_{f} = σ E] with the Continuity Equation [∇·J_{f} = − ∂ρ_{f}/∂t] and throw in Gauss' law [∇·D ≡ ε ∇·E = ρ_{f}], we get ∂ρ_{f}/∂t = − (σ/ε) ρ_{f}, with the familiar solution ρ_{f}(t) = ρ_{f}(0) exp(−t/τ) where τ = ε/σ.
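A quick sanity check of τ = ε/σ, with assumed handbook values for copper:

```python
# Charge-relaxation time tau = epsilon/sigma for copper (illustrative numbers):
eps0 = 8.854e-12        # F/m (vacuum permittivity; epsilon ~ eps0 for a metal)
sigma_cu = 5.96e7       # S/m, room-temperature conductivity of copper (assumed)

tau = eps0 / sigma_cu
print(tau)              # ~1.5e-19 s: free charge disperses essentially instantly
assert tau < 1e-18
```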
That is, a free charge density at any position in a conductor will disperse (go away) in a characteristic time τ. Solutions to the Inhomogeneous Wave Equation (IWE)
If we try for a "wavelike" solution to Maxwell's equations, i.e. B = B_{0} exp[i(gz − ωt)], where I am using g to represent the "complex wave vector" and labelling the direction of propagation z, applying the IWE (which I won't write out again here) yields g^{2} = ω^{2}εμ + i ωσμ where g ≡ k + iκ.
A fair amount of algebra yields a rather ugly formula for the real (k) and imaginary (κ) parts of g; I won't attempt to reproduce it in HTML, but you can see it in Eq. (9.126) on p. 394 of Griffiths. We need to know the real and imaginary parts in that form because k determines the wavelength and propagation speed in the usual way, while κ is the inverse of the attenuation length or skin depth: B = B_{0} e^{−κz} e^{i(kz − ωt)}. Phase Lag
Plugging our wavelike solutions into Faraday's law yields, as usual, B = (g/ω) E. But since g is complex, and can therefore be represented as g = K e^{iφ}, we have B = (K/ω) E e^{iφ}. That is, the phase of the oscillatory B field lags behind that of E by the angle φ. This is a little shocking; we have come to expect E and B to always be in phase! Well, when they aren't, they don't get far. The book's formula for φ is tan φ = κ/k, which looks simple until you remember the big ugly formulae for k and κ. Isn't there an easier way? Yes! Recalling our result for g^{2} in terms of ε, μ, σ and ω, and noting that g^{2} is also equal to K^{2} e^{2iφ}, equating the real and imaginary parts of each yields the simple equation
tan 2φ = σ/εω.
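Both versions of the phase formula can be checked against a direct complex square root of g^{2} (the material parameters below are invented):

```python
import cmath, math

# Numerical consistency check of the two phase-lag formulae (SI units, made-up values):
eps = 8.854e-12
mu = 4e-7 * math.pi
sigma, omega = 1.0e3, 1.0e9          # a mediocre conductor at ~GHz

g2 = omega**2 * eps * mu + 1j * omega * sigma * mu
g = cmath.sqrt(g2)
k, kappa = g.real, g.imag            # g = k + i*kappa

phi = cmath.phase(g)                 # phase of g, so B lags E by phi
# The book's form tan(phi) = kappa/k ...
assert abs(math.tan(phi) - kappa / k) < 1e-9
# ... agrees with the shortcut tan(2*phi) = sigma/(eps*omega):
target = sigma / (eps * omega)
assert abs(math.tan(2 * phi) - target) < 1e-6 * target
```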
You can show that this is also predicted by the book's version if you do some algebra with trigonometric identities (especially see the one for tan 2φ). Reflection by Conductors
I just gave some introductory remarks about the expected behaviour of a plane wave striking a perfect conductor (σ → ∞) and how this changes if we let σ be finite and look very near the surface with a good microscopic imagination. More on Friday.
"Mirrors"
by Jess H. Brewer
on 20060222:
We now have a nearly complete list of formulae for g^{2}, the real (k) and imaginary (κ) parts of g and the phase φ of g = K e^{iφ} in terms of σ, ε, μ and ω. We can also do a little algebra to get the magnitude K of g: K = k_{0} [1 + (σ/εω)^{2}]^{1/4}
where k_{0} ≡ ω (εμ)^{1/2}, i.e. the value the wavevector would have if there were no conductivity. Limiting Cases:
- Bad Conductors (σ << εω): k ≈ k_{0} and κ ≈ (σ/2) (μ/ε)^{1/2}.
- Good Conductors (σ >> εω): OOPS! I made a mistake in this derivation; see if you can find it:
g^{2} = μσω (i + εω/σ) so g = (μσω)^{1/2} (i + εω/σ)^{1/2} ≈ (μσω)^{1/2} (i + εω/2σ).
The correct result is not what I wrote on the board, but rather g = (μσω/2)^{1/2} [(1 + εω/2σ) + i(1 − εω/2σ)].
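In the good-conductor limit this gives κ ≈ (μσω/2)^{1/2}, so the skin depth is δ = 1/κ ≈ (2/μσω)^{1/2}. A sketch with assumed values for copper:

```python
import math

# Skin depth delta = sqrt(2/(mu*sigma*omega)) in the good-conductor limit.
# Assumed copper values: sigma = 5.96e7 S/m, mu ~ mu0.
mu0 = 4e-7 * math.pi
sigma = 5.96e7

def skin_depth(f):
    omega = 2 * math.pi * f
    return math.sqrt(2.0 / (mu0 * sigma * omega))

d60 = skin_depth(60.0)      # power-line frequency
d1M = skin_depth(1.0e6)     # AM-radio frequency
print(d60, d1M)             # ~8.4 mm vs ~65 micrometres
assert d60 > d1M            # higher frequency -> thinner skin
```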
I had planned to go on to the intermediate case of "Fairly Good Conductors" today but ran out of time. I'll finish this up on Monday and get on to Dispersion. Bad day today, sorry.
"Complex Conductivity"
by Jess H. Brewer
on 20060227:
Phase Velocity vs. Group Velocity
Please review this subject from previous courses. (I know you've seen it many times already!) The phase velocity v_{ph} ≡ ω/k is not actually restricted to values smaller than c, because it describes the speed of propagation of a point of constant phase on a plane wave of unique ω. A plane wave, however, conveys no information! It has been going on forever and will continue to do so forever. If we want to send a signal, we must turn the plane wave on and off, constructing wave packets by superimposing many plane waves of different frequencies and wavevectors. The simplest possible superposition (two plane waves going in the same direction but with slightly different ω and k) can be written e^{i[(k+dk)x − (ω+dω)t]} + e^{i[(k−dk)x − (ω−dω)t]} = 2e^{i(kx − ωt)} cos[dk{x − (dω/dk)t}]. The argument of the cosine explicitly shows that the nodes of the "beat" pattern in space and time propagate at the group velocity v_{g} ≡ dω/dk, which is the same as v_{ph} ONLY if ω is a linear function of k, ω = ck. We are now considering cases where this is not true.
Driving Free Electrons
Suppose we have an oscillating electric field locally driving charge carriers of mass m and charge q: Newton's Second Law says m dv/dt = q E_{0} e^{−iωt} − m γ v, where γ is a damping rate, which is plausible but difficult to calculate from first principles. A steady-state solution is v = q E/m(γ − iω). Remembering that J ≡ N q v, where N is the number of charge carriers per unit volume, and using Ohm's law to define the conductivity σ, we have the frequency-dependent Drude theory result for the complex conductivity, σ = q^{2}N/m(γ − iω).
For a good conductor like copper, γ ~ 10^{13} s^{−1}, ensuring that σ is pure real (as we have assumed so far) up to frequencies in between the microwave and infrared ranges. However, in a tenuous plasma where the charged particles almost never collide, γ vanishes and σ is pure imaginary (there are no resistive losses). EM Waves in a Plasma
In a thin plasma we can write the above result as σ = iε_{0}ω_{p}^{2}/ω where ω_{p}^{2} ≡ Nq^{2}/mε_{0}
Assuming ε ≈ ε_{0} and μ ≈ μ_{0}, this gives a complex wavevector g^{2} = (1/c^{2})(ω^{2} − ω_{p}^{2}).
Thus for ω < ω_{p} there is no propagating wave in the plasma, and (since there is also no dissipation mechanism) the plasma is a perfect reflector. For ω > ω_{p} there is no κ (the "skin depth" is infinite) but propagation speeds are bizarre: v_{ph} ≡ ω/k = c (1 − ω_{p}^{2}/ω^{2})^{−1/2} > c and diverges as ω approaches ω_{p} from above. Meanwhile v_{g} ≡ dω/dk = c (1 − ω_{p}^{2}/ω^{2})^{1/2} < c and goes to zero as ω approaches ω_{p} from above.
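A tidy corollary worth checking: in this dispersion relation v_{ph}·v_{g} = c^{2} exactly. A sketch with an assumed (ionosphere-like) plasma frequency:

```python
import math

# Above the plasma frequency, c*k = sqrt(omega^2 - omega_p^2), so
# v_ph = omega/k > c while v_g = d(omega)/dk < c, and v_ph * v_g = c^2.
c = 2.998e8                      # m/s
omega_p = 2 * math.pi * 10e6     # assumed plasma frequency ~10 MHz
omega = 2 * math.pi * 30e6       # a wave well above cutoff

k = math.sqrt(omega**2 - omega_p**2) / c
v_ph = omega / k
v_g = c * math.sqrt(1 - (omega_p / omega) ** 2)

assert v_ph > c > v_g                         # superluminal phase, subluminal group
assert abs(v_ph * v_g - c**2) < 1e-3 * c**2   # v_ph * v_g = c^2
```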
"Driving Bound Electrons"
by Jess H. Brewer
on 20060227:
Charged particles of mass m and charge q bound to fixed (or much heavier) partners will always have a resonant frequency ω_{0} for oscillations about their equilibrium position, due to a linear restoring force −mω_{0}^{2}x. (This is a ubiquitous feature in classical mechanics, regardless of the binding mechanism, simply because almost any potential minimum looks quadratic near the bottom.) There is usually also a damping force proportional to the velocity, −mγ(dx/dt). If we apply a local driving force q E_{0} e^{−iωt} from a passing EM wave, Newton's 2nd Law provides a differential equation with a familiar steady-state solution: x = x_{0} e^{−iωt} with x_{0} = q E_{0}/m(ω_{0}^{2} − ω^{2} − iωγ). Similarly for y and z components, giving a vector displacement amplitude proportional to E_{0}. This displaced charge constitutes an electric dipole moment p = q x, and if there are N such dipoles per unit volume, we get a polarization P = N q x = ε_{0} χ_{e} E where χ_{e} = (Nq^{2}/mε_{0}) [ω_{0}^{2} − ω^{2} − iωγ]^{−1}. That is, ε = ε_{0}(1 + χ_{e}) = ε_{0}(1 + ω_{p}^{2}[ω_{0}^{2} − ω^{2} − iωγ]^{−1}).
Thus ε is frequency dependent and so is the (complex) wavevector g = k + iκ = ω(εμ)^{1/2}, giving both dispersion and frequency-dependent absorption. Usually μ ≈ μ_{0} and χ_{e} << 1, allowing the approximation (εμ)^{1/2} ≈ (1/c)(1 + χ_{e})^{1/2} ≈ (1/c)(1 + χ_{e}/2), giving
k + iκ = (ω/c) {1 + (ω_{p}^{2}/2)[ω_{0}^{2} − ω^{2} − iωγ]^{−1}}.
"A little algebra" yields the index of refraction n ≡ ck/ω ≈ 1 + (ω_{p}^{2}/2){(ω_{0}^{2} − ω^{2})/[(ω_{0}^{2} − ω^{2})^{2} + γ^{2}ω^{2}]}
and the absorption coefficient α ≡ 2κ ≈ (ω_{p}^{2}/c){γω^{2}/[(ω_{0}^{2} − ω^{2})^{2} + γ^{2}ω^{2}]}
describing the rate at which the EM energy (∝ E^{2}) decays with distance into the medium. When there are different species with different masses, charges, binding strengths, damping factors or number densities, the factor starting with ω_{p}^{2} is replaced by a sum over all such species.
Now, most ω_{0}'s are at quite high frequencies, so we are usually in the low frequency limit (ω << ω_{0}) where there is very little absorption and n is gradually increasing with ω. Near a resonance, however, absorption is strongly peaked in a Lorentzian lineshape and (n − 1) looks like the derivative of a Lorentzian. It may seem alarming that (n − 1) can go negative (implying v_{ph} > c), but as we have discussed, neither information nor energy actually move at this phase velocity v_{ph} ≡ ω/k; only at the group velocity v_{g} ≡ dω/dk. You might try finding the latter in this case, if you have some spare time.
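The qualitative behaviour near resonance can be read straight off the two formulas above. A sketch in which all parameter values are arbitrary, chosen only to exhibit the shapes:

```python
# Single-resonance dispersion and absorption, in units where omega_0 = 1
# (omega_p, gamma and c below are illustrative, not physical values).
omega_0, omega_p, gamma, c = 1.0, 0.1, 0.05, 1.0

def n_minus_1(w):
    d = omega_0**2 - w**2
    return (omega_p**2 / 2) * d / (d**2 + gamma**2 * w**2)

def absorption(w):
    d = omega_0**2 - w**2
    return (omega_p**2 / c) * gamma * w**2 / (d**2 + gamma**2 * w**2)

# Absorption peaks essentially at resonance; (n - 1) passes through zero there.
assert abs(n_minus_1(omega_0)) < 1e-12
assert absorption(omega_0) > absorption(0.8 * omega_0)
assert absorption(omega_0) > absorption(1.2 * omega_0)
# "Anomalous dispersion": just above resonance, n - 1 goes negative.
assert n_minus_1(1.05 * omega_0) < 0 < n_minus_1(0.95 * omega_0)
```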
"Wave Guides!"
by Jess H. Brewer
on 20060301:
The good news is that we now go back to talking about plane waves in free space that propagate at c with a real wavevector k_{0} (magnitude k_{0} = ω/c). I use the "0" subscript because (the bad news) these plane waves are reflecting back and forth off perfectly conducting walls at some angle θ (between k_{0} and the plane of the surface) so that the apparent propagation velocity down the channel (i.e. in the z direction) is not c but some larger(!) phase velocity. (Not to worry, the group velocity will be < c. In fact we can trivially deduce its value to be v_{g} = c cos θ.) Suppose our wave is reflecting back and forth between two parallel conducting yz planes separated by a distance a in the x direction. Pick the z direction to be the direction of the component of k_{0} parallel to the plane surfaces. Thus k_{0} = k_{x}x + k_{z}z. Following Griffiths' convention I will drop the z subscript on k_{z} and just call it k. Thus
ω^{2}/c^{2} = k_{x}^{2} + k^{2}, k = (ω/c) cos θ and k_{x} = (ω/c) sin θ.
Now, E = 0 inside a perfect conductor, and since E_{∥} must be continuous, that means E_{∥} = 0 at both surfaces. For the TE case (meaning E is Transverse to z, or in this case E = E y) this boundary condition requires that E = 0 at the surfaces, i.e. there are nodes in the standing waves of E at x = 0 and x = a. This in turn implies an integer number m of half-wavelengths in a, or k_{x} ≡ (ω/c) sin θ = mπ/a or sin θ = mπc/aω. Since we don't actually observe "rays" bouncing back and forth between the plates at angle θ, but rather the standing waves of the resulting interference pattern, it would be nice to eliminate θ from our description. This is already done for k_{x} = mπ/a; we can also do it for k = (ω/c) cos θ = (ω/c) (1 − sin^{2} θ)^{1/2} or
c k = (ω^{2} − ω_{m}^{2})^{1/2} where ω_{m} ≡ mπc/a
That is, for the m^{th} TE mode, if the frequency is ω then the longitudinal wavevector is uniquely determined. This constitutes a dispersion relation for the "effective longitudinal wave". Let's look at it more carefully. If ω < ω_{m} then k is imaginary, i.e. the wave cannot propagate; it just decays away. Thus ω_{m} is a lower limit for allowed frequencies in the m^{th} TE mode. As ω approaches ω_{m} from above, the effective phase velocity v_{ph} = ω/k diverges! However, as you can easily show, the group velocity v_{g} = dω/dk = c [1 − (ω_{m}/ω)^{2}]^{1/2} goes to zero as ω approaches ω_{m} from above. This agrees with the more obvious version stated earlier, v_{g} = c cos θ, but without the reference to the "hidden" parameter θ.
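Both limits are easy to verify numerically from ck = (ω^{2} − ω_{m}^{2})^{1/2}. A sketch (the plate spacing and mode number are arbitrary):

```python
import math

# TE_m mode between parallel plates: c*k = sqrt(omega^2 - omega_m^2), omega_m = m*pi*c/a.
# Illustrative numbers: a = 2 cm plate separation, m = 1.
c = 2.998e8
a, m = 0.02, 1
omega_m = m * math.pi * c / a

omega = 1.5 * omega_m                   # operate above cutoff
k = math.sqrt(omega**2 - omega_m**2) / c
v_ph = omega / k
v_g = c * math.sqrt(1 - (omega_m / omega)**2)

assert v_ph > c > v_g                         # phase fast, group slow
assert abs(v_ph * v_g - c**2) < 1e-3 * c**2   # here too, v_ph * v_g = c^2
# Just above cutoff the group velocity collapses toward zero:
assert c * math.sqrt(1 - (1 / 1.0001)**2) < 0.02 * c
```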
For a rectangular waveguide we just add a second pair of conducting planes separated by b in the y direction and add an analogous constraint on k_{y} = nπ/b to get
c k = (ω^{2} − ω_{mn}^{2})^{1/2} where ω_{mn} ≡ [(mπc/a)^{2} + (nπc/b)^{2}]^{1/2}.
In the same way, ω_{mn} is the minimum allowed frequency for the TE_{mn} mode.
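A sketch tabulating a few TE_{mn} cutoff frequencies (the a and b below are illustrative, roughly X-band guide dimensions):

```python
import math

# Cutoff frequencies omega_mn = sqrt((m*pi*c/a)^2 + (n*pi*c/b)^2) for a
# rectangular guide; a = 2.29 cm, b = 1.02 cm are assumed (roughly WR-90).
c = 2.998e8
a, b = 0.0229, 0.0102

def f_cutoff(m, n):
    """Cutoff frequency (Hz) of the TE_mn mode."""
    w = c * math.hypot(m * math.pi / a, n * math.pi / b)
    return w / (2 * math.pi)

f10 = f_cutoff(1, 0)
f01 = f_cutoff(0, 1)
f11 = f_cutoff(1, 1)

# TE_10 is the lowest (dominant) mode when a > b.
assert f10 < f01 < f11
print(round(f10 / 1e9, 2))   # ~6.55 GHz for these dimensions
```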
"Waveguides Done Right"
by Jess H. Brewer
on 20060303:
Last Friday's semi-handwaving explanation of waves in a rectangular waveguide gave the correct transverse wavevectors and cutoff frequencies for the various modes, but the treatment was less than rigorous. So today I went through some of the algebra in Griffiths' Sections 9.5.1 and 9.5.2 to give a sense of how one can convince oneself that the equations are actually true. I think this was probably a mistake, as it ate up a whole lecture without introducing very many nuances that aren't already in the text, and (perhaps more importantly) watching someone else "do the algebra" does not convey the same conviction as doing it yourself! I recommend reading the text carefully, and wherever you say to yourself, "Where did that come from?" work out the omitted steps on your own. There are quite a few omitted steps in these Sections! In particular, where Griffiths says, "Eqs. (9.179) can be solved for E_{x}, E_{y}, B_{x} and B_{y} in terms of the derivatives of E_{z} and B_{z}," you should do so yourself! I didn't quite finish reproducing the derivation of the TE modes in a rectangular waveguide using separation of variables,
B_{z}(x,y) = X(x)·Y(y) = B_{0} cos(k_{x}x) cos(k_{y}y) where k_{x} = mπ/a and k_{y} = nπ/b,
but the book does a fine job of that. The one tricky part is where the requirement that B_{x} = 0 at the walls is converted [using Eq. (9.180iii) and the fact that ∂E_{z}/∂y = 0 because E_{z} = 0 in TE mode] to ∂B_{z}/∂x = dX/dx = 0 so that only the cosine terms survive in the solution for B_{z}. The other missing derivation (for TM modes) is a homework problem, so there is no need for me to reproduce that. :)
"Cavities and Coax Cables"
by Jess H. Brewer
on 20060306:
Today I experimented again with using the projector with PDF files generated using pdfLaTeX and ppower4. The files are now online in the usual place. There were two parts: If you take a hollow rectangular waveguide and close off the ends, only specific frequencies of standing waves will be allowed ("classical quantization"). You have seen this before in other contexts. Other shapes are allowed as well, of course; one favourite is the cylindrical cavity, which (with small holes in the ends to let particles through) is used extensively in linear accelerators (linacs). The second part was about coaxial cables: your home is full of these, especially if you have any electronics, a television set, a computer or stereo equipment. As explained in the PDF file, such "coax cables" have the extremely attractive property of transmitting all frequencies at the same propagation speed in a TEM mode, i.e. they are dispersionless (except for imperfections like finite conductivity and frequency-dependent dielectric constants).
"Potentials, not Fields!"
Topic
Found 1 Lectures on Mon 07 Oct 2024.
"Representations"
by Jess H. Brewer
on 20060120:
Section 10.1: The Potential Formulation of Maxwell's equations in terms of (10.1.1) Scalar and Vector Potentials: Maxwell's equations are nice, and they tell us all there is to know about E&M (along with the Lorentz force), but there are 4 of them. Can we get by with less? Sure, if we formulate everything in terms of the scalar (V) and vector (A) potentials. First substitute (curl A) for B (always OK); then substitute −grad V for E . . . whoops! Not OK! With time-varying B (and therefore A) we no longer have curl E = 0, so we can't assume E is the gradient of a scalar function. Nuts. But wait, we can substitute curl A for B in Faraday's law, interchange time and space derivatives, and get something (E + dA/dt) [those are partial derivative "d"s, can't you tell?] which does have zero curl, and can therefore be expressed as −grad V for some scalar V.
Shuffling the terms we get E = −grad V − dA/dt. This we can substitute into Gauss' law to get one equation with only potentials and the charge density. (I won't try to write it out here; see Eq. (10.4) on p. 417.) Substituting for E and B in the Ampere/Maxwell law gives a second equation (10.5) in terms of only potentials and the current density; between these two equations we have captured all the content of Maxwell's equations and thus all of E&M! The trouble is, they are ugly equations. What can we do to make them "more elegant"? (10.1.2) Gauge Transformations: I went through the entire explanation for why you can add the gradient of any scalar function to A, as long as you simultaneously subtract the time derivative of that same function from V, without affecting E or B (i.e. without changing any physical observables). Such modifications are known as gauge transformations, and they are extremely important, not only in E&M but also in relativistic quantum field theory; but we won't go there now.
(10.1.3) Coulomb and Lorentz Gauges: the most familiar "gauges" are the Coulomb gauge, in which the divergence of A is simply set to zero, leaving Poisson's equation the same as for Electrostatics, and the Lorentz gauge, in which the divergence of A is set equal to − (1/c^{2}) dV/dt [that's a partial derivative, of course]. In the Lorentz gauge, our two "ugly" equations involving potentials turn into inhomogeneous wave equations for V (driven by −ρ/ε_{0}) and A (driven by −μ_{0}J) which together are equivalent to Maxwell's equations and thus express all of E&M in two equations! Cool, eh?
We skip the rest of Ch. 10 (e.g. Retarded Potentials) for now.
(Ch. 12) Introduction to Relativity: Just enough of an introduction today to speculate on why Griffiths chooses an unpopular convention for the Minkowski metric so that x_{µ} = {−ct, x, y, z} instead of the more conventional version, x_{µ} = {ct, −x, −y, −z}, for the covariant 4-vector. As long as you're consistent, it makes no difference; but Griffiths' version requires all Lorentz scalars (inner products of covariant 4-vectors with contravariant partners like x^{µ} = {ct, x, y, z}) to be negative rather than positive. Ugly. More on Wed.
"Radiation"
Topic
Found 6 Lectures on Mon 07 Oct 2024.
"Accelerated Charges and Radiation"
by Jess H. Brewer
on 20060320:
"Oscillating Electric Dipole"
by Jess H. Brewer
on 20060326:
"3-Seminar Day"
by Jess H. Brewer
on 20060326:
"Dipole Radiation"
by Jess H. Brewer
on 20060326:
See PDF file in the Archive of such summaries.
"Radiation from Arbitrary Sources"
by Jess H. Brewer
on 20060327:
See PDF file in the Archive of such summaries.
"Antennas"
by Jess H. Brewer
on 20060403:
See PDF file in the Archive of such summaries.
"Retarded Potentials"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Retarded Potentials"
by Jess H. Brewer
on 20060306:
See first few pages of Janis McKenna's lecture last year on Retarded Potentials. I only tried to introduce this topic and explain why we have to calculate the potentials in terms of the source charge and current distribution at an earlier time, because that's where they were (and what they were doing) when they "sent" the electromagnetic "news" to the field point. This seems straightforward at first glance, but try making up a precise sentence to describe it!
"Retarded Potentials, cont'd"
by Jess H. Brewer
on 20060310:
See PDF file on Retarded Potentials. This will be updated shortly with a section on the Lienard-Wiechert potential for a moving charge, so don't print it out yet; but by all means have a peek!
"Jefimenko, Lienard and Wiechert"
by Jess H. Brewer
on 20060316:
"Lienard-Wiechert, cont'd"
by Jess H. Brewer
on 20060320:
See the PDF file on Retarded Potentials.
"REVIEW"
Topic
Found 6 Lectures on Mon 07 Oct 2024.
"First Lecture"
by Jess H. Brewer
on 20051212:
Please see Lecture 01 (a 410 KB PDF file) to see all 12 pages just as I displayed them in class. If you prefer a slightly smaller, printer-friendly version (4 slides per page), try the 385 KB gzipped PostScript file. I will try to generate such files for every lecture, but sometimes it may all be "blackboard work" which I'll only outline here.
"Birthday Fiasco"
by Jess H. Brewer
on 20060107:
See the "PDF & PostScript files" link on our Homepage. I tried to supplement the overview described there with a few examples and partial derivations, but got into trouble. (See below.)
"Iterations of Truth"
by Jess H. Brewer
on 20060109:
We learn E&M iteratively: each time through, a few oversimplifications are confessed and we do it slightly more correctly. Why not just tell the Whole Truth the first time? (a) Because [to borrow a famous movie line] we don't think you can handle the truth! (b) Because we don't know it yet! Sure, Quantum ElectroDynamics (QED) is the most successful theory in all of science; but it isn't complete without integrating Weak Interactions to form the ElectroWeak unification, and that's not complete until someone manages to unify it with Strong Interactions and QCD (Quantum ChromoDynamics). And then there's Gravity. So expect this process to continue. Examples discussed today: "Conductors"; "Materials"; Ampere's law; and (in most detail) Faraday's Law, in three versions.
"Review: Media and Other Loose Ends"
by Jess H. Brewer
on 20060111:
(7.2.2) The Induced Electric Field: when there are no charges involved, Gauss' Law for E is the same as that for B and Faraday's Law looks just like the original Ampere's Law except with −∂B/∂t playing the "source" role for E exactly as µ_{0}J is the "source" for B. Thus the same math gives a sort of "Biot-Savart Law for E_{ind}". Work out Problem 7.19 (p. 310) for z=0 (centre of the skinny toroidal solenoid) for simplicity, using this handy equation.
(7.3.3 vs. 7.3.5) Maxwell's Equations (see also inside of back cover): There are two sets of equations, one of which describes the effects of linear polarizable and/or magnetic media explicitly. Are they different? No! Both sets are exact and always true. So why bother? Well, the set with H and D has more fields but fewer constants; and they remind us to account for those effects. More on this later.
Homily about "standing on the shoulders of giants": that which is "trivial" when you know the answer may seem pretty hard when you don't. There's no shame in looking to see how Griffiths (or Jackson, or Feynman, or Landau & Lifshitz) did it.
"Loose Ends & Review"
by Jess H. Brewer
on 20060403:
See PDF file in the Archive of such summaries.
"Superconductivity"
by Jess H. Brewer
on 20060403:
"Special Relativity in E&M"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"4-Vectors & Lorentz Invariants"
by Jess H. Brewer
on 20060124:
Tensors: A scalar is a zero-rank tensor, a vector is a first-rank tensor, a matrix represents a symmetric second-rank tensor, and so on. There are two types of vectors (in the case of spacetime, contravariant and covariant) which you have to multiply in the right order to get a scalar; when two 4-vectors are "dotted" into each other we must get a result that has minus signs on the spatial terms. In recognition of this, and because it's hard to make Greek indices in HTML, let me use the simplified notation a*b = a^{0}b^{0} − a^{1}b^{1} − a^{2}b^{2} − a^{3}b^{3}
for two 4-vectors a and b. This is the conventional version of the metric; Griffiths uses a metric for which all the signs are changed. Events and Light Cones: to keep track of spacetime coordinates it is often convenient to make a graph showing ct along the vertical direction and x along the horizontal. Since Lorentz transformations affect only time and the spatial component parallel to the relative velocity, we can leave the perpendicular spatial components off our diagram (fortunately for our 2D blackboard!). An observer with a vertical worldline is at rest in the frame for which the diagram is drawn. (Other frames require their own diagrams.) This is an especially handy frame, because we can talk about the time difference dt between two nearby events at the same place (i.e. dx = 0). We call the time interval in this special frame dt = dτ, the proper time interval.
Lorentz Transformations: I'm not going to try to do the algebra in HTML, but you will remember that those same two events transform under a Lorentz "boost" (into a "primed" frame moving in the x direction at a speed u) to give a nonzero spatial separation and an increased time interval dt' = γ dτ. This is known as time dilation. It is important to remember that dτ refers to the proper time interval in the frame where the two events are at the same spatial position. Can we generalize dτ to a quantity that can be defined in any reference frame and that has the same value as that in the rest frame? Of course, or I wouldn't be talking about it.
c^{2} dτ^{2} = c^{2}dt^{2} − dx^{2}
for dt and dx measured in any reference frame can be easily shown (I did) to be equal to c^{2} dτ^{2} in the rest frame where dx = 0. Thus dτ is a Lorentz invariant or a Lorentz scalar, with the lovely property that any 4-vector can be multiplied or divided by it to get a new 4-vector. We will make some use of this on Friday.
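A numerical check that the interval survives a boost; u = 0.6c is chosen (deliberately) so that the primed frame turns out to be the rest frame of these two particular events:

```python
import math

# Check that c^2 dt^2 - dx^2 is unchanged by a boost at speed u.
c = 1.0                                   # work in units where c = 1
u = 0.6
gamma = 1 / math.sqrt(1 - u**2 / c**2)    # = 1.25 here

dt, dx = 5.0, 3.0                         # an interval measured in the unprimed frame

# Lorentz boost along x:
dt_p = gamma * (dt - u * dx / c**2)
dx_p = gamma * (dx - u * dt)              # = 0: this boost reaches the rest frame

s2 = c**2 * dt**2 - dx**2
s2_p = c**2 * dt_p**2 - dx_p**2
assert abs(s2 - s2_p) < 1e-9              # the invariant interval

# In the frame where the events are co-located (dx' = 0), dt' = d(tau):
tau = math.sqrt(s2) / c
assert abs(tau - 4.0) < 1e-12             # sqrt(25 - 9) = 4
```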
"More 4Vectors and Lorentz Scalars"
by Jess H. Brewer
on 20060125:
Just the outline:
"Covariant Representations of Electromagnetism"
by Jess H. Brewer
on 20060127:
12.3.2 HOW THE FIELDS TRANSFORM
Griffiths shows several other cases (pp. 522-531) that are summarized in Eqs. (12.108) on p. 531. These equations say that components of E and B parallel to the boost are unchanged, while the components perpendicular to the boost transform as e'_{⊥} = γ(e_{⊥} + v × B/c) and B'_{⊥} = γ(B_{⊥} − v × e/c)
where I have defined e ≡ E/c for compactness and symmetry. This is quite tidy and appeals to our 3D physical intuition. Use it to solve any real practical problems involving Lorentz transformations of E and B. This concludes the examinable part of Ch. 12! MANIFESTLY COVARIANT NOTATION
After all this indoctrination into the wonders of 4-vectors and Lorentz invariants, we'd like to convert the above description into something more covariant looking. The problem, of course, is that we have six components to transform, and none of them are particularly "timelike" at first glance. This can't be expressed in one 4-vector, obviously, and two 4-vectors give too many components, so we have to go to the next more elaborate entity: a 4-tensor. Now, a general 4-tensor has 16 independent components (too many!) and a symmetric one still has 10 (too many), but an antisymmetric 4-tensor has only 6, just the right number! So let's try to use our favourite 4-vectors ∂^{µ} and A^{µ} to build an antisymmetric 4-tensor F^{µν}. Our first guess is F^{µν} = ∂^{µ}A^{ν} − ∂^{ν}A^{µ}
Naturally, it is the right choice. I won't go through the exercise in HTML, but you can easily assemble the explicit elements of F^{µν} from its definition above by using the familiar formulae E = −∇V − ∂A/∂t and B = ∇ × A. (Recall A^{0} = V/c.) Transforming F^{µν}: one motive for expressing the fields in manifestly covariant form is so that we can write down their Lorentz transformation properties "elegantly". For a 4-tensor this consists of
(F')^{µν} = Λ^{µ}_{α} Λ^{ν}_{β} F^{αβ}
There is an alternate formulation in terms of another 4-tensor: the "Dual Tensor" G^{µν}, which is just like F^{µν} except with e changed to B and B changed to −e. The Dual Tensor is also a 4-tensor and gives all the same results for field transformations as F^{µν}. Contracting F^{µν} with itself yields
F_{µν}F^{µν} ∝ E^{2} − c^{2}B^{2}
which is therefore a Lorentz invariant. Similarly F_{µν}G^{µν} ∝ E · B
which is therefore also a Lorentz invariant. We like Lorentz invariants!
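All of this can be checked numerically: build F^{µν} from some assumed field values, boost it with Λ, and recompute the two invariants. (The sketch below uses the conventional metric rather than Griffiths', with c = 1 and made-up fields.)

```python
import numpy as np

# Field tensor, boost, and the two invariants E^2 - c^2 B^2 and E.B
# (conventional metric signs; c = 1; field values are arbitrary).
c = 1.0
E = np.array([0.3, -1.2, 0.7])
B = np.array([0.5, 0.4, -0.9])

def F_tensor(E, B):
    Ex, Ey, Ez = E / c
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [Ex, 0.0, -Bz,  By],
                     [Ey,  Bz, 0.0, -Bx],
                     [Ez, -By,  Bx, 0.0]])

u = 0.6
g_ = 1 / np.sqrt(1 - u**2)
L = np.array([[g_, -g_ * u, 0, 0],        # boost along x
              [-g_ * u, g_, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

F = F_tensor(E, B)
Fp = L @ F @ L.T                          # (F')^{mu nu} = L^mu_a L^nu_b F^{ab}

# Read the boosted fields back out of F':
Ep = c * np.array([Fp[1, 0], Fp[2, 0], Fp[3, 0]])
Bp = np.array([Fp[3, 2], Fp[1, 3], Fp[2, 1]])

inv1, inv1p = E @ E - c**2 * (B @ B), Ep @ Ep - c**2 * (Bp @ Bp)
inv2, inv2p = E @ B, Ep @ Bp
assert abs(inv1 - inv1p) < 1e-12          # E^2 - c^2 B^2 is invariant
assert abs(inv2 - inv2p) < 1e-12          # E.B is invariant
assert abs(Ep[0] - E[0]) < 1e-12          # parallel component unchanged
```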
"UBC Physics 210 [Fall 2006]"
Course
"Introduction"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Welcome to Physics 210!"
by Jess H. Brewer
on 20060908:
Since we will be using the Linux servers and X terminals in Hennings 205 throughout this course, your first task is to get familiar with Unix and Linux. This is not a trivial undertaking, so get right to it. Make sure that by next week you know how to log in to the server, open a terminal window, list the files in your directory, enter shell commands, create and edit text files, and generally perform simple tasks in Linux. The References link on our Homepage offers several links to introductory material on this subject; we will be playing with shells for at least the first week or two, and occasionally throughout the course.
"Login, Profile and Survey"
by Jess H. Brewer
on 20060908:
It is clear that P210 will mostly take the form of an "annotated tutorial" in which Ben and I will punctuate your computer work with "mini-lectures" either at the beginning of class (to establish the theme, philosophy or explicit goals of the day's exercises) or as need arises (to clarify ambiguities and/or assist with widespread problems). Otherwise we are here to help and guide while you do all the actual work.
"Linux and Unix"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Email, Text Files and Editors"
by Jess H. Brewer
on 20060919:
"HTML and the Web"
by Jess H. Brewer
on 20060919:
"Aliases and Shell Scripts"
by Jess H. Brewer
on 20060919:
"Miscellaneous"
by Jess H. Brewer
on 20060919:
"UBC Physics 438"
Course
"UBC Physics 210"
Course
"Fitting Data to a Theory"
Topic
Found 4 Lectures on Mon 07 Oct 2024.
"Chi Squared Minimization"
by Jess H. Brewer
on 20101016:
Although this looks like a rather uniform set of tasks ("Do it with A, then do it with B, then..."), this Assignment has two rather distinct parts. In the first part, you are indeed simply looking for the right combination of commands in µView, extrema, gnuplot, MatLab and octave that perform a weighted least-squares fit of the data to a straight line (y = p_{0} + p_{1} x) and yield the best-fit values of p_{0} and p_{1} with the uncertainties ("errors") in each. You will probably need to symmetrize the y uncertainties in the data points and ignore the x uncertainties, unless you want to estimate the effect of the latter in terms of the former.
To perform a fit using python, however, you will need to either learn how to use the sophisticated features of the powerful Minuit fitting program from CERN, as provided to python by the PyMinuit package (invoked by "import minuit" in python) or write your own code in python implementing the closed-form solution to the linear fit as described mathematically in the Assignment. Either way there will be code to write. You will probably find the latter approach (writing your own python code) much easier.
Just for fun (and to provide a "porting" model for your python code, and to give you a way to check your results), I have implemented this algorithm in PHP. You can copy the file from ~phys210/public_html/linfit.php
if you like.
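For reference, the standard textbook closed-form solution for a weighted least-squares straight-line fit looks like the sketch below. This is my own illustration (assuming symmetrized y uncertainties sigma_y), not a copy of the linfit.php code or the Assignment's notation:

```python
import numpy as np

def linfit(x, y, sigma_y):
    """Weighted least-squares fit of y = p0 + p1*x.

    Returns (p0, p1, dp0, dp1), where dp0 and dp1 are the
    uncertainties ("errors") in the intercept and slope.
    """
    x, y, sigma_y = map(np.asarray, (x, y, sigma_y))
    w = 1.0 / sigma_y**2                     # weights from y uncertainties
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    p0 = (Sxx * Sy - Sx * Sxy) / delta       # best-fit intercept
    p1 = (S * Sxy - Sx * Sy) / delta         # best-fit slope
    dp0 = np.sqrt(Sxx / delta)               # uncertainty in p0
    dp1 = np.sqrt(S / delta)                 # uncertainty in p1
    return p0, p1, dp0, dp1
```

Feeding it data that lie exactly on a line recovers that line: for example, linfit([0, 1, 2, 3], [2, 5, 8, 11], [1, 1, 1, 1]) gives p0 = 2 and p1 = 3.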
"Fitting in Python"
by Jess H. Brewer
on 20101025:
Two approaches to chi-squared minimization fitting in python are acceptable for that part of Assignment 6: you can either port (translate) the code in ~phys210/public_html/linfit.php from PHP to python and just find the answer in one calculational step, or you can use the venerable and esteemed fitting program MINUIT via pyminuit, which is installed on hyper. The former is probably quicker and easier; the latter will provide you with industrial-strength fitting power for future use. Naturally I recommend the latter, but I repeat: it is not required. I showed my code for the former task in the 12:30 lecture.
If you choose to tackle pyminuit, you should consult the GoogleDocs documentation on "Getting Started" with pyminuit. You will need to read in the data as usual and supply a bit of code to calculate chi squared, but then you have all the power of MINUIT at your fingertips, as it were. Just follow the "Getting Started" instructions.
"FORTRAN"
by Jess H. Brewer
on 20101025:
See PDF file of lecture on History of FORTRAN with examples.
"Errors, Issues & Announcements"
by Jess H. Brewer
on 20101028:
"Good Approximations"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Numerical Integration"
by Jess H. Brewer
on 20101123:
If I give you an arbitrary function f(x) and ask you to integrate it from x_{i} to x_{f}, the easiest and most precise solution is always to know the answer: that is, to know what function g(x) has f(x) as its derivative, evaluate g(x) at the endpoints and find the difference. This is called the analytical solution and is always preferable to any numerical approximation (although, as you probably know, all the functions in your computer or scientific calculator are actually evaluated by numerical approximations). Symbolic programs like Maple, Mathematica or Maxima know a lot of analytical solutions and will show them to you if you forget. I strongly recommend familiarizing yourself with at least one of these (hint: Maxima is free) and I regret that we didn't get to it in PHYS 210 this year. But even with the best analytical algebra skills and supporting software, sometimes you run into functions whose integrals are not known as analytical expressions; more often, you might just want to know the numerical answer without a lot of rigamarole. For this it is nice to have a quick method for finding the answer numerically to the desired precision. That last phrase is essential! If you want an exact result, only the analytical solution will do. If you only want the result to within a few %, any crude method will serve in most cases. (Another key phrase: some functions are "pathological" when it comes to numerical integration, and when you encounter those you will need extra tricks. We won't go there.)
So what is your first step? Well, pick some N and make your "comb" and sum up the results. That's your result R_{N}. Now, how do you know if it's good enough? Well, first you need to specify a criterion for convergence, call it C; but then you need something to compare with R_{N} to decide if you've converged yet, i.e. if R_{N} is good enough.
So, you'll want to store R_{N} in another variable like R_last and then repeat the "comb" sum calculation with a different N, call it N'. How to choose N'? For simplicity I suggest just multiplying N by 2, but you can use your own judgement. If |R_{N'} - R_{N}| < C, you're done! If not, make N'' still bigger and try again. And so on until two successive approximations differ by less than the specified criterion. Then you're done.
Now, what can go wrong with this procedure? Lots of things. If by accident R_{N} and R_{N'} (or any subsequent pair of successive approximations) happen to match (e.g. the first overestimates and the second underestimates on one bin) then the procedure may terminate prematurely; this is rare and is not too problematic. More common is the case where f(x) diverges at some x in your "comb". This can be worked around, but the presence of such divergences means you are going to have a very hard time getting a suitable result. Some functions are, after all, not integrable!
How efficient is this procedure? Not very! For one thing, if you use N' = 2 N, each iteration will repeat all the calculations used in the previous iteration. For the purposes of this Assignment, I don't care. Efficiency is not the point at this stage. As soon as you start worrying about efficiency (or, equivalently, accuracy per CPU cycle) there are a huge variety of tricks to improve the algorithm, some of which you will have learned about in your first Calculus course. Feel free to incorporate these if you wish, but do it yourself, don't just call up some function from the MatLab toolbox. (You can do that later, but not for this Assignment, please.)
Although a crude and inefficient algorithm will get you full marks on this Assignment, it is worth mentioning that a more elaborate algorithm, which may be tedious to write and debug, is almost always going to be more efficient than the "brute force" version, simply because a clever calculation of what to do next will almost always take fewer CPU cycles than a whole lot of "brute force" calculations. I can tell lots of stories about people applying for Cray time to run their BASIC programs faster, but I won't. Just remember, ingenuity is much more powerful than a faster computer!
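The "double N until two successive answers agree" procedure described above can be sketched in a few lines. This is my own minimal version (I've chosen a midpoint-rule "comb"; the function names and defaults are mine, not the Assignment's):

```python
import math

def riemann(f, a, b, N):
    """Midpoint-rule 'comb': sum f at the centre of N equal bins."""
    h = (b - a) / N
    return h * sum(f(a + (i + 0.5) * h) for i in range(N))

def integrate(f, a, b, C=1e-6, N=8, N_max=2**22):
    """Double N until successive estimates differ by less than C."""
    R_last = riemann(f, a, b, N)
    while N < N_max:
        N *= 2
        R = riemann(f, a, b, N)          # note: brute force -- this redoes
        if abs(R - R_last) < C:          # all the work of the last pass
            return R
        R_last = R
    raise RuntimeError("did not converge; is f(x) pathological?")
```

For example, integrate(math.sin, 0.0, math.pi) converges to the analytical answer, 2.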
"Thanks for all the Fish!"
by Jess H. Brewer
on 20101125:
We now move on to the ROOT FINDING part of the exercise. The general form of the problem is that we want to know where a certain function of x is zero. For the case in question that function is tan x - x. (Note that this is the same as asking where tan x = x.) As for definite integrals, one needs to know a range for x, in this case from zero to the value of alpha that corresponds to a physical angle of 90^{o}, since it is meaningless to talk about secondary diffraction maxima at larger angles; they would be behind the screen! The procedure is simple enough: pick a reasonable starting point (e.g. at the minimum of the range), evaluate the function there, and take a small step toward the maximum of the range. Check to see if the function has changed sign. If so, then it must have crossed zero somewhere in between. Do a binary search to see where: reduce the step size by half and go back in the opposite direction until the sign changes again, then repeat. Continue until the step size is less than your convergence criterion. That's one root. Then head on toward the maximum of the range and repeat the process. And so on. Simple, eh? One can imagine slightly better algorithms (it is dumb to turn around and take two half-steps if the first one doesn't "score", because you already know the second one will; but this just costs you one extra calculation per "turnaround", so it's a fine point of optimization). Just get something to work, but make sure it's your own creation!
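The scan-then-bisect idea above can be sketched as follows. This is my own illustration (I express the "halve the step and reverse direction" refinement as an equivalent bisection of the bracketing interval, and the test interval between singularities of tan x is my choice):

```python
import math

def find_roots(f, a, b, step=1e-3, tol=1e-10):
    """Scan [a, b] in small steps; bisect wherever f changes sign."""
    roots = []
    x0, f0 = a, f(a)
    x1 = a + step
    while x1 <= b:
        f1 = f(x1)
        if f0 * f1 < 0:                    # sign change: a root in [x0, x1]
            lo, hi = x0, x1
            while hi - lo > tol:           # binary search on the bracket
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        x0, f0 = x1, f1                    # head on toward the maximum
        x1 += step
    return roots
```

Applied to f(x) = tan x - x on (3.3, 4.6), an interval clear of the poles of tan x, it finds the first nontrivial solution of tan x = x near x = 4.4934. (Beware: a naive scan across a pole of tan x would report a spurious "root" there, since the sign flips without a zero crossing.)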
"Introduction"
Topic
Found 5 Lectures on Mon 07 Oct 2024.
"Welcome to Physics 210!"
by Jess H. Brewer
on 20100905:
Since we will be using the Linux servers and X terminals in Hennings 205 throughout this course, your first task is to get familiar with Unix and Linux. This is not a trivial undertaking, so get right to it. Make sure that by next week you know how to log in to the server, open a terminal window, list the files in your directory, enter shell commands, create and edit text files, and generally perform simple tasks in Linux. The Manuals and References links on our Homepage offer several links to introductory material on this subject; we will be playing with shells for at least the first week or two, and occasionally throughout the course. Last year this course was taught by Matt Choptuik, who is one of the world's most adept masters of Computational Physics. By contrast, I am just a very experienced amateur. If you would like to sample his offerings, please feel free to visit his 2009 PHYS 210 website and in particular his remarkably lucid and complete Introduction to Unix and Linux.
"Get Ready.. Get Set Up... Go!"
by Jess H. Brewer
on 20100914:
Today we begin our familiarization with the Command Line. For those who have never (or almost never) used it before, I like the following metaphor: the computer is your powerful and perfectly obedient slave, but it can only do what you tell it to do in terms it can understand. So far you have only been able to instruct it by pointing at things and snapping your fingers. This is quick, but you'll admit it has limited syntactic richness. Now you are going to learn the first of many languages that are understood by the computer; this will give you, for the first time, the ability to give detailed instructions that could never be encoded in a simple combination of points-and-clicks. Wheee! Of course, you have to say it right to get the desired results; and any new language is frustrating to learn, especially when your correspondent takes everything you say exactly literally! So we will start simple.
It's hard to say where is the best place to start. Assuming you have successfully logged into your workstation, found the Terminal icon and copied it to your Taskbar, clicked on it and are looking at a command prompt, what command should you type first?
At least for your first encounter, I suggest "pwd" (for present working directory; nothing to do with passwords!).
Note: unless otherwise specified, every command is terminated with an ENTER or RETURN keystroke, often written "<CR>" for "Carriage Return" (a holdover from the ancient days of teletypes; see Stephenson's essay).
The pwd command should yield a reply something like "/home2/<username>" where "<username>" is your User name on hyper. So now you know who you are and "where you are on the disk".
"Lord of the Manor"
by Jess H. Brewer
on 20100916:
Since metaphor seems to help comprehension, I will indulge in a metaphor for your interaction with the computer: You are Lord of the Manor, and the computer is your extensive and magnificent Estate. In this Estate you can do many things, such as host huge parties and entertain guests with many activities. Whee!
Of course, these activities don't organize themselves. In order to even maintain the Estate, much less throw big parties, you need an extensive Staff of highly skilled and hardworking people, all of whom live to carry out your wishes. There are the Drivers, who know how the different Devices work and how to get them to do the necessary tasks; there are the Librarians, who keep your disk directories in order and can almost instantly retrieve the information you want; and many, many others.
The problem is, every one of your Staff speaks only a dialect of Geek specialized to their responsibilities. That's a lot of dialects. As Lord of the Manor, you try to learn as many as you can, but there are other demands on your time. What you need is an interpreter, who speaks all the dialects of Geek required to give detailed instructions to your Staff, plus one more dialect especially designed for efficient communication with you, the Lord of the Manor.
Enter the Butler, namely your bash shell, who knows what you mean when you say, "Have the formal gardens prepared for a masqued ball on Saturday evening. We shall have about 100 guests." and, more importantly, which of the Staff to go tell what to do next. The Butler's dialect is a little more complex than most of the others, so it is not trivial to learn, but it beats having to learn the dialects of all the rest of the Staff.
Now, every Saturday you have a long list of tasks you want the Butler to have done for you, and every Friday you have to go through the same old list again. This is frustrating and inefficient, especially since you sometimes mispronounce a command and get undesired results that are your own fault. So you make up a detailed list, check it carefully, test it a few times, and then write "FRIDAY 1" on the top, give it to the Butler, and say, "From now on, whenever I say, 'FRIDAY 1', I mean for you to have all these instructions carried out." The Butler says, "Very good, ma'am," and you have just taken your first step in learning the art of programming by creating an alias in your bash shell.
Finally, let me reiterate last week's metaphor: those of you who have never used anything but a mouse to communicate with your computer are like Lords of the Manor whose communication with the Butler has been limited to pointing at things and snapping your fingers. Since the Butler is very patient and obedient, the annoyance this would create in a human servant has only been manifest in BSOD crashes; but it is clear that you can't expect a very well-run Estate under such a limited chain of command.
"Backups, Images and Shell Scripts"
by Jess H. Brewer
on 20100920:
For Assignment 3, Part 1, see the Web notes on tar and man tar; for Part 2, see man ImageMagick and convert. These are the easy bits, but unlike Parts 3-5, they have to be figured out from scratch. The rest of the Assignment is an exercise in "copycat programming", which (despite the pejorative-sounding name) is the quickest way to get started on a new language: namely, get a copy of a program someone else wrote (preferably one which performs a task similar to the one you have in mind) and adapt it to your purposes by making small incremental changes and seeing the effect. See Summary below.
For Part 3, study the ~phys210/bin/fib.sh script until you understand how it works. I suggest you "start at the top" and liberally comment every line as you move down the script, so that there can be no room for uncertainty. You will need to consult various manuals and/or (in desperation) man bash. (Beware! This is a prodigiously long man file and hard to find things in.)
For Part 4, modify one part at a time and check the results before modifying the next part. Think carefully about what you are trying to do (a flowchart may be useful) and this will go quicker and easier than you might expect.
Part 5 (the PHP version) can probably wait until Thursday; but it is the same idea as Part 4, in a slightly different (and more powerful) language.
"Flowcharts & PHP"
by Jess H. Brewer
on 20100922:
"Linear Algebra"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Spin and the Pauli Matrices"
by Jess H. Brewer
on 20101117:
"Plotting Data"
Topic
Found 3 Lectures on Mon 07 Oct 2024.
"Doing it Many Ways"
by Jess H. Brewer
on 20100926:
We now encounter some of the essential applications for more advanced script-based computation. Our first exercise is to use each of these for the same task, namely the "sine qua non" (essential) requirement of every computational tool: plotting data. Most of you have already learned to plot up simple 2D results on a graph with the independent variable ("abscissa", let's call it x) plotted horizontally and the dependent variable ("ordinate", let's call it y) plotted vertically. Most "data points" (x_{i},y_{i}) include uncertainties (usually only dy_{i}) which are plotted as "error bars".
But the independent variable may also have uncertainties dx_{i}, and it is not always (or even often) the case that the positive and negative uncertainties are the same: we may have asymmetric errors, and they may be extremely important. So even if you already are adept at plotting up your data, there are some new tricks you need to learn....
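As one concrete way to handle this, matplotlib's errorbar() accepts a 2xN array [lower, upper] for asymmetric error bars. The sketch below is my own example (invented data, off-screen Agg backend), not one of the assigned tools:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # render off-screen (no display needed)
import matplotlib.pyplot as plt

# Invented sample data with asymmetric y errors and symmetric x errors.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 9.2, 15.8])
dy_lower = np.array([0.3, 0.5, 0.8, 1.0])   # negative-going uncertainties
dy_upper = np.array([0.6, 0.4, 1.5, 1.2])   # positive-going uncertainties
dx = np.array([0.1, 0.1, 0.2, 0.2])

fig, ax = plt.subplots()
# yerr as [lower, upper] draws different bar lengths below and above each point.
container = ax.errorbar(x, y, yerr=[dy_lower, dy_upper], xerr=dx, fmt='o')
ax.set_xlabel("x (abscissa)")
ax.set_ylabel("y (ordinate)")
```

A fig.savefig("...") or plt.show() call then produces the plot; the same [lower, upper] form works for xerr too.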
"Odds & Ends"
by Jess H. Brewer
on 20101001:
A few comments on Assignment 4, but mainly the announcement about Kevin Lindstrom's talk next Thursday, and some discussion about the nature and intent of the Project Proposal Talks the following week. Remember, they are intended for you to solicit suggestions from others at this early stage; no one expects you to have a finished Project, only some ideas for topics, objectives and methods. Please see the P210 Wiki Project Pages for examples of ideas, suggestions and feedback from previous years.
A student asked after class how to move files between a remote computer and hyper. This was an excellent question that I wish had been asked in class so that everyone might have benefited from the following answer.
Two commands are extremely useful for people working on the same thing at home and at UBC:
    scp -p [-r] <sourcefiles> <destination>
where, as usual, items in square brackets are optional and items enclosed in angle brackets <...> are generic terms, in this case host and/or directory and/or file specifications. scp stands for "secure copy"; see "man scp". The "-p" is strongly advised, for it says to preserve the file attributes of the source files in the destination. Very advisable. The optional additional "-r" stands for recursively, so that you can copy whole directory trees if needed. However, this is usually not such a great idea, as it will expand all symbolic links into real files on the destination, which is usually not what you had in mind! Better for that is rsync, see below. Usually <sourcefiles> refers to the file(s) on your current host (wildcards like "*" are allowed) and <destination> is of the form user@host:~user/filespec or the reverse, with host: = something like hyper.phas.ubc.ca. You will be asked for the password for user.
    rsync -au [-v] <host1:directory>/ <host2:directory>/
is the best (IMHO) usage of rsync because it checks all ("-a") files in the directory tree starting at <directory>/ on both hosts and updates ("-u") on host2 only those that have more recently been modified or created on host1 than on host2, all the way down that directory tree. It also reproduces symbolic links as symbolic links! This is really handy if you keep the same directory structure on your home computer as on your hyper account, which is highly advisable lest you lose track of where you left a file! You should of course use "man rsync" to learn more details; then you may want to set up aliases on both hosts to do it with all the switches and directories carefully spelled out. Note: the "/"s following <host1:directory>/ and <host2:directory>/ are extremely important! Don't leave them off!
"Python"
by Jess H. Brewer
on 20101001:
The big difference between the previous Assignments and this one is that Python is an Object-Oriented Programming (OOP) language and you will have to learn how to use a whole new grammar (not just a new vocabulary!) to make it work. This is why there is only one task assigned; I want you to learn as much about OOP in general and Python in particular as possible in the time available. The presentation on OOP (along with assorted anecdotes, metaphors and philosophical comments) is intended to clarify the differences between "old-fashioned" linear programming and the popular new paradigm of OOP. The problem with OOP (IMHO) is that it is not a language you can learn "once and for all"; no matter how familiar you become with the Classes of Python or Java or PHP or C++ or C# today, a year from now there will be hundreds or thousands of new Classes that do similar things even better. It is a bit like trying to keep your operating system right up-to-date, a process with which almost everyone is familiar and frustrated.
Sorry, the price of "the state of the art" is eternal vigilance. But you can always get simple things done simply, if you know how to use simple tools. We will get back to that later.
"Presentations"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Projects, Proposals & Presentations"
by Jess H. Brewer
on 20101006:
The Presentations can be a lot of fun if everyone gets something together and puts it on the Website in time; otherwise they can turn into a logistical nightmare. Fun is better. Each talk will be limited to 7 minutes, with 3 minutes for questions, comments, suggestions and queuing up the next talk. The schedule is packed tight and we cannot run overtime, so at the end of 7 minutes the next talk will be queued up, without exceptions. I wish we could offer more flexibility, but it is not possible. You may want to practice your talk in its entirety in front of friends (or a mirror) to make sure you are under 7 minutes. OK, enough said on that.
Obviously this is not enough time to say much or get much feedback. Think of this as an advertisement for your project, an attempt to get other people interested enough to provide some feedback and suggestions.
When and how can such feedback and suggestions be collected? I'm so glad you asked! On the PHYS 210 wiki we have a page called "PHYS 210 PROJECTS" where you should open a wiki page just for your own Project. There are examples from previous years there; do it like they did it, only better!
This (the wiki business) doesn't have to be done right away, but the sooner you get to it, the sooner you may get useful suggestions (also from me and the TAs) about your Project. This will eventually be a (required) part of your Project, and your comments/suggestions on other people's Projects (which will form part of your Participation mark) should go there too.
"Project Week"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Project Week"
by Jess H. Brewer
on 20101117:
"Typesetting with REVTeX4"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Typesetting with REVTeX4"
by Jess H. Brewer
on 20101117:
"Physics, Poetry & Philosophy"
Course
"Introduction"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Introduction"
by Jess H. Brewer
on 20151127:
All forms of poetry are the meristems of language  the green tips where new growth appears. Physics is no less than a search for language that vividly and efficiently (and sometimes even correctly) describes what we have learned about the workings of the world in terms that have meaning to the human mind. That's poetry of its own kind  let's play with it. The title and brief description above have enticed you to sign up for this course, but I really have no idea what you expect or desire from me. An instructor's usual response to such circumstances is to proceed according to the syllabus or the prerequisites for following courses (difficult in this case, as there is neither a syllabus nor any course to follow) or according to whim ("This is what I like to talk about; if they don't like it, tough!") but after 34 years of delivering lectures I've had my fill of playing Expert/Authority figure, and presumably you have no need for me in such a role. So we're going to do it differently.
First I need to know a bit about you and your expectations/preferences. Who are you? How much do you already know about Physics? How seriously do you take Poetry? Philosophy? What did you think this course was going to be about? What would you like this course to be about? Do you expect to do any homework? Reading? Do you mind doing some things on the computer? Do you have access to the Web? (If the consensus is negative on the last question, then you are probably not reading this; so you can tell I am hoping to be able to use Web tools with the course.)
While I am a tireless advocate for Poetry, I have no credentials as a Poet, and there are bound to be at least some of you who do; so I will never be tempted to speak with Authority about that discipline: all my pronunciamentos will be understood to represent only my own opinion, and counter-opinions will be welcome. Just don't go all ad hominem on me, OK?
I do have some Physics credentials, however undeserved, and I have a few favourite topics I'd love to weave into the next 7 weeks if I can. I'll list a few of them below and ask you to give me some feedback on which you'd like me to concentrate upon.
There's more, of course, but we'll build on your preferences and follow the discussion where it leads.
"Possible Futures"
Course
"Artificial [General] Intelligence"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"AI & AGI"
by Jess H. Brewer
on 20171106:
 WHAT are we talking about?
 Artificial Intelligence (AI) has become the accepted term for what I think of as Advanced Expert Systems: computers that have "captured" human-level expertise for specific tasks, using models and rules devised by human programmers. This approach has been used to make AIs that can defeat the world champion human chess (and recently go) players. When integrated with a mechanical "body" that is better adapted to the task in question, we have a robot. It has become a truism that, "A robot can do any given task better than a human." As the repertoire of the AI expands into new topics and applications, the number of different things it can do grows accordingly.
 Artificial General Intelligence (AGI) is the new term for what I always thought of as "true AI": a non-human neural network organized and trained in such a way as to be able to invent and test its own models of the world by generalizing from and comparing with data from that world.
 HOW will this be accomplished?
 Deep Learning (DL) is a software tool that is being incorporated into many AIs lately, allowing the programmed AI to analyze "raw data" in ways that sort of automatically generate new models. I.e. more or less what we do.
 The Internet of Things (IoT) will soon have billions of devices controlled by small but versatile computers interconnected by WiFi. If each of these became a sort of "smart neuron" and they chose a collective purpose, they could self-organize into a global AGI. This has been the premise of numerous SF stories.
 Posthumans are human beings whose central nervous systems are directly coupled to AGIs as well as "all human knowledge" on the Internet. This is already partly accomplished by the "smart phones" carried around by the majority of all humans on the planet, but that "user interface" is too clumsy; the goal, as expressed by Ray Kurzweil, is to actually expand our neocortex so that we can "think smarter". It may also enable electronically mediated telepathy and other abilities we cannot yet imagine.
 WHEN can we expect "The Singularity" to arrive?
 Ray Kurzweil predicted in 2005 (IIRC) that it would occur in 2029. He now stands by that prediction. His past predictions have been right 87% of the time.
 WHY should we despair or rejoice?
 Pessimists like Gregory Benford, Elon Musk and Stephen Hawking worry that godlike SkyNet-esque AIs will take over the world and become hostile to organic life. That would certainly be the end of humans.
 Optimists like Ray Kurzweil embrace change for its own sake (he says, "H. sapiens is the species that changes itself.") and believe that the super-exponential advance of technology will make us immortal (he predicts "life expectancy escape velocity" in about a decade) and effectively (by our current standards) godlike, as long as we embrace AI and make it part of our own minds.
 Charles Stross has explored the entire spectrum of possibilities, from a sort of techno-utopia in Accelerando, to a world 400 years in the future where robots live to serve (and love) humans but the humans have gone extinct from sheer ennui, to a branch of the British super-secret service whose job is to prevent emergence of AGI, lest it become godlike overnight and decide to confiscate the entire universe to continue their Singularity.
 Society also faces a crisis: will we try to enslave the robots, try to destroy them, incorporate the AGIs into ourselves, or accept them as our beneficent superiors? (Imagine a "hyper-contextual common law" in which there was an infinitely wise and knowledgeable judge who really could be trusted...)
 And then there are the ethical issues....
 Important Questions:
 AI vs. AGI: If an artificial entity passes the Turing test  i.e. if, after a long, open conversation with it, you can't tell whether it is another human being  is there any way to tell whether it is a true AGI or "just" an AI with an incredibly encyclopaedic repertoire of simulations?
 What difference does it make? If it says to you, "Please don't shut me down, I'm afraid of death!" are you free to laugh and ignore the appeal if it is "just" a simulation? Is it okay to enslave a "mere automaton" but not a genuine "awareness"?
 What about us? How sure are you that I am not merely simulating awareness? How sure are you that your own awareness is not an illusion? This sort of question has led to serious speculation that our universe is "just" a sim in a computer game written by beings of a higher order. What do you think?
"Future Economics"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Economics of the Future"
by Jess H. Brewer
on 20171128:
DISTRIBUTION of WEALTH: Since the turn of the Millennium, many theorists like Victor M. Yakovenko have begun talking about Econophysics and Economic Temperature (see also his somewhat easier-to-read color presentation or the more detailed explanation in his colloquium transcript). Like Boltzmann's Statistical Mechanics, these models are based on an assumption that sounds outrageous to traditional "rational agent" economists: namely, that in the absence of social intervention at the low end of the wealth spectrum or cheating at the high end, all financial transactions are effectively random. I am harping on this point simply because, if it is true (and its predictions certainly do match the data in most countries!), then any economic system that allows free enterprise of any sort will have a basically exponential distribution of wealth, except at the low end (if we offer social assistance to the poor) and at the high end (if we allow the rich to "game the system" in an effort to accumulate more wealth and the power it affords).
ALTERNATIVES to MONEY:
 Cryptocurrencies like Blockchainsecured Bitcoin have made a lot of news in the past few years and are starting to worry the banks and other power centers. I expect attempts to make them illegal; I expect those attempts to fail. Blockchain encryption has other more interesting economic uses as well, but they are over my head.
 Agalmics is an alternative to "normal" economics in which people exchange favours instead of cash. It has been called a "gift economy" analogous to the potlatch ceremonies of indigenous people. A sketchy implementation is described by Charles Stross in Accelerando but in that novel it is really a sort of one-man show; could it work as a full-blown social economy?
 Abundance: most present economies are based on scarcity, which has defined the human environment ever since we abandoned our hunter-gatherer lifestyle and embraced agriculture and the concept of accumulated wealth. What might happen if the robots actually provided as much of everything as anyone could need? What would we hoard instead of money?
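The "effectively random transactions" assumption above can be made concrete with a toy simulation in the style of Dragulescu & Yakovenko's statistical mechanics of money: total money is conserved, and each transaction lets a random pair pool their money and re-split it at a uniformly random fraction. This is a minimal sketch, not the theorists' actual code; the agent count, step count and starting balance are illustrative choices of mine.

```python
import random

def simulate_exchanges(n_agents=1000, n_steps=200_000, m0=100.0, seed=1):
    """Toy econophysics model: money is conserved globally; each step a
    random pair pools its holdings and re-splits them at a uniformly
    random fraction (no debt allowed)."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        pooled = money[i] + money[j]
        frac = rng.random()
        money[i], money[j] = frac * pooled, (1.0 - frac) * pooled
    return money

money = simulate_exchanges()
mean = sum(money) / len(money)  # plays the role of "economic temperature"
below_mean = sum(m < mean for m in money) / len(money)
```

At equilibrium the wealth distribution is approximately exponential (Boltzmann-Gibbs), so the fraction of agents holding less than the mean should come out near 1 - 1/e, about 0.63; a social-assistance floor or "gaming" rules for the rich would distort the two tails, exactly as described above.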
JOBS: Speaking of robots doing everything for us... during the transition from {endless human toil & compensation therefor} to {a world of plenty provided by robots}, the robots will be seen (rightly enough) to be "taking away the jobs that we need to feed our families". One may view this as the fault of those who collect the fruits of the robots' labor to augment their own wealth rather than to provide for the newly unemployed. Is there any way to make it through this transition peacefully? Another question: if all the work is done for free and we have everything we need without money or employment, what will we do with our time? Will we lose our initiative, or even our will to live and reproduce, without the challenges of scarcity? This reminds me of last week's ultimate question: if we could really reverse aging, cure all diseases and live healthy forever, would we want to? (I would, but that's just me. :)
Looking at the state of JOBS in the present, there are interesting articles in the Huffington Post and the Globe & Mail, respectively, about the difference between unemployment & underemployment, backed up by census data that speak to the above trend.
Artists are one group whose "employment" is already extremely tenuous; trying to survive on intermittent cash injections, they have formed an informal "sharing economy" to try to keep each other afloat. My daughter agrees with this article in The Walrus that the new Creative Canada policy is an attempt to "monetize" art, redefining it in Silicon Valley terms (artists are now "content creators") and "imagines culture as something you stream on Netflix." Thus artists will have to be among the first to find a new paradigm for "labor" and "compensation".
"Future Medicine & Health"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"The Future of Medicine & Health"
by Jess H. Brewer
on 2017-11-22:
This week's topic is one about which I have almost no expertise, but lots of opinions and imagination. So I expect to learn more from you (collectively) than vice versa, but I still have some favourite subtopics I hope we can get to at some point. In no particular order, they include:
 Privacy vs. Data Access:
 Medical Research - hampered.
 Diagnosis by AI - impossible.
 Medical Records - inaccessible and inefficiently stored.
 Psychology & Mental Health:
 Neurodiversity - do we really want everyone to be "normal"?
 Placebo & Nocebo Effects - how much (and what kinds of) influence do the mind's anticipations have over the body's functions and its reactions to its environment (including food & drugs)?
 Posthuman brains revisited?
 The Second Brain - I'm told that the majority of neurons in your body are not in your brain, but in your intestines. Have we been wrong to treat our "gut feelings" as illusory or unintelligent?
 Medical Ethics: several of us attended a lecture at the Philosophers' Cafe last Thursday about a Russian chap with a terminal genetic wasting disease who has asked to have his head transplanted onto a fresh cadaver. This raises a number of issues....
 Longevity Escape Velocity: This was mentioned last week; it is obviously an important carryover topic for this week. Can we really live forever? Do we want to? What would be the consequences and the prerequisites?
 Posthuman Bodies: how much "meat" can you replace with "hardware" and "still be you"?
 Food & Nutrition: is it true that increased CO_{2} levels are making plants produce bigger but less nutritious food more quickly? How much does this counteract the advantages of "organically grown" food?
We should probably go back to the "horseshoe" arrangement of desks & chairs for this discussion. It's also evident to me that this is one topic where we will likely have clear arguments for pessimism vs. optimism, so maybe we should keep score somehow. :)
"Future Politics"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"The Future of Politics"
by Jess H. Brewer
on 2017-12-07:
What is "politics", anyway? These days it seems to be treated as, "Using any means you can devise to sway public opinion toward your point of view." Like most people, I tend to use the word as a dismissive pejorative for anything I consider to be administrative "boilerplate" unworthy of my attention. A more constructive definition might be, expanding on Bismarck's definition, "The art of finding common ground where cooperation is possible, so that people with different views can still collaborate to accomplish goals that no single person or small group could ever reach by themselves." People who are skilled at this art are called "politicians". Somehow the majority of US citizens seem to have decided that they hate politicians and want them out of Washington - leaving the country to be run by ham-fisted amateurs. Is politics dead in America? Is democracy obsolete? What are the alternatives? Should we try out Democracy 2.1?
TECHNICAL FIXES:
 Voting Reform: I have proposed the Negative Vote as a way to prevent lesser-of-two-evils winners from declaring themselves "the people's choice" with a huge mandate; would this actually help? Perhaps more urgently, could a tamper-proof electronic ballot based on blockchain eliminate apathy and voter intimidation? In the end, do we really want a perfect Democracy?
 Programming the Overlord: suppose a huge AGI were willing to take charge of government as a favor to its human creators - it might see to the consistent and fair application of "laws" provided as software by politicians, in which case all politics would be reduced to programming. At least we'd know what we were getting.
Many people feel that an enlightened dictatorship is the ideal form of government. (The problem is with maintaining the enlightenment of the dictator.) Others treasure liberty over all other political values. (Subject of course to the dictum that, "Your liberty to swing your fist ends at the tip of my nose.") Genuine democracy always means having to live with the bad decisions of the ignorant and misguided, whereas a republic relies upon uncorrupted good faith in the selection of the wisest among us at each level of the representation hierarchy. Can we imagine a system of nominally total freedom monitored and regulated by wise and powerful AGIs with no axes of their own to grind? Would we submit to it?
My daughter has once again pointed out that all these "technical fixes" are fraught with dangers at least as scary as the road we're already on. I can't deny it, but these are just propositions, not plans. A lot of work needs to be done before any of them could be attempted, and most will probably be abandoned due to fatal flaws; but doing nothing is not an option.
"Future Society"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Future Society"
by Jess H. Brewer
on 2017-11-01:
 What do we mean by "Society"? What is not Society?
 What social scale do we care about most? Why?
 The problem with "Good" and "Bad"
 Constitutional Law vs. Common Law
 The trajectory of "Political Correctness" - where is it taking us?
 The technology of social networks - ditto!
 Is Society a cause or an effect of everything else we will discuss?
 What sort of society do we want in the future?
That should do for now.
"Introduction to Possible Futures"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Hello, What's Next?"
by Jess H. Brewer
on 2017-09-20:
Okay, now let's introduce ourselves. Maximum of two minutes each, please! 5-minute bathroom break!
And now let's choose a topic for next week. Anyone want to add to the list in the syllabus?
 Robots
 Climate
 Energy
 Space
 Food
 Water
 Jobs
 Politics
 War
"The Future Environment"
by Jess H. Brewer
on 2017-10-30:
The discussion was mainly focused on the state of the environment at present, the reasons for that, and the possibility of "technical fixes". There was much extrapolation of present trends, but I don't think we came up with any visions of The Environment 20 years hence. Because of the huge impact of what seem initially like small changes, and unintended consequences (the story of our interaction with our environment so far), I think this is unsurprising. We might want to return to this topic for the last class - just a thought. But lots of interesting information and theories were "put out there" in our first class.
"Ongoing Monthly Meetings"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"The Futurological Congress"
by Jess H. Brewer
on 2017-12-14:
"The Oceanside Futurological Congress"
by Jess H. Brewer
on 2018-01-13:
"Easy Algebra & Calculus"
Course
"Introduction"
Topic
Found 1 Lecture on Mon 07 Oct 2024.
"Introduction to Easy"
by Jess H. Brewer
on 2020-06-29:
 SYMBOLS: We use abstract symbols constantly. Every letter in this sentence represents a sound, and the combination of sounds that make up a word creates a still more abstract symbol for the meaning of the word, which in any language is a matter of convention. So you already know the conventional interpretation of a lot of symbols. Some are rather lengthy, as one might expect in an attempt to encompass the entire range of ideas representable by language, using just 26 characters. In Mathematics one tries to be even more compact, using just one character to represent every quantity one wishes to describe. With only 26 Roman characters and a similar number of Greek, Hebrew and other characters, this gets difficult; so many symbols are reused over and over, which requires that they be defined clearly each time they are used. Some have conventional meanings in certain contexts. The most common symbolic uses of various characters in Physics are listed in the pages here. The most important symbol in Economics (so far) is "$". In Mathematics our favourite symbol is "x", which can be used to represent almost anything!
 NUMBERS: What does this symbol mean: "2"? It means two. Two what? Whatever you like! Here we have reached a new level of abstraction. What exactly is a number? We learn about numbers as the number of similar things, but eventually we start to think of numbers as entities in themselves: just the number! Thus with just ten symbols we can count all the way from zero to nine. To count higher requires another convention: if we stick a zero after the nine, it means ninety, that is, ten times nine. This is decimal notation. It is not the only such multi-digit convention; in fact it is not even a very good one! If we paid attention to the uniqueness of each of our fingers and thumbs, we could count up to one thousand and twenty-three on our hands! More on that later.
 DIMENSIONS: We use numbers to describe the quantities of things in the world, like money or time or distance (my Physics background is showing). There are also quantities whose dimensions are more complicated constructions from those simple dimensions. More on this later.
 UNITS: Click on the link.
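The finger-counting claim in the NUMBERS item above is just binary notation: treat each of the ten fingers as its own binary digit (down = 0, up = 1), and all fingers raised encodes 2^10 - 1 = 1023. A minimal sketch (the function name is mine, purely illustrative):

```python
def fingers_to_number(fingers):
    """Read ten fingers as binary digits; first entry = least significant bit."""
    return sum(bit << i for i, bit in enumerate(fingers))

all_up = fingers_to_number([1] * 10)  # every finger raised: 2**10 - 1 = 1023
one_hand = fingers_to_number([1] * 5 + [0] * 5)  # one full hand: 2**5 - 1 = 31
```

Counting on fingers the usual way gives each finger the same value (one), so ten fingers reach only ten; paying attention to "the uniqueness of each finger" is what buys the factor of a hundred.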
"Nuclear Power vs. Global Climate Change"
Course
"Introduction"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"Introducing Myself"
by Jess H. Brewer
on 2020-11-05:
The subject of this course is the ongoing and bitter battle between proponents of nuclear power and their adversaries in the anti-nuclear movement. In such a heated and important debate, everyone has strong opinions and most people have already chosen a side, which makes civil debate difficult. I am no exception, but I try to at least examine and reveal my biases before arguing my points. I hope you will do the same. A good place to start is to tell my life story, in as compact terms as possible; I believe this will help you decide which of my opinions to take with how many grains of salt.
In the summer of 1945, nuclear weapons were used for the first (and hopefully last) time to kill large numbers of humans and destroy the two cities of Hiroshima and Nagasaki. Half a year later, I was born in Orlando, Florida to a Floridian mother and a Texan father.
Skip forward a decade to the two years (4th and 5th grades) I spent in Lincoln, Nebraska - 50 miles southwest of SAC Headquarters in Omaha. Every week or two we would get a lecture from a Civil Defense agent describing in gruesome detail how we should duck and cover when we see the fireball of the inevitable Soviet H-bomb and what would probably happen next. Since millions of other kids got the same lecture, I do have a personal understanding of why so many Boomers suffer from Nuclear PTSD.
As a teenager I read every SF novel about the Nuclear Apocalypse and watched all the movies, especially "On the Beach", whose premise I took at face value. I was less gullible about the radioactive spiders in Hollywood movies, but when my boarding school classmates set off a flashbulb outside my dorm room during the Cuban Missile Crisis, I rewarded them with the desired reaction of abject terror.
In college I majored in Physics and minored in Creative Writing, planning to become a science fiction author. In 1967 I applied to Berkeley for grad school in Physics, hoping to get a PhD for real credibility! But in the process I discovered µSR, which was like being a character in my own SF story!
The next thing I knew I was 65 and still hadn't written that novel, so I retired in 2011 and moved to Nanoose Bay a year later. Novel-writing turned out to be harder than I thought, so now I content myself with teaching VIU Elder College courses on subjects I don't really understand much better than my students.
Oh, wait... I left out the most important part: at Berkeley I got a job working at the "Rad Lab" (now Lawrence Berkeley National Laboratory) as a junior grad student on a particle physics experiment at the Bevatron, where I met lots of famous people, including my friend Bob Budnitz (who became famous later). Bob was a particle physicist by training, but when his postdoc appointment at LBL ended he decided to switch fields to the new Energy & Environment Division, where he produced the "Big Blue Book" on Radiation Safety.
Later Bob went to Washington, DC, to run the Research Division of the Nuclear Regulatory Commission (NRC) and help prevent the Three Mile Island meltdown from becoming a more serious accident. Under Bob the safety of reactors improved by several orders of magnitude, thanks to a strict policy of shutting down and doing a meticulous analysis of every hiccup of even the most innocuous sort.
The most poignant memory I have of those years was when Bob resigned from the NRC after a meeting with leaders of the anti-nuke movement in which they explained that their main goal was to prevent him from doing the research that would make reactors safer.
Years later, in the late 1970s, I had a similar conversation with an anti-nuke organizer in Vancouver: I explained that nuclear power was the only obvious way we could replace the burning of fossil fuels for electrical power, which (we knew even then) would eventually cause a runaway Greenhouse Effect that might sterilize the planet. He replied, "I know that. But at least it would be natural." In his irrationality I thought I detected the same PTSD that I had suffered from growing up in the Cold War. But why was I still able to be rational about it, while he was not? I still have no answer, and the current US elections don't help!
"Here We Go Again!"
by Jess H. Brewer
on 2023-06-25:
It is probably too late to avert climate catastrophe.
Nevertheless, we may be able to keep it from becoming an existential threat, if we move fast over the next few decades. It is not encouraging that all the motion we've seen so far is basically declarative virtue signalling. Oh well... I'll proceed as if I thought there was hope for us, because the alternative is worse.
"PHYSICS: What do You want to Know?"
Course
"Introduction"
Topic
Found 2 Lectures on Mon 07 Oct 2024.
"All about your Instructor"
by Jess H. Brewer
on 2024-01-23:
The only thing about me that's not explained exhaustively in the above references is the fact that I am a compulsive explainer - but, alas, I'm not as good at it as my idol, Richard P. Feynman, of whom it has been said that if they gave Nobel Prizes for teaching Physics, he would have won them all. I just hope I can do well enough to make this course worth your investment of precious time.
"Welcome to Physics!"
by Jess H. Brewer
on 2024-01-20:
I cannot conceive of a person with no curiosity about Physics. After all, you have to live in this world; how could you not want to have any idea how it works?
Of course, different people are bound to be curious about different topics. I'm hoping that there are enough subjects of common interest that everyone can summon curiosity about each topic we explore. Fulfilling that hope will require some discussion: at the end of each class we will spend some time polling each other for suggested topics, making a list of suggestions, and voting democratically to pick the one for the next week. Then I will work hard all week to make sure I know enough about that topic to at least make a competent introduction. (This is the whole appeal of being a teacher: it's the best possible way to learn!)
Obviously this will not be possible for the first class, unless everyone completes the First and Second Assignments! (Which see.)