The course ``Directions in Modern Physics'' is intended to provide an elementary introduction to recent exciting developments in physics. The topics that will be discussed are usually studied after a few years in graduate school, because they require a lot of preliminary training in order to be mastered. For a physicist, mastering a subject means being able to perform calculations according to definite procedures. However, if one relaxes this requirement, that is, if at the end you are not asked to carry through quantitative evaluations, it is possible to communicate the intellectual excitement of new discoveries to a larger audience than that of experienced physicists.
The course will be taught in such a way that it can be followed by students having a very minimal mathematical background. I shall assume that the students are familiar with the following topics: addition and multiplication, solution of a linear equation ax+b=c, coordinates of a vector, angles, the graph of a function (e.g., f(x)=x^2) and powers of 10. Sometimes more advanced notions will be necessary, but they will be introduced during the lecture. The main textbooks used (see syllabus) are based on public lectures given by people who are considered leaders in their field and which are intended for a non-specialist audience.
The course has two parts:
Before discussing each theme in more detail, we briefly explain how they are related historically and give the names of the subfields of physics attached to them.
This part of the course will discuss some aspects of the motion of large objects (e.g., planets) or ordinary-sized objects (e.g., spinning tops, coins, billiard balls, swings). Isaac Newton (1642-1727) discovered that in many circumstances it is possible to relate the change of velocity (speed and direction) of an object to some external influences (called ``forces'') exerted on this object. At the same time he created a new branch of mathematics (calculus) which provided the appropriate language to express this idea. This part of physics is called Newtonian Mechanics.
One important success of classical mechanics is that the
effect of the
gravitational force between the sun and each of the planets can be calculated.
The results of these calculations fit the observations very accurately.
The mathematical aspects of classical mechanics
were fully developed during the 18th and 19th century,
for instance by J. Lagrange (1736-1813) and H. Poincaré (1854-1912).
Classical mechanics provides examples of deterministic rules
of evolution. Many of the concepts used in this context can be used in
more general situations and in other natural sciences:
state of the system and evolution rules (these characterize a Dynamical
System),
equilibrium, (in)stability, bifurcation, cycles, etc.
The study of many dynamical systems has been greatly facilitated
by the possibility of using more and more sophisticated computers.
We are now able to solve easily many problems
that would have seemed hopeless in Poincaré's time.
If we know the rules of evolution, can we predict the outcome
of any experiment? The equations
describing the motion of a coin being tossed have been known
for more than two hundred years (L. Euler, 1707-1783), however
we know that for regular coins, we have almost the same {\it chance}
to end up with heads or tails. But maybe you think that it is just because you
do not know Euler's equations that you cannot get tails at each toss,
so we will consider a simpler example of a
dynamical system with complicated dynamics.
Consider a hypothetical quantity (the ``state of the system'') which after n seconds is given by x(n), a number between -1 and 1. Suppose that, knowing x(n), you can calculate x(n+1) according to the ``rule of evolution'': x(n+1) = 1-2(x(n))^2. This may not describe any natural phenomenon very accurately, but it is easy to implement with a pocket calculator. Try x(0)=0.2 and calculate x(n) for n up to 20, and then repeat the calculation with x(0)=0.2001. You will see that the small change in the initial state has very important effects. This is a simple example of what is often called a chaotic evolution. Nevertheless, as in the case where you are tossing a coin, it is possible to give a probabilistic answer to questions such as: what is the chance that x(20) will be between 0.1 and 0.3?
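If you prefer to let a computer do the iteration, here is a short sketch of the experiment just described (written in Python, which is our choice here and not assumed by the course):

```python
# Iterate the rule x(n+1) = 1 - 2*(x(n))^2 from two nearby starting points.
def iterate(x0, steps=20):
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(1 - 2 * trajectory[-1] ** 2)
    return trajectory

a = iterate(0.2)       # x(0) = 0.2
b = iterate(0.2001)    # x(0) = 0.2001, a change of one part in two thousand
for n in (0, 5, 10, 20):
    print(n, a[n], b[n], abs(a[n] - b[n]))
```

The two trajectories agree for the first few steps and then stop resembling each other, even though the initial states differ by only 0.0001.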
If you have Mathematica 3.0, you can download a Notebook which
will display a more sophisticated example of a chaotic system:
the Lorenz attractor which
you see on the main web page for the course.
Symmetry is an important aspect of several physical theories. Indeed, it seems that whenever one tries to understand natural phenomena at shorter distances, the symmetry increases.
In physics and mathematics, the word symmetry has a definite technical meaning: it means that some object remains unchanged after a certain transformation. For instance, there is no difference between a given square and the same square rotated clockwise by 90 degrees about an axis perpendicular to its surface and passing through its center. We then say that this square is invariant under this rotation, or that this rotation is a symmetry of the square. In addition, if we perform this rotation two consecutive times, we obtain a transformation which could have been obtained by rotating the square once by 180 degrees. We can also ``undo'' clockwise rotations by performing the corresponding counterclockwise rotation. The set of rotations which leave the square invariant forms a group with four elements.
In this special case, it does not matter in which order two transformations are performed. You can work out the symmetry group of the cube. You will find that this group has 24 elements and that in some cases the order in which you perform two transformations matters. This difference between the symmetry group of the square and the symmetry group of the cube is crucial for understanding the difference between the electromagnetic and the strong interactions! (see below).
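One can check both claims by direct computation. In the sketch below (in Python, our choice; the rotations are written as matrices with integer entries, which is standard but not taken from the text), rotations of the square commute, while two 90-degree rotations of the cube about different axes do not:

```python
# Multiply two square matrices given as lists of rows.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The four rotations leaving the square invariant are the powers of R90,
# the rotation by 90 degrees about the center; powers of a single
# transformation always commute.
R90 = [[0, -1], [1, 0]]
R180 = matmul(R90, R90)
print(matmul(R90, R180) == matmul(R180, R90))  # True: order is irrelevant

# For the cube, take 90-degree rotations about the z-axis and the x-axis.
Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]
print(matmul(Rz, Rx) == matmul(Rx, Rz))        # False: order matters
```

The full symmetry group of the cube (24 elements) is larger than what this fragment explores; the two matrices above are just enough to exhibit non-commutativity.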
There exists a deep connection between the symmetries of a physical theory and conservation laws, i.e., the fact that some quantities remain constant during the time evolution. The most familiar example of a conservation law is probably the conservation of energy, which is in fact a consequence of the symmetry under a change of origin in the time variable (i.e., we can choose arbitrarily which instant is called time 0). The connection between these two features is the content of a theorem due to Emmy Noether and is difficult to explain at an elementary level.
Newtonian mechanics does not provide an adequate description of all natural phenomena. For instance, it becomes inadequate when speeds of the order of 10^8 meters per second are reached. To be concrete, if you could go that fast, you could go from Chicago to New York in a time of the order of one hundredth of a second. Such a speed is not small compared to the speed of light, which is 299,792,458 meters per second.
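The order of magnitude is easy to check (the Chicago-New York distance of roughly 1,200 km used below is our own approximate figure, not from the text):

```python
# Time to cover roughly the Chicago-New York distance at 10^8 m/s.
speed = 1e8              # meters per second
distance = 1.2e6         # meters (about 1,200 km, an approximate figure)
print(distance / speed)  # 0.012, i.e., about one hundredth of a second
```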
The great novelty of the
theory of Special Relativity proposed by A. Einstein in 1905
is to take as a basic principle that the speed of light
is the same for any two observers
(technically speaking, we should say
inertial observers)
moving with respect to each
other at constant velocity.
This principle does not hold for
the speeds of familiar objects, which change when you are moving
away from or toward them.
Maxwell's equations, written in 1873, which summarized what was
known about electricity and magnetism,
played an important role in the development of special relativity.
These equations imply that light
propagates at a fixed speed which can be determined by making
experiments with charges and currents.
Galileo (1564-1642) realized that if two objects having different masses are dropped at the same time, their speed and position remain identical during the fall.\foot{If the effects due to the resistance of the air are the same for the two objects.} Later, the result of this experiment was attributed to the fact that the inertial mass (which measures how hard it is to set an object in motion) is indeed proportional to the gravitational mass (which measures how strongly an object is attracted by the earth, for instance) and could be eliminated from Newton's equation (F=ma). Around 1915, A. Einstein reinterpreted Galileo's experiment by saying that space is curved due to the presence of matter and that falling objects follow the analogue of straight lines - which are independent of their masses - in this curved space. The new field of physics which was developed from this reinterpretation is called General Relativity.
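The cancellation of the mass alluded to above can be made explicit. Near the surface of the earth, the gravitational force on an object is proportional to its gravitational mass (F = m_g g, where g is the strength of the gravitational pull per unit mass), while Newton's equation reads F = m_i a:

```latex
m_i \, a = m_g \, g
\qquad\Longrightarrow\qquad
a = \frac{m_g}{m_i}\, g = g
\quad \text{since } m_g = m_i ,
```

so the acceleration a is the same for all falling objects, whatever their mass.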
You can get an idea about the effects of curvature by imagining you are
a two-dimensional object living on the
surface of a sphere and having no way to tell whether there is anything
inside or outside this sphere. The ``straight line'' joining two points
is defined as the shortest path between these two points.
It is not too difficult to realize that these straight lines are arcs
of great circles. Great circles are the largest
circles you can draw on a sphere
(like the equator or the meridians). Using this definition of straight line,
you can figure out the new properties of circles and triangles
on a sphere.
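Here is a small numerical illustration of those properties (in Python, our choice; the formula for the circumference of a circle on a sphere of radius R is standard spherical geometry, not taken from the text):

```python
import math

# On a sphere of radius R, a circle whose radius r is measured along the
# surface (i.e., along a "straight line" of the sphere) has circumference
# 2*pi*R*sin(r/R), smaller than the flat-space value 2*pi*r.
def circumference_on_sphere(r, R=1.0):
    return 2 * math.pi * R * math.sin(r / R)

r = 0.5
print(circumference_on_sphere(r))   # about 3.01
print(2 * math.pi * r)              # about 3.14 in a flat plane

# A triangle with two vertices on the equator and one at the north pole
# has right angles at both equator vertices, so its angle sum exceeds
# 180 degrees by the angle at the pole.
pole_angle = 30
angle_sum = 90 + 90 + pole_angle
print(angle_sum)                    # 210 degrees, not 180
```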
Human beings have the understandable tendency to think that the place where they live is very special. However, if you want to describe the motion of the planets it is much simpler to use the sun as the center of your system of coordinates than the earth. This idea, first put forward by Copernicus (1473-1543) and defended by Galileo, was at first very unpopular (and aggressively fought); however, it prevailed after we gained a precise understanding of the laws of gravitation.
Similarly, Einstein's main assumption to derive a theory
of the universe was not geocentric. This assumption, called the
Cosmological Principle, states that the universe is
on average homogeneous (there are no special locations) and
isotropic (there are no special directions). From the measurements
made from the earth (or nearby), it has been established
that homogeneity and isotropy characterize
the distribution of observable objects, provided that one compares large
``cells'' having a size of the order of 10^25 meters, i.e.,
approximately a hundred thousand billion times the distance between the earth
and the sun (which is itself around a million times the distance between
Iowa City and Des Moines).
Under the assumption of homogeneity and isotropy, the equations relating
the curvature of space to the presence of matter become simpler and
predict the expansion of the universe. A simplified illustration
of this phenomenon in two dimensions is the inflation of a balloon.
First notice that if the balloon is perfectly
spherical, there are no special points
or special directions. Imagine then that the balloon is progressively
inflated in such a way that after 1 second the length between any two
points is multiplied by two. Seen from a particular point $A$, two points which
were respectively at 10 and 20 cm from A at time 0 will be respectively
at distances of 20 and 40 cm from A after 1 second. In other words,
the closer point has moved away from A by 10 cm in 1 second,
while the farther one has moved away by 20 cm during the same amount of time.
The observation that farther objects move away faster
was first made by Hubble around 1929. From his measurements, a time scale
of the order of 10^(10) years emerged as characteristic of the expansion
process.
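The balloon arithmetic can be redone for any set of points, and the ratio of recession speed to distance comes out the same for all of them, which is the content of Hubble's observation. A minimal sketch (in Python; the 10 and 20 cm points are the ones from the text, the 40 cm point is an extra one of our own):

```python
# If every length on the balloon doubles in one second, each point recedes
# from A at a speed proportional to its distance.
initial = [10, 20, 40]                    # distances from A in cm at time 0
final = [2 * d for d in initial]          # distances after 1 second
speeds = [f - d for d, f in zip(initial, final)]  # cm per second
for d, v in zip(initial, speeds):
    print(d, v, v / d)                    # speed/distance is 1.0 for all
```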
Newtonian mechanics and Maxwell's equations do not explain why some substances emit only light of selected colors. When it comes to the description of small objects such as atoms, we need a more elaborate formalism called Quantum Mechanics. The order of magnitude of the size of an atom is 10^(-10) meter, or in other words, one tenth of one thousandth of one thousandth of a millimeter. By the order of magnitude of the size of an atom, we mean a typical length that will often appear multiplied by numbers close to 1 (like 0.529 or 3.14) when you do atomic physics.
In quantum mechanics, all the information
concerning the state of a system is contained in an object called
the wavefunction. If two wavefunctions
are added, one obtains a new wavefunction.
This is called the superposition principle.
In quantum mechanics, it is impossible to construct a wavefunction
where position and velocity are precisely defined at the same time.
This intrinsic limitation is called the uncertainty principle.
These concepts were developed in articles published between
1925 and 1930 by physicists like W. Heisenberg, L. de Broglie,
P. Dirac, N. Bohr and E. Schrödinger.
Hopefully, after reading the first chapter of Feynman's book, these
concepts will sound less abstract.
Quantum Electrodynamics (QED) is a theory which combines quantum mechanics
and special relativity to describe the interactions of electrons (and
other charged particles) among themselves and with light.
QED predicts very accurately the
motion of an electron
in the presence of a
magnet. The agreement between theory
and experiment can reach eight significant figures, which
is considered quite remarkable.
In 1965, R. Feynman, J. Schwinger and S. Tomonaga
were awarded the Nobel Prize for
their work on QED, done around 1950.
QED can be seen as a part of a larger theoretical model called the ``standard model'' of electromagnetic, weak and strong interactions. The standard model was developed during the sixties and the seventies and accounts for a large variety of phenomena: for instance, the fact that due to the weak interactions, the muon, a particle similar to the electron but approximately 200 times heavier, decays on average after 2x10^(-6) seconds, and the way the strong forces bind three quarks into a proton whose size is roughly 10^(-15) meter. Some of its crucial predictions (the existence of the Z and W particles) were verified in 1983. However, in 1997 important theoretical and experimental aspects of the standard model remain to be understood.
During the last decades, many particle physicists have attempted to build
unified theories, where the electromagnetic, weak and strong interactions
follow from a single type of force or principle. Gravitational forces
were also involved in more ambitious schemes which would explain
all the known interactions. However,
reputable physicists strongly disagree on the issue of a
``Theory of Everything''.
Using the laws of physics ruling the behavior of a few objects to describe the behavior of a very large number (like 10^20) of similar objects is in general a very difficult enterprise. However, substantial progress regarding the statistical approach in mechanics, or Statistical Mechanics for short, has been made during the second half of the 19th century, e.g., by J. Maxwell (1831-1879), L. Boltzmann (1844-1906) and J. Gibbs (1839-1903). When considering a large number of particles, it is useful to focus the attention on macroscopic features. Thermodynamics provides empirical relations among macroscopic quantities, for instance the temperature, the pressure and the volume of a gas. These relations should in principle satisfy two requirements called the first and the second laws of thermodynamics. The first law expresses the conservation of energy and the second further restricts the way heat can be transformed into mechanical energy.
One very interesting (and very difficult) question in statistical
mechanics is the existence of phase transitions. Everybody is familiar
with ice melting or water boiling, however no completely satisfactory
understanding of these phenomena is at hand. A particularly interesting
phenomenon, which occurs when the pressure and the temperature reach
their critical values, is the disappearance of a clear
distinction between the liquid and gaseous phases.
Analogous phenomena are encountered in the study of other systems and
share quantitative properties, namely the precise
way certain quantities
become large when the critical temperature is reached.
This quantitative similarity is often called universality.
Around 1970,
Ken Wilson made a decisive step toward an understanding of the
behavior of systems with many objects near their critical temperature.
The second law of thermodynamics leads to the introduction of a new quantity
called entropy (introduced by R. Clausius and given a microscopic
interpretation by L. Boltzmann), which is useful to describe
irreversible processes such as the free expansion of a gas.
It is difficult to understand how
irreversible processes can occur in a
system governed by reversible microscopic laws
(i.e., that at a given time, if you could reverse the signs of the velocities,
the system would evolve ``backward'' in time). A proper understanding of
this apparent paradox is obtained by showing that
due to the very large number of microscopic particles involved,
the chance
that a gas returns to its initial state after a free expansion
is ridiculously small.
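To get a feeling for how small this chance is, here is a toy estimate (our own, not from the text): treat each of N molecules as being, independently and with probability 1/2, in the left or the right half of the box. The chance that all N are simultaneously back in the left half is then (1/2)^N.

```python
# Chance that all N independent molecules sit in the left half of the box.
N = 100        # a mere 100 molecules, far fewer than the ~10^20 of a real gas
p = 0.5 ** N
print(p)       # about 7.9e-31, already ridiculously small
```

For a realistic N of order 10^20, the exponent itself becomes astronomically large, which is why the gas is never observed to return to its initial state.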