The Theoretical Minimum

What You Need to Know to Start Doing Physics


By Leonard Susskind

By George Hrabovsky


A master teacher presents the ultimate introduction to classical mechanics for people who are serious about learning physics

"Beautifully clear explanations of famously 'difficult' things." —Wall Street Journal

If you ever regretted not taking physics in college — or simply want to know how to think like a physicist — this is the book for you. In this bestselling introduction to classical mechanics, physicist Leonard Susskind and hacker-scientist George Hrabovsky offer a first course in physics and associated math for the ardent amateur. Challenging, lucid, and concise, The Theoretical Minimum provides a tool kit for amateur scientists to learn physics at their own pace.



I’ve always enjoyed explaining physics. For me it’s much more than teaching: It’s a way of thinking. Even when I’m at my desk doing research, there’s a dialog going on in my head. Figuring out the best way to explain something is almost always the best way to understand it yourself.

About ten years ago someone asked me if I would teach a course for the public. As it happens, the Stanford area has a lot of people who once wanted to study physics, but life got in the way. They had had all kinds of careers but never forgot their one-time infatuation with the laws of the universe. Now, after a career or two, they wanted to get back into it, at least at a casual level.

Unfortunately there was not much opportunity for such folks to take courses. As a rule, Stanford and other universities don’t allow outsiders into classes, and, for most of these grownups, going back to school as a full-time student is not a realistic option. That bothered me. There ought to be a way for people to develop their interest by interacting with active scientists, but there didn’t seem to be one.

That’s when I first found out about Stanford’s Continuing Studies program. This program offers courses for people in the local nonacademic community. So I thought that it might just serve my purposes in finding someone to explain physics to, as well as their purposes, and it might also be fun to teach a course on modern physics. For one academic quarter anyhow.

It was fun. And it was very satisfying in a way that teaching undergraduate and graduate students was sometimes not. These students were there for only one reason: Not to get credit, not to get a degree, and not to be tested, but just to learn and indulge their curiosity. Also, having been “around the block” a few times, they were not at all afraid to ask questions, so the class had a lively vibrancy that academic classes often lack. I decided to do it again. And again.

What became clear after a couple of quarters is that the students were not completely satisfied with the layperson’s courses I was teaching. They wanted more than the Scientific American experience. A lot of them had a bit of background, a bit of physics, a rusty but not dead knowledge of calculus, and some experience at solving technical problems. They were ready to try their hand at learning the real thing—with equations. The result was a sequence of courses intended to bring these students to the forefront of modern physics and cosmology.

Fortunately, someone (not I) had the bright idea to videorecord the classes. They are out on the Internet, and it seems that they are tremendously popular: Stanford is not the only place with people hungry to learn physics. From all over the world I get thousands of e-mail messages. One of the main inquiries is whether I will ever convert the lectures into books. The Theoretical Minimum is the answer.

The term theoretical minimum was not my own invention. It originated with the great Russian physicist Lev Landau. The TM in Russia meant everything a student needed to know to work under Landau himself. Landau was a very demanding man: His theoretical minimum meant just about everything he knew, which of course no one else could possibly know.

I use the term differently. For me, the theoretical minimum means just what you need to know in order to proceed to the next level. It means not fat encyclopedic textbooks that explain everything, but thin books that explain everything important. The books closely follow the Internet courses that you will find on the Web.

Welcome, then, to The Theoretical Minimum—Classical Mechanics, and good luck!

Leonard Susskind

Stanford, California, July 2012

I started to teach myself math and physics when I was eleven. That was forty years ago. A lot of things have happened since then—I am one of those individuals who got sidetracked by life. Still, I have learned a lot of math and physics. Despite the fact that people pay me to do research for them, I never pursued a degree.

For me, this book began with an e-mail. After watching the lectures that form the basis for the book, I wrote an e-mail to Leonard Susskind asking if he wanted to turn the lectures into a book. One thing led to another, and here we are.

We could not fit everything we wanted into this book, or it wouldn’t be The Theoretical Minimum—Classical Mechanics; it would be A-Big-Fat-Mechanics-Book. That is what the Internet is for: Taking up large quantities of bandwidth to display stuff that doesn’t fit elsewhere! You can find extra material at the website. This material will include answers to the problems, demonstrations, and additional material that we couldn’t put in the book.

I hope you enjoy reading this book as much as we enjoyed writing it.

George Hrabovsky

Madison, Wisconsin, July 2012

Lecture 1: The Nature of Classical Physics

Somewhere in Steinbeck country two tired men sit down at the side of the road. Lenny combs his beard with his fingers and says, “Tell me about the laws of physics, George.” George looks down for a moment, then peers at Lenny over the tops of his glasses. “Okay, Lenny, but just the minimum.”

What Is Classical Physics?

The term classical physics refers to physics before the advent of quantum mechanics. Classical physics includes Newton’s equations for the motion of particles, the Maxwell-Faraday theory of electromagnetic fields, and Einstein’s general theory of relativity. But it is more than just specific theories of specific phenomena; it is a set of principles and rules—an underlying logic—that governs all phenomena for which quantum uncertainty is not important. Those general rules are called classical mechanics.

The job of classical mechanics is to predict the future. The great eighteenth-century physicist Pierre-Simon Laplace laid it out in a famous quote:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

In classical physics, if you know everything about a system at some instant of time, and you also know the equations that govern how the system changes, then you can predict the future. That’s what we mean when we say that the classical laws of physics are deterministic. If we can say the same thing, but with the past and future reversed, then the same equations tell you everything about the past. Such a system is called reversible.

Simple Dynamical Systems and the Space of States

A collection of objects—particles, fields, waves, or whatever—is called a system. A system that is either the entire universe or is so isolated from everything else that it behaves as if nothing else exists is a closed system.

Exercise 1: Since the notion is so important to theoretical physics, think about what a closed system is and speculate on whether closed systems can actually exist. What assumptions are implicit in establishing a closed system? What is an open system?

To get an idea of what deterministic and reversible mean, we are going to begin with some extremely simple closed systems. They are much simpler than the things we usually study in physics, but they satisfy rules that are rudimentary versions of the laws of classical mechanics. We begin with an example that is so simple it is trivial. Imagine an abstract object that has only one state. We could think of it as a coin glued to the table—forever showing heads. In physics jargon, the collection of all states occupied by a system is its space of states, or, more simply, its state-space. The state-space is not ordinary space; it’s a mathematical set whose elements label the possible states of the system. Here the state-space consists of a single point—namely Heads (or just H)—because the system has only one state. Predicting the future of this system is extremely simple: Nothing ever happens and the outcome of any observation is always H.

The next simplest system has a state-space consisting of two points; in this case we have one abstract object and two possible states. Imagine a coin that can be either Heads or Tails (H or T). See Figure 1.

One very simple dynamical law is that whatever the state at some instant, the next state is the same. In the case of our example, it has two possible histories: H H H H H H . . . and T T T T T T . . . .

Another dynamical law dictates that whatever the current state, the next state is the opposite. We can make diagrams to illustrate these two laws. Figure 2 illustrates the first law, where the arrow from H goes to H and the arrow from T goes to T. Once again it is easy to predict the future: If you start with H, the system stays H; if you start with T, the system stays T.

Our coin has one degree of freedom, which we can denote by the Greek letter sigma, σ. Sigma has only two possible values: σ = 1 and σ = −1, respectively, for H and T. We also use a symbol to keep track of the time. When we are considering a continuous evolution in time, we can symbolize it with t. Here we have a discrete evolution and will use n. The state at time n is described by the symbol σ(n), which stands for σ at n. The values of n form the sequence of natural numbers beginning with 1.

Let’s write equations of evolution for the two laws. The first law says that no change takes place. In equation form,

σ(n + 1) = σ(n).

The second law says that the state flips at every step. In equation form,

σ(n + 1) = −σ(n).

With more states, the dynamical laws become more varied. Consider a system with six states, labeled 1 through 6. A law under which the system visits every state in turn and then repeats endlessly is called a cycle. For example, if we start with 3 then the history is 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2, . . . . We’ll call this pattern Dynamical Law 1.

Other laws divide the six states into smaller cycles. Under one such law, if you start at 2 the history will be 2, 6, 1, 2, 6, 1, . . . and you will never get to 5. If you start at 5 the history is 5, 3, 4, 5, 3, 4, . . . and you will never get to 6.

It would take a long time to write out all of the possible dynamical laws for a six-state system.
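Discrete laws like these are easy to simulate. The following sketch (the function names are my own, not from the book) encodes Dynamical Law 1 for the six-state system and reproduces the cycle that starts at 3:

```python
def step_law_1(state):
    """Dynamical Law 1: advance to the next state, wrapping 6 back to 1."""
    return state % 6 + 1

def history(step, start, n_steps):
    """Return the list of states visited, beginning at `start`."""
    states = [start]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

# Starting at 3 reproduces the cycle 3, 4, 5, 6, 1, 2, 3, 4, ...
print(history(step_law_1, 3, 7))
```

Any other deterministic law on six states can be encoded the same way by swapping in a different `step` function.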

Exercise 2: Can you think of a general way to classify the laws that are possible for a six-state system?

Rules That Are Not Allowed: The Minus-First Law

According to the rules of classical physics, not all laws are legal. It’s not enough for a dynamical law to be deterministic; it must also be reversible.

The meaning of reversible—in the context of physics—can be described a few different ways. The most concise description is to say that if you reverse all the arrows, the resulting law is still deterministic. Another way is to say the laws are deterministic into the past as well as the future. Recall Laplace’s remark, “for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” Can one conceive of laws that are deterministic into the future, but not into the past? In other words, can we formulate irreversible laws? Indeed we can. Consider Figure 9.

There is no ambiguity about the future. But the past is a different matter. Suppose you are at 2. Where were you just before that? You could have come from 3 or from 1. The diagram just does not tell you. Even worse, in terms of reversibility, there is no state that leads to 1; state 1 has no past. The law of Figure 9 is irreversible. It illustrates just the kind of situation that is prohibited by the principles of classical physics.

Notice that if you reverse the arrows in Figure 9 to give Figure 10, the corresponding law fails to tell you where to go in the future.

This requirement, sometimes called the conservation of information, is simply the rule that every state has one arrow in and one arrow out. It ensures that you never lose track of where you started.

The conservation of information is not a conventional conservation law. We will return to conservation laws after a digression into systems with infinitely many states.
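The one-arrow-in, one-arrow-out rule can be checked mechanically: a deterministic law assigns each state one successor (one arrow out), and it is reversible exactly when every state also has one arrow in, i.e., when the successor map is a bijection on the state set. A minimal sketch (the dictionary encoding and function name are my own, not from the book):

```python
def is_reversible(law):
    """A law is given as a dict mapping each state to its successor.
    It is reversible when every state appears exactly once as a successor,
    i.e., when the map has one arrow in and one arrow out per state."""
    return sorted(law.values()) == sorted(law.keys())

# The law of Figure 9: both 1 and 3 lead to 2, and no state leads to 1.
figure_9 = {1: 2, 2: 3, 3: 2}
print(is_reversible(figure_9))  # False: state 1 has no past

# A three-state cycle: every state has one arrow in and one arrow out.
cycle = {1: 2, 2: 3, 3: 1}
print(is_reversible(cycle))  # True
```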

Dynamical Systems with an Infinite Number of States

So far, all our examples have had state-spaces with only a finite number of states. There is no reason why you can’t have a dynamical system with an infinite number of states. For example, imagine a line with an infinite number of discrete points along it—like a train track with an infinite sequence of stations in both directions. Suppose that a marker of some sort can jump from one point to another according to some rule. To describe such a system, we can label the points along the line by integers the same way we labeled the discrete instants of time above. Because we have already used the notation n for the discrete time steps, let’s use an uppercase N for points on the track. A history of the marker would consist of a function N(n), telling you the place along the track N at every time n. A short portion of this state-space is shown in Figure 11.

Figure 12: A dynamical rule for an infinite system.

This is allowable because each state has one arrow in and one arrow out. We can easily express this rule in the form of an equation:

N(n + 1) = N(n) + 1.


Figure 13: Breaking an infinite configuration space into finite and infinite cycles.

If we start with a number, then we just keep proceeding through the upper line, as in Figure 12. On the other hand, if we start at A or B, then we cycle between them. Thus we can have mixtures where we cycle around in some states, while in others we move off to infinity.

Cycles and Conservation Laws

When the state-space is separated into several cycles, the system remains in whatever cycle it started in. Each cycle has its own dynamical rule, but they are all part of the same state-space because they describe the same dynamical system. Let’s consider a system with three cycles. Each of states 1 and 2 belongs to its own cycle, while 3 and 4 belong to the third (see Figure 14).

When the state-space divides into disconnected cycles, the system carries a memory of where it began: something is kept intact for all time. To make the conservation law quantitative, we give each cycle a numerical value called Q. In the example in Figure 15 the three cycles are labeled Q = +1, Q = −1, and Q = 0. Whatever the value of Q, it remains the same for all time because the dynamical law does not allow jumping from one cycle to another. Simply stated, Q is conserved.
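The conservation of Q can be verified directly for a small system. In this sketch I encode the cycle structure described in the text (states 1 and 2 each form their own cycle, while 3 and 4 form a third) and assign each cycle a value of Q; the state labels and Q values are my illustration of Figures 14 and 15, not taken from them:

```python
# States 1 and 2 are one-state cycles; states 3 and 4 swap with each other.
law = {1: 1, 2: 2, 3: 4, 4: 3}

# Each cycle carries a value of Q; states in the same cycle share a value.
Q = {1: +1, 2: -1, 3: 0, 4: 0}

# Q never changes, because the law cannot jump between cycles.
for state in law:
    assert Q[law[state]] == Q[state]
print("Q is conserved under this law")
```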

Imagine a die with a million faces, each face labeled with a symbol similar in appearance to the usual single-digit integers, but with enough slight differences so that there are a million distinguishable labels. If one knew the dynamical law, and if one were able to recognize the initial label, one could predict the future history of the die. However, if Laplace’s vast intellect suffered from a slight vision impairment, so that he was unable to distinguish among similar labels, his predicting ability would be limited.

In the real world, it’s even worse; the space of states is not only huge in its number of points—it is continuously infinite. In other words, it is labeled by a collection of real numbers such as the coordinates of the particles. Real numbers are so dense that every one of them is arbitrarily close in value to an infinite number of neighbors. The ability to distinguish the neighboring values of these numbers is the “resolving power” of any experiment, and for any real observer it is limited. In principle we cannot know the initial conditions with infinite precision. In most cases the tiniest differences in the initial conditions—the starting state—lead to large eventual differences in outcomes. This phenomenon is called chaos. If a system is chaotic (most are), then however good the resolving power may be, the time over which the system is predictable is limited. Perfect predictability is not achievable, simply because we are limited in our resolving power.
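This sensitivity is easy to see numerically. The logistic map is not an example from this book, but it is a standard chaotic system that makes the point: two starting states agreeing to ten decimal places soon disagree completely.

```python
def logistic(x):
    """One step of the logistic map at r = 4, a standard chaotic system."""
    return 4.0 * x * (1.0 - x)

# Two initial conditions that agree to ten decimal places.
x, y = 0.3, 0.3 + 1e-10
diffs = []
for n in range(60):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

# The separation roughly doubles each step until it saturates at order one.
print(max(diffs))
```

However small the initial uncertainty, the horizon of predictability grows only logarithmically as the resolving power improves.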

Interlude 1: Spaces, Trigonometry, and Vectors

“Where are we, George?”

George pulled out his map and spread it out in front of Lenny. “We’re right here, Lenny: coordinates 36.60709 N, 121.618652 W.”

“Huh? What’s a coordinate George?”


To describe points quantitatively, we need to have a coordinate system. Constructing a coordinate system begins with choosing a point of space to be the origin. Sometimes the origin is chosen to make the equations especially simple. For example, the theory of the solar system would look more complicated if we put the origin anywhere but at the Sun. Strictly speaking, the location of the origin is arbitrary—put it anywhere—but once it is chosen, stick with the choice.

The next step is to choose three perpendicular axes. Again, their location is somewhat arbitrary as long as they are perpendicular. The axes are usually called x, y, and z but we can also call them x1, x2, and x3. Such a system of axes is called a Cartesian coordinate system, as in Figure 1.

Figure 1. A three-dimensional Cartesian coordinate system.

We want to describe a certain point in space; call it P. It can be located by giving the x, y, z coordinates of the point. In other words, we identify the point P with the ordered triple of numbers (x, y, z) (see Figure 2).

Figure 3: A plane defined by setting x = 0, and the distance to P along the x axis.

When we study motion, we also need to keep track of time. Again we start with an origin—that is, the zero of time. We could pick the origin to be the Big Bang, or the Birth of Jesus, or just the start of an experiment. But once we pick it, we don’t change it.

Next we need to fix a direction of time. The usual convention is that positive times are to the future of the origin and negative times are to the past. We could do it the other way, but we won’t.

Finally, we need units for time. Seconds are the physicist’s customary units, but hours, nanoseconds, or years are also possible. Once having picked the units and the origin, we can label any time by a number t.

There are two implicit assumptions about time in classical mechanics. The first is that time runs uniformly—an interval of 1 second has exactly the same meaning at one time as at another. For example, it took the same number of seconds for a weight to fall from the Tower of Pisa in Galileo’s time as it takes in our time. One second meant the same thing then as it does now.

The other assumption is that times can be compared at different locations. This means that clocks located in different places can be synchronized. Given these assumptions, the four coordinates—x, y, z, t—define a reference frame. Any event in the reference frame must be assigned a value for each of the coordinates.

Given the function f(t) = t2, we can plot the points on a coordinate system. We will use one axis for time, t, and another for the function, f(t) (see Figure 4).

In this way we can visualize functions.

Exercise 1: Using a graphing calculator or a program like Mathematica, plot each of the following functions. See the next section if you are unfamiliar with the trigonometric functions.

Figure 6: The radian as the angle subtended by an arc equal to the radius of the circle.

We can graph these functions to see how they vary (see Figures 8 through 10).

Figure 11: A right triangle drawn in a circle.

Here the line connecting the center of the circle to any point along its circumference forms the hypotenuse of a right triangle, and the horizontal and vertical components of the point are the base and altitude of that triangle. The position of a point can be specified by two coordinates, x and y, where

x = r cos θ,
y = r sin θ.

From the definitions of the sine and cosine, the two functions satisfy

sin² θ + cos² θ = 1.     (1)

(Notice the notation used here: sin² θ = sin θ sin θ.) This equation is the Pythagorean theorem in disguise. If we choose the radius of the circle in Figure 11 to be 1, then the sides a and b are the sine and cosine of θ, and the hypotenuse is 1. Equation (1) is the familiar relation among the three sides of a right triangle: a² + b² = c².
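The identity can be spot-checked numerically at any angle. A quick sketch using Python's standard math library:

```python
import math

# sin^2(theta) + cos^2(theta) should equal 1 for every angle theta.
for theta in [0.0, 0.5, 1.0, math.pi / 3, 2.0]:
    total = math.sin(theta) ** 2 + math.cos(theta) ** 2
    assert abs(total - 1.0) < 1e-12
print("sin² θ + cos² θ = 1 holds at the sampled angles")
```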


Vector notation is another mathematical subject that we assume you have seen before, but—just to level the playing field—let’s review vector methods in ordinary three-dimensional space.

A vector can be thought of as an object that has both a length (or magnitude) and a direction in space. An example is displacement. If an object is moved from some particular starting location, it is not enough to know how far it is moved in order to know where it winds up. One also has to know the direction of the displacement. Displacement is the simplest example of a vector quantity. Graphically, a vector is depicted as an arrow with a length and direction, as shown in Figure 12.

Vectors can be added: the sum of two vectors is formed by adding their corresponding components.

Exercise 3: Show that the magnitude of a vector satisfies |A|² = A · A.
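The relation in Exercise 3 can be checked numerically. In this sketch (my own helper functions, not from the book) the magnitude of a 3-vector is computed from its components, and the dot product of a vector with itself reproduces the squared magnitude:

```python
import math

def magnitude(v):
    """Length of a 3-vector, from the Pythagorean theorem in three dimensions."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def dot(a, b):
    """Dot product: the sum of products of corresponding components."""
    return sum(x * y for x, y in zip(a, b))

A = (3.0, 4.0, 12.0)
print(magnitude(A))   # 13.0
print(dot(A, A))      # 169.0, which is |A|^2
```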

Lecture 2: Motion

Lenny complained, “George, this jumpy stroboscopic stuff makes me nervous. Is time really so bumpy? I wish things would go a little more smoothly.”

George thought for a moment, wiping the blackboard. “Okay, Lenny, today let’s study systems that do change smoothly.”

Mathematical Interlude: Differential Calculus

In this book we will mostly be dealing with how various quantities change with time. Most of classical mechanics deals with things that change smoothly—continuously is the mathematical term—as time changes continuously. Dynamical laws that update a state will have to involve such continuous changes of time, unlike the stroboscopic changes of the first lecture. Thus we will be interested in functions of the independent variable t.

To cope, mathematically, with continuous changes, we use the mathematics of calculus. Calculus is about limits, so let’s get that idea in place. Suppose we have a sequence of numbers, l1, l2, l3, . . ., that get closer and closer to some value L. Here is an example: 0.9, 0.99, 0.999, 0.9999, . . . . The limit of this sequence is 1. None of the entries is equal to 1, but they get closer and closer to that value. To indicate this we write

lim (n → ∞) ln = L.

We can apply the same idea to functions. Suppose we have a function, f(t), and we want to describe how it varies as t gets closer and closer to some value, say a. If f(t) gets arbitrarily close to L as t tends to a, then we say that the limit of f(t) as t approaches a is the number L. Symbolically,

lim (t → a) f(t) = L.
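The sequence 0.9, 0.99, 0.999, . . . makes the idea concrete: each term gets strictly closer to 1, though none ever equals it. A short numerical sketch:

```python
# The sequence 0.9, 0.99, 0.999, ... approaches the limit L = 1.
terms = [1.0 - 10.0 ** (-k) for k in range(1, 8)]
gaps = [abs(1.0 - t) for t in terms]

# Each term is closer to 1 than the one before, but none equals 1.
assert all(later < earlier for earlier, later in zip(gaps, gaps[1:]))
assert all(t != 1.0 for t in terms)
print(terms[-1])
```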

Let’s calculate a few derivatives. Begin with functions defined by powers of t. In particular, let’s illustrate the method by calculating the derivative of f(t) = t². We apply Eq. (1) and begin by defining f(t + Δt):

f(t + Δt) = (t + Δt)² = t² + 2t Δt + Δt².

Subtracting f(t) gives the change in the function,

Δf = f(t + Δt) − f(t) = 2t Δt + Δt².

Now divide by Δt,

Δf/Δt = 2t + Δt.

In the limit as Δt goes to zero, the second term vanishes, and we are left with

df/dt = 2t.
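The limiting process can be watched numerically. The difference quotient for f(t) = t² works out to exactly 2t + Δt, so shrinking Δt drives it toward the derivative 2t (a sketch, with names of my own choosing):

```python
def f(t):
    return t ** 2

def difference_quotient(f, t, dt):
    """(f(t + dt) - f(t)) / dt; for f(t) = t^2 this equals 2t + dt."""
    return (f(t + dt) - f(t)) / dt

t = 3.0
for dt in [0.1, 0.01, 0.001]:
    print(dt, difference_quotient(f, t, dt))
# As dt shrinks, the quotient approaches the derivative 2t = 6.
```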

Exercise 1: Calculate the derivatives of each of these functions.

Particle Motion

The concept of a point particle is an idealization. No object is so small that it is a point—not even an electron. But in many situations we can ignore the extended structure of objects and treat them as points. For example, the planet Earth is obviously not a point, but in calculating its orbit around the Sun, we can ignore the size of Earth to a high degree of accuracy.

The position of a particle is specified by giving a value for each of the three spatial coordinates, and the motion of the particle is defined by its position at every time. Mathematically, we can specify a position by giving the three spatial coordinates as functions of t: x(t), y(t), z(t).

The position can also be thought of as a vector r(t) whose components are x(t), y(t), z(t).

As a simple example, consider a particle falling from rest under gravity. Its height is z(t) = z(0) − (1/2)gt², and its vertical velocity is −gt, where g is the acceleration due to gravity. As time progresses, the −gt grows in magnitude: the particle falls faster and faster.
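Velocity as the time derivative of position can be checked numerically. This is a sketch of my own (the starting height z0 and the value g = 9.8 m/s² are assumptions for illustration): differentiating the height z(t) = z0 − (1/2)gt² of a dropped particle recovers the velocity −gt.

```python
g = 9.8     # acceleration due to gravity, m/s^2 (assumed value)
z0 = 100.0  # assumed starting height, m

def z(t):
    """Height of a particle dropped from rest at height z0."""
    return z0 - 0.5 * g * t ** 2

def velocity(t, dt=1e-6):
    """Numerical derivative of z, using a symmetric difference quotient."""
    return (z(t + dt) - z(t - dt)) / (2 * dt)

t = 2.0
print(velocity(t))   # close to -g*t = -19.6 m/s
```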


  • "Beautifully clear explanations of famously 'difficult things.'"—John Gribbin, Wall Street Journal
  • "What a wonderful and unique resource. For anyone who is determined to learn physics for real, looking beyond conventional popularizations, this is the ideal place to start."—Sean Carroll, New York Times-bestselling author of Something Deeply Hidden
  • "A spectacular effort to make the real stuff of theoretical physics accessible to amateurs."—Tom Siegfried, Science News
  • "Very readable. Abstract concepts are well explained.... [The Theoretical Minimum] does provide a clear description of advanced classical physics concepts, and gives readers who want a challenge the opportunity to exercise their brain in new ways."—Lowry Kirkby, Physics World
  • "Readers ready to embrace their inner applied mathematician will enjoy this brisk, bare-bones introduction to classical mechanics."—Publishers Weekly

On Sale: April 22, 2014
Page Count: 256 pages
Publisher: Basic Books

Leonard Susskind

About the Author

Leonard Susskind is the Felix Bloch Professor in Theoretical Physics at Stanford University. He is the author of Quantum Mechanics (with Art Friedman) and The Theoretical Minimum (with George Hrabovsky), among other books. He lives in Palo Alto, California.


George Hrabovsky

About the Author

George Hrabovsky is the president of Madison Area Science and Technology (MAST), a nonprofit organization dedicated to scientific and technological research and education. He lives in Madison, Wisconsin.
