A dozen ways I strive to develop more effective scripts
This is an expandable sampler of some ways that well-prepared
scripts can be more engaging, memorable, and effective.
The Origin and Evolution of Earth (L4 "Ur Minerals, First Crystals in
the Cosmos")
The first mineral in the cosmos was diamond. It formed in the carbon-rich
envelopes of exploding stars, where temperatures were still high but
cooling, allowing tiny crystals of carbon to form. Soon
other crystals followed. These were the first minerals. Together, these
dozen or so minerals began to seed the universe with their own dust,
becoming the initial raw materials for the formation of any Earth-like
planet. There is beauty in the idea that the first mineral in our universe
was diamond. But it is astonishing that until just a few years ago, in
November of 2008 to be exact, no one seems to have asked when and where
the first mineral formed. What was the first mineral in the universe? That
basic question had not been asked.
Think about this. Scientists have long asked when the universe itself
began, or when Earth was formed, or when life emerged from the
primordial soup. We're constantly improving our estimates of when the
first plants, or land animals, or primates evolved. Indeed, it seems to
be the most basic question in almost any field of study, to ask how
things got started. But in spite of an extensive search, I can't find a
single mention of the oldest mineral anywhere in the vast literature of
geology. In this lecture we're going to look closely at that question:
the question of the beginning of cosmic mineralogy, which in a very real
sense is the beginning of all the rocky planets like Earth.
Charlemagne: Father of Europe (L1 "The Making of Emperor
Charlemagne")
It was Christmas Day 800. Charles, as he was then called, was spending
Christmas in Rome. It was still the largest and greatest of all the
cities in Christian Europe, with a resident population of perhaps 20,000
people. Admittedly, some eight hundred years earlier, Rome had housed
one million people. The Rome that Charlemagne visited consisted mostly
of empty, abandoned buildings. But even in its diminished state, Rome
was still larger than any town north of the Alps. In Charlemagne’s own
native kingdom, the Kingdom of the Franks, even the largest towns such
as Paris had only perhaps 5,000 residents.
Rome’s reputation rested on much more than its former size. Its
reputation rested, above all else, on its many old churches. Charlemagne
attended Christmas Mass in one of those churches: the basilica of Saint
Peter, an early version of the building located in what is today Vatican
City. On this day, Charlemagne had shown up for Christmas Mass dressed
in Roman clothing, which he had worn perhaps once before in his life: he
sported a long Roman tunic, a Roman cloak, and pointed Roman shoes.
Given how he was decked out, wearing spiffy Roman clothes far removed
from the usual Frankish wardrobe that he preferred, it was almost as if
Charlemagne was expecting something unusual to happen.
Presiding over Christmas Mass was Pope Leo III. As pope, on Christmas
Eve and Christmas Day, he had to officiate at four different masses held
at four different locations: the mass at St. Peter’s was the fourth of
four. The grueling cycle of masses that he had to lead may well have
left him looking the worse for wear.
Certainly his five years as pope had taken a toll on him. In 799, just
one year earlier, assailants had mutilated Pope Leo’s tongue and eyes as
part of an effort to depose him. Leo had fled to Charlemagne for
support, Charlemagne had helped Leo retain the papacy, and now, just a
year later, both men were together again, in Rome.
Christmas Mass at St. Peter’s in 800 did indeed take an unusual turn.
During the Mass, Pope Leo III placed a crown on Charlemagne’s head.
Inhabitants of Rome attending mass acclaimed Charlemagne as emperor: “To
the august Charles, crowned by God, the great and peaceful emperor of
the Romans, life and victory!” That they shouted these acclamations in
unison suggests rehearsal and preparation. The pope then performed a
gesture of submission to Charlemagne—most likely, the pope prostrated
himself before the new emperor, a gesture called proskynesis.
The newly crowned emperor, in turn, expressed his continuing
veneration for St. Peter’s basilica and for other Roman churches by
giving them gifts: valuable liturgical items such as crucifixes and
chalices, made from silver and gold, and adorned with jewels.
When Leo III crowned Charlemagne as emperor in 800, there had been no
emperor in western Europe since 476; back in that year, a barbarian
general named Odoacer had deposed the last western Roman emperor.
Between 476 and 800, there had been kings and kingdoms in western
Europe, and plenty of them.
Italy had been home to one of those kingdoms, the Kingdom of the
Eastern Goths, or Ostrogoths. The surviving eastern half of the Roman
Empire, which historians call the Byzantine Empire, toppled the
Ostrogothic Kingdom in the mid-6th century and reimposed imperial rule.
But even before the end of the 6th century, a new and independent
kingdom emerged in northern and central Italy, the Kingdom of the
Lombards. The Iberian peninsula had been home to several kingdoms, chief
among them the Kingdom of the Western Goths, the Visigoths. It had
succumbed to Muslim conquerors two generations before Charlemagne’s
birth. In what had once been Roman Gaul, there had been various
Frankish-ruled kingdoms, centered on regions such as Burgundy,
Aquitaine, Neustria, and Austrasia. Charles’s grandfather had held high
office in Austrasia, and wielded considerable influence in Neustria as
well. Charles’s father, like Charles himself, had ruled these Frankish
kingdoms as their king.
In what had once been Roman Britain, there were even more kingdoms,
usually seven in number, modestly sized and ruled by Anglo-Saxons.
Europe had not always been a continent of disunited kingdoms. Under the
Roman Empire, a single Roman emperor had ruled all of these lands.
Empire and the office of emperor stood, above all else, for unity and
unified rule. Charlemagne’s imperial coronation in 800 raised the
possibility that Europe would no longer consist of a multiplicity of
kingdoms.
Perhaps Europe’s future would be that of a single empire, like
contemporary China under the Tang dynasty, about which Charlemagne knew
next to nothing. Or like the Byzantine Empire, about which Charlemagne
knew a great deal more, and with which he had substantial dealings
throughout his life. The Byzantine Empire was Greek-ruled, and its
capital was at Constantinople. That city housed perhaps as many as two
hundred thousand residents, which would have made it ten to twenty times
larger than any town or city over which Charlemagne ever ruled.
Charlemagne’s imperial revival was audacious. And it caused political
and conceptual problems.
By reviving the imperial title, Charlemagne had raised awkward
questions about the principle of imperial unity. The entity that we call
the Byzantine Empire still called itself the Roman Empire, and its
emperor still called himself the Roman Emperor. And they were right to
do so. The Byzantine Empire was a direct continuation of the Roman
Empire; when the last western emperor had been deposed in 476, the
eastern emperor had remained in office.
As a result of Charlemagne’s imperial coronation, now there were two
emperors claiming to rule over the Romans. The Byzantines had liked it
better when there was just one such emperor. Even worse—far worse—
Charlemagne was a Frank. The phenomenon of co-emperors had a long
history within the Roman world; in such cases, the pretense of imperial
unity could be preserved, provided that all the emperors were Romans.
Charlemagne’s Frankishness left little or no room for such a pretense.
Moreover, Emperor Charlemagne did not rule over all the places that had
once been part of the western half of the Roman Empire. He never ruled
over the entire Iberian peninsula, or any part of the British Isles.
Still, the empire over which he did rule was larger, by far, than any
European kingdom that had existed since the deposing of the last western
Roman emperor in 476. During his reign as king and then emperor,
Charlemagne doubled the size of the territories over which he ruled.
Charlemagne’s Empire was clearly in the ascendant, which was more than
could be said for its chief rivals. The Byzantine Empire had lost most
of its territory to the Islamic Conquests of the seventh and early
eighth centuries; it was still early in the process of an uncertain
recovery.
The house of Islam, after an explosive century of expansion, had
itself started to fragment in the 8th century, a process that would
eventually lead to the emergence of multiple, rival caliphates. By
contrast, Charlemagne’s Frankish-ruled empire seemed well-positioned to
continue expanding, bringing about an even broader political and
cultural unification of Europe.
Fast-forward 1,150 years. In 1950, the German city of Aachen,
located on the border with Belgium and the Netherlands, created an
annual prize to honor those who worked to foster West European
understanding. The name of the award is the Charlemagne Prize.
In 1990, as the Warsaw Pact and then the Soviet Union were
disintegrating, Aachen broadened the criteria for winning. Since then,
the prize has honored those who worked for the “overall unification of
the peoples of Europe”— not just Western Europe anymore, but all Europe.
Among those who have been awarded the Charlemagne Prize are Winston
Churchill and Emmanuel Macron, Henry Kissinger and Bill Clinton, Pope
Francis and the entire population of Luxembourg—a rather eclectic list.
The city of Aachen and Charlemagne had close historical ties. Aachen
was Charlemagne’s favorite residence late in life; he built his most
famous palace there.
Aachen’s post-World War II invocation of Charlemagne was not unique to
that city. Today, several departments of the European Commission (which
constitutes the executive branch of the European Union) are housed in
the European Commission Charlemagne Building, built in Brussels in 1967.
Two years after moving into their Charlemagne building, the European
Commission itself won the Charlemagne Prize.
And those bent upon European unification of a quite different sort
than that envisioned by the European Union have sometimes embraced
Charlemagne, too, as a precursor, an inspiration, and a symbol.
In September 1944, Germany’s ruling Nazi party organized the Waffen
Grenadier Brigade of the SS Charlemagne. The SS Charlemagne consisted
primarily of French volunteers; after the war, these volunteers stated
that they had served in the SS Charlemagne because they loved and wanted
to defend Europe. Admittedly, these retrospective justifications might
have been intended to obscure other motives. Nonetheless, the fact that
veterans of the SS Charlemagne offered the “love of Europe” as a
plausible explanation for their service is evidence of how, more than
one thousand years after Charlemagne’s death, people across the
political spectrum linked Charlemagne and Europe.
That link extends all the way back to Charlemagne’s own lifetime. At
some point between 800 and 803, an anonymous poet attached to
Charlemagne’s court wrote an epic poem, only part of which survives
today, that modern scholars have dubbed “The Paderborn Epic.” The poet
bestowed on Charlemagne several appellations intended to flatter him.
One was “Lighthouse of Europe.” That one never caught on. The poet’s
“Apex of Europe” did not catch on either.
“Father of Europe,” on the other hand—pater Europae—did catch on.
About thirty years after Charlemagne’s death, a chronicler named Nithard
likened Charlemagne to a good father who had made beneficial bequests to
Europe: “Charles of happy memory, and deservedly known as the Great,
called emperor by all nations and dying at a ripe old age, left Europe
filled with every good thing.”
Note how Nithard’s testimony reveals that, already by the 840s,
Charles was known as Charles the Great: Karolus magnus in Latin.
Centuries later, as early German and early French came into being, that
became Karl der Grosse in German and Charles le magne in French, whence
the English Charlemagne.
But even “father of Europe” is understated compared to some of the
accolades that contemporaries heaped on Charlemagne. The anonymous
author of “The Paderborn Epic” describes the recently crowned emperor
Charlemagne thus:
HE IS POWERFUL, WISE, KNOWING, PRUDENT, BRILLIANT, APPROACHABLE,
LEARNED, GOOD, MIGHTY, VIRTUOUS, GENTLE, DISTINGUISHED, JUST, PIOUS, A
FAMOUS WARRIOR, KING, RULER, VENERABLE SUMMIT, AUGUST, BOUNTIFUL,
DISTINGUISHED ARBITER, JUDGE, SYMPATHETIC TO THE NEEDY, PEACEMAKING,
GENEROUS, CLEVER, CHEERFUL, AND HANDSOME.
Not everyone has viewed Charlemagne in such a positive light, not
during his lifetime, and not since then. Ten years after Charlemagne’s
death in 814, a monk named Wetti, who was himself dying, experienced a
religious vision. He related what he had seen to his fellow monks, one
of whom composed a prose work called the “Vision of Wetti,” in 824.
Another monk expanded on this work in verse form three years later.
According to these accounts, Wetti had seen individuals undergoing
purgatorial punishment for the sins that they had committed while alive.
Among those whom Wetti saw was someone whom he clearly recognized, a
former ruler over Italy and the Roman people. An angel told Wetti that
God had denied this ruler immediate entrance into heaven because the
ruler had “defiled himself with vile lechery.” As punishment, a wild
animal was lacerating and tearing off the ruler’s genitals. Wetti does
not name the ruler, but the verse version of 827 takes the form of an
acrostic poem. The first letters of each line, when put together, spell
out a name and a title: Carolus imperator, Charles the emperor, which is
to say, Charlemagne.
In sum, many have lauded Charlemagne; others have regarded him as
having much to answer for. And this debate has been going on for over a
millennium. What was it about Charlemagne that has caused such
fascination with him? What did he do to generate so much controversy?
Why have opinions about Charlemagne diverged so much?
An opening hook makes a fetching start, but what comes next is the main
experience.
A Field Guide to the Planets (L3 "Venus, the Veiled Greenhouse
Planet")
Humans have been to the Moon. Where should we go next? Although the
answer is usually assumed to be Mars, let's consider travel to the
planet Venus. Venus is the brightest object in our sky, after the Sun
and Moon, inspiring names such as Morning Star and Evening Star. The
modern name Venus comes from the Roman goddess of love and beauty. Venus
is a planet quite similar to Earth. Some even call Venus Earth's twin
planet. Venus's diameter and average density are 95% those of Earth.
Venus's orbit is also the closest to Earth's orbit, at 72% of Earth's distance from the Sun.
That means travel time from Earth to Venus could be less than travel
time from Earth to Mars.
And there are other Earthly features, too. Suppose we travel to Venus
with equipment that can open to form a giant airship, like the Goodyear
Blimp. NASA has simulated how the parachute for such a craft might open
high in the Venus atmosphere, how a giant balloon might be extended, how
it could be inflated and become a floating airship high in the
atmosphere.
At around 50-65 kilometers above the surface, we find that Venus offers
some conditions surprisingly similar to the surface of Earth. The
temperature, atmospheric pressure, and even shielding from the Sun's
radiation are comparable to what we take for granted at Earth's surface.
Compared to Mars or open space, the atmosphere of Venus would have less
radiation from the Sun, which would make long-term Venus missions much
less hazardous for astronauts and equipment. In fact, if you were going
to pick a single destination anywhere in the solar system that's most
similar to Earth, floating high in the atmosphere of Venus might be just
the ticket.
Of course, there are challenges. For one thing, once we enter the
atmosphere of Venus, we can't see anything. There is a shroud of highly
reflective cloud and haze blanketing the planet. The veil hiding Venus
allowed older science fiction writers some free rein in imagining
Venus's surface. Could it be a jungle? A desert? How about an ocean
world? All these ideas were motivated by the fact that Venus is closer
to the Sun than Earth is. Temperatures were expected to be hotter.
That's true. But we now know that the atmosphere of Venus makes it much
hotter than any jungle or desert, in fact, twice as hot as the highest
setting of the oven in your kitchen.
Venus's atmosphere is also much denser, and under higher pressure, than
Earth's. Remember that the pressure 50 kilometers above Venus's surface
is similar to the pressure at Earth's surface. That means if we go
further down in Venus's atmosphere, the pressure and density increase.
At Venus's surface, the pressure is 92 bars. That's 92 times Earth's
atmospheric pressure at sea level. It's about the same pressure you
would experience if you were under a kilometer of water in Earth's
oceans.
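That ocean comparison is easy to verify with the hydrostatic relation P = ρgh. Here is a quick Python check of my own, assuming standard values for seawater density and Earth's gravity:

```python
# How deep underwater on Earth gives ~92 bar, Venus's surface pressure?
# Hydrostatic pressure: P = rho * g * h  ->  h = P / (rho * g).
P = 92 * 1.013e5       # 92 bar, in pascals
rho_seawater = 1025.0  # kg/m^3 (standard approximation)
g = 9.81               # m/s^2

depth = P / (rho_seawater * g)
print(f"{depth:.0f} m")  # ~930 m: about a kilometer of water
```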
And don't bother bringing champagne to celebrate once you reach the
surface. Champagne is bottled on Earth at a pressure of only 2 bars. If
the bottle isn't crushed by the incredible pressure on Venus, opening
the bottle isn't going to allow carbon dioxide to escape and become
bubbly. Instead of depressurizing, the contents of the bottle would
pressurize much more.
Now that we're at the surface, let's compare the atmospheres of Venus
and Earth from the ground up. On Earth, the bottom layer of the
atmosphere, called the troposphere, extends from the surface up to an
average of about 10 kilometers. This is the layer of the atmosphere
where most of the weather happens. Three-quarters or more of all
atmospheric mass is here. The defining characteristic of the troposphere
is that this is where temperature decreases with height due to convection.
Venus has a troposphere, but it extends from the surface up to about 65
kilometers, so it's, on average, 6½ times taller. And again, you have
to be at least 50 kilometers high in this troposphere to experience the
pressure we have at Earth's surface.
But while Venus has this extremely tall troposphere layer, the other
layers are smaller than those on Earth, and the total height of the
Venus atmosphere is actually less than Earth's. Venus transitions directly
from troposphere to a relatively calm mesosphere, which means middle
atmosphere. Outermost is an exosphere, similar to Mercury, where
particles no longer behave like a gas because the density is too low for
collisions to happen. For Venus, the exosphere starts at about 220 to
350 kilometers altitude. In contrast, Earth's exosphere starts at about
600 kilometers altitude. So even though Venus's troposphere is taller,
and very dense, the total atmosphere is only 1/2 the height of Earth's
atmosphere.
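To keep those altitudes straight, here is a small Python tabulation of the numbers quoted in this lecture; for the base of Venus's exosphere I have taken roughly 300 km, the middle of the quoted 220 to 350 km range:

```python
# Layer boundaries quoted in the lecture, in kilometers of altitude.
atmosphere = {
    "Earth": {"troposphere_top": 10, "exosphere_base": 600},
    "Venus": {"troposphere_top": 65, "exosphere_base": 300},  # quoted range: 220-350 km
}

tropo_ratio = atmosphere["Venus"]["troposphere_top"] / atmosphere["Earth"]["troposphere_top"]
total_ratio = atmosphere["Venus"]["exosphere_base"] / atmosphere["Earth"]["exosphere_base"]
print(f"Venus's troposphere is {tropo_ratio:.1f}x taller than Earth's")  # 6.5x
print(f"Venus's atmosphere tops out at {total_ratio:.0%} of Earth's")    # 50%
```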
What's in the air is also very different. Venus's atmosphere is
overwhelmingly carbon dioxide. Earth's atmosphere is about 3/4 nitrogen. Why
are their compositions so different? It turns out that Earth, as a
planet, still has about the same amount of carbon dioxide as Venus; it's
just that Earth removes it from the atmosphere. Earth buries carbon
dioxide in carbonate rocks. And plants convert it to oxygen in the air.
Burial of carbon dioxide in rocks happens mostly in our oceans. These
carbonate rocks are then recycled back into Earth through plate
tectonics. Essentially, Earth's carbon cycle regulates the amount of
carbon dioxide in the atmosphere. Any oceans Venus may have had early in
its history boiled away when the temperatures increased due to a runaway
greenhouse effect. So, Venus didn't have oceans to soak up carbon
dioxide from the atmosphere. Venus also doesn't have plate tectonics to
remove carbon from the atmosphere by sending carbonate rocks deep into
the planet. And of course, Venus never had plants that trap even more
carbon dioxide from the air and replace it with oxygen.
The result is that there is nowhere on Venus for the carbon dioxide to go,
except the atmosphere. Interestingly, if you were to take all the carbon
locked away in all the rocks and plants on Earth and move it to the
atmosphere in the form of carbon dioxide gas, Earth would have
atmospheric pressure and composition quite similar to Venus.
Understanding Gravity (L2 "Free Fall and Inertia")
Last time we began our exploration of gravity with Isaac Newton and the
famous story of the apple. This time let's start with a more extreme
example of free fall. In 2012 an Austrian adventurer named Felix
Baumgartner did something extraordinary: he rode a balloon to an altitude
of 39 kilometers, more than 24 miles above the ground. Then he opened
the door of his gondola and jumped out. Baumgartner fell for more than
four minutes before he opened his parachute. It was the highest skydive
in history, breaking a record more than 50 years old. It was a rather
dramatic experiment in gravity.
Some of the most important insights about gravity and about mechanics,
the science of force and motion (a subject of extreme interest to Felix
Baumgartner in his jump), actually predate Newton's work by more than
half a century. Indeed, these original basic
discoveries were not only the inspiration for Newton, but also for
Einstein. If Isaac Newton was the father of mechanics, then the
grandfather of mechanics was Galileo Galilei.
Let's return for a minute to Felix Baumgartner and his record setting
jump from a balloon 39 kilometers above the Earth. At first after he
jumps, Baumgartner is in free fall. After one second, he is traveling 10
meters per second, having fallen 5 meters. After two seconds he is
traveling 20 meters per second, having fallen 20 meters since the start.
After three seconds he is traveling 30 meters per second, having fallen
45 meters. After four seconds he is traveling 40 meters per second,
having fallen 80 meters. All of these agree perfectly with our equations
for free fall. But pretty soon, after less than a minute, he stops
falling any more rapidly. He is no longer accelerating.
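Those numbers follow the standard constant-acceleration relations, v = g t and d = ½ g t², with g rounded to 10 meters per second squared. Here is a minimal Python sketch (my own illustration, not part of the lecture) that reproduces them:

```python
# Ideal free fall with no air resistance: v = g*t, d = 0.5*g*t**2.
# g is rounded to 10 m/s^2, matching the lecture's round numbers.
g = 10.0  # m/s^2

for t in range(1, 5):
    speed = g * t                # m/s after t seconds
    distance = 0.5 * g * t ** 2  # meters fallen after t seconds
    print(f"after {t} s: {speed:.0f} m/s, {distance:.0f} m fallen")
```

Running it prints 10 and 5, 20 and 20, 30 and 45, 40 and 80: exactly the figures above.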
In fact, he begins to slow down a little. Why? Is he defying the law of
free fall? No.
The reason Baumgartner does not continue to accelerate is that another
force has begun to be important: air drag. Once Baumgartner is going
fast enough the friction of the air prevents any further speed up. This
maximum speed is called his terminal velocity. Terminal velocity depends
on mass and shape and air density. In Baumgartner's case his maximum
speed is over 375 meters per second; that's faster than the speed of
sound. As he falls through denser and denser air, though, that terminal
velocity actually gets less. On Earth, at the surface, the terminal
velocity of a falling human body is still pretty fast. That's why you
need a parachute to slow the terminal velocity even more. On a planet
with a much denser atmosphere, the terminal velocity might be a lot
slower. You might not need a parachute to land safely.
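Terminal velocity is where drag balances weight. Under the common quadratic-drag model, m g = ½ ρ v² Cd A, so v_t = √(2mg / (ρ·Cd·A)). The sketch below is illustrative only; the mass, drag coefficient, and cross-sectional area are assumed round numbers for a falling human, not Baumgartner's actual data:

```python
import math

def terminal_velocity(mass, air_density, drag_coeff, area, g=9.81):
    """v_t = sqrt(2*m*g / (rho*Cd*A)): the speed where drag balances weight."""
    return math.sqrt(2 * mass * g / (air_density * drag_coeff * area))

# Assumed round numbers for a falling human (illustrative only):
m, cd, a = 100.0, 0.7, 0.7  # kg, dimensionless drag coefficient, m^2

# Thin stratospheric air vs. sea-level air (densities in kg/m^3):
print(terminal_velocity(m, 0.02, cd, a))  # ~450 m/s in very thin air aloft
print(terminal_velocity(m, 1.2, cd, a))   # ~58 m/s near the surface
```

Because v_t scales as 1/√ρ, denser air means a slower terminal velocity, which is the point of the Venus story that follows.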
I once wrote a science fiction story set on the planet Venus. The
atmosphere there is more than 50 times denser than on Earth. The hero of
my story finds it necessary to skydive without a parachute from an
altitude of more than 70 kilometers above the surface of the planet,
even higher than Felix Baumgartner. It is just possible, just barely,
for my hero to survive because the terminal velocity is so much slower
in the dense atmosphere of Venus.
With no atmosphere, as Galileo said, a falling body continues to
accelerate, and everything continues to accelerate the same. Of course,
Galileo could not actually do an experiment without air. In 1971, the
Apollo 15 mission spent a few days exploring the Moon. The Moon, of
course, has no appreciable air. On the last day on the Moon, Dave Scott,
the mission commander, performed what has to be the coolest physics
classroom demonstration in history.
[Video start.]
[Astronaut Dave Scott is speaking] Well, in my left hand, I have a
feather; in my right hand, a hammer. And I guess one of the reasons we
got here today
was because of a gentleman named Galileo, a long time ago, who made a
rather significant discovery about falling objects in gravity fields.
And we thought, where would be a better place to confirm his findings
than on the Moon. So, we thought we'd try it here for you. The feather
happens to be, appropriately, a falcon feather for our Falcon. And I'll
drop the two of them here and, hopefully, they'll hit the ground at the
same time. [Scott drops the objects. They land at the same time.]
How about that? [Video clip with Scott ends.]
It is too bad Galileo could not do his experiments on the Moon. There
are two things to note about this. First, on the Moon the acceleration
of gravity is not the same as on Earth. It's about 1/6th as great. In
Lecture 4 we'll find out why. Second, in the absence of air resistance, the
feather and the hammer really do fall just the same. Galileo had it
exactly right. All objects behave the same in free fall. It's a basic
fact about gravity. In fact, the law of free fall contains a secret
message that won't be decoded for 300 years until Albert Einstein
realizes that it is the key to everything.
What Darwin Didn’t Know: The Modern Science of Evolution (L7 "Rapid
Evolution within Species")
Darwin considered evolutionary change to be a process that occurred
very slowly. In the Origin of Species, he went out of his way to
emphasize the slow pace of evolution. He wrote, “We see nothing of these
slow changes in progress,” and he used that word “slow” 144 times
throughout the text. Even for the modifications of our domestic breeds,
Darwin declared that “the chance will be infinitely small of any record
having been preserved of such slow, varying, and insensible change.” But
research on a wide range of different species has shown that evolution
can actually happen rather quickly. So quickly, in fact, that we can
watch it as it plays out in real time.
Ironically, one of the best examples of rapid evolution comes from the
Galapagos finches that helped inspire Darwin’s ideas about slow, gradual
change. Galapagos finches have since become some of the best studied
organisms on the planet, the subject of decades of research by
biologists. Among them are Peter and Rosemary Grant, a husband-and-wife
team who have spent their careers conducting detailed studies of these
birds.
Much of the Grants’ time in the field has been spent on a single small
island called Daphne Major. Like many of the smaller islands in the
Galapagos, Daphne Major is largely uninhabited, and for good reason.
There is no reliable source of freshwater. There aren’t even any trees
to provide shade from the equatorial Sun. One advantage of working on
the remote, inaccessible island of Daphne Major is that there has never
been much of a human presence on the island to affect the natural
processes as they play out. Not even tourists stop on Daphne Major. Yet
the Grants and their collaborators have returned there every year for
more than 4 decades, beginning in 1973.
Another advantage of working on Daphne Major is its small size. Because
the island is just 84 acres in area, the Grants are able to capture every
single finch on it. By doing so each year, they came to know each of the birds
as individuals and have tracked the entire population from year to year.
And by taking blood samples, they could determine how all of the birds
are related. As they captured each bird, they also recorded many
different body measurements, from the length of its legs, to the depth
of its beak, to the color of its plumage. And the Grants collected data
on the other inhabitants of the island, including the few species of
plants that manage to survive in the harsh, volcanic rock.
In 1977, the Galapagos Islands experienced a severe drought. With their
detailed data, the Grants were poised to learn how the finches of Daphne
Major were affected by the drought. When the Grants returned after the
drought, they found that, compared to a population of 751 medium ground
finches before the drought, after the drought there were only 90. The
Grants took all their usual measurements, and when they analyzed their
data, they discovered something no one had ever seen before: evidence
that evolution had occurred in a wild species in just one generation!
Prior to the drought, the medium ground finches had beaks that ranged in
depth from about 8 to 11 mm, with an average of 9.2 mm. After the
drought, the surviving finches had an average beak depth of 9.7 mm, an
increase of about 5%. That might not seem like much, but it’s a big enough
difference to make the Grants ask: What happened? Despite experiencing a
population bottleneck, the change in beak depth wasn’t caused by random
genetic drift; larger beaks were favored by natural selection. The
drought had had a big impact on the plants living on Daphne Major,
including a plant called spurge that makes small seeds that the finches
like to eat. Without their favorite food during the drought, the finches
were forced to try to eat the only other source of food on the island, a
larger, spiky seed called caltrop. Caltrop seeds are not only spiky,
they’re also hard, and the finches struggle to crack them open. But
finches with larger, deeper beaks can apply more force to the caltrop
seeds and are therefore better at cracking them open. Finches whose beaks
were better suited to this alternative food source had a better chance
of surviving.
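To see how differential survival alone can shift an average, here is a toy Python simulation. Only the 9.2 mm starting average and the population of 751 birds come from the lecture; the spread of beak depths and the survival rule are made-up illustrative assumptions:

```python
import random

random.seed(42)

# Beak depths before the drought: mean 9.2 mm (from the lecture),
# with an assumed spread of 0.8 mm.
population = [random.gauss(9.2, 0.8) for _ in range(751)]

# Made-up survival rule: deeper beaks crack hard caltrop seeds better,
# so the odds of surviving the drought rise with beak depth.
survivors = [b for b in population if random.random() < (b - 7.0) / 5.0]

mean = lambda xs: sum(xs) / len(xs)
print(f"before the drought: {mean(population):.2f} mm ({len(population)} birds)")
print(f"after the drought:  {mean(survivors):.2f} mm ({len(survivors)} birds)")
```

The survivors' average comes out noticeably deeper even though no individual bird changed: selection on existing variation shifted the trait in a single generation.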
This was an exciting result, but the Grants didn’t stop there. They
continued returning to Daphne Major every summer, and before long they
witnessed natural selection at work again. It was particularly rainy in
late 1982 to early 1983, leading to an abundance of the finches’
preferred food, spurge seeds. With lots of tiny seeds to eat, big beaks
became more of a liability than an asset, and the average beak depth
declined by 2.5%. The Grants had not only shown that natural selection
can cause rapid evolutionary change, they had also shown that the traits
favored by natural selection can fluctuate as the environment changes.
How would Darwin have reacted to these findings? On the one hand, he
would probably have been thrilled to see clear evidence that the
mechanism he proposed for evolutionary change works exactly as he had
imagined, and the data came from a group of species that he was
responsible for bringing to the world’s attention. On the other hand, it
showed that Darwin was wrong about evolution being slow. The Galapagos
finches proved that natural selection could change a species in a
noticeable way in just one generation. Comparison of the genomes of 13
species of Galapagos finches suggests that the evolution of the entire
group has happened rapidly. The common ancestor of the 13 species lived
just 2 million years ago, and some finch species may have come into
existence in just the last 100,000 to 300,000 years. And extrapolating
with the help of DNA data, the Grants estimated that a new species might
emerge in as little as 200 years of sustained change in a single direction.
Darwin used the idea that humans can cause other species to evolve as
the opening argument in The Origin of Species. He knew that animal and
plant breeding would be a familiar concept to his readers, so he began
by pointing out how effectively breeders can develop new varieties of
crops, livestock, and pets. Pigeons were a prime example. Humans have
been breeding or having an impact on pigeons for thousands of years, but
how long does it really take to tame a wild animal? At least one line of
evidence suggests that domestication could happen quickly.
On a farm in Siberia, near the town of Novosibirsk, live the
friendliest foxes you could ever hope to meet. Unlike their wild
counterparts, these foxes enjoy human company. They wag their tails,
roll over on their backs to have their bellies rubbed, and respond to
commands like “sit” or “shake.” In short, they’re a lot like dogs. Which
is exactly the point. In 1959, biologists Dmitri Belyaev and Lyudmila
Trut began an experiment to see if they could breed foxes to become
tame. Part of the motivation was to see if foxes could become
domesticated, just as wolves had been thousands of years ago, leading to
the first dogs. The other reason for the experiment was that across much
of the USSR, foxes were being kept for their fur, but working with them
was dangerous because foxes kept in captivity tend to be aggressive.
Belyaev and Trut bought foxes from several different farms and began
breeding only those that seemed the least afraid of humans. Those
that were aggressive were never allowed to mate. Belyaev and Trut found
that the calmer a fox was, the calmer its offspring tended to be. After
just a couple of generations they were already seeing calmer behavior on
average, and a few foxes were less aggressive than any of the original
foxes had been. One fox pup in the 4th generation began wagging its tail
when a person approached, a behavior then known only in dogs. By the 6th
generation, a few of the pups were licking their caretakers’ hands and
rolling over on their backs to have their bellies rubbed. Those dog-like
behaviors became more common in each subsequent generation until the
vast majority of descendants behaved like dogs. The foxes began to look
like dogs, too. Some developed floppy ears, curly tails, and white
patches of fur. As was already recognized in Darwin’s time, these traits
are found across a variety of domestic animals. Pigs, goats, sheep, and
rabbits have floppy ears and white fur patches, and domestic pigs have
curly tails. Even though these traits weren’t being selected for in the
foxes, they became more common as the foxes became tamer.
Belyaev and Trut showed that foxes can be domesticated in much the same
way as dogs, and in just a few decades. We don’t know how long it took
the many other species of animals and plants to become domesticated, but
if the 40 generations of the fox experiment are any indication, it
might have happened quickly.
Understanding the Periodic Table (L21 "Rare-Earth Elements:
Surprisingly Abundant")
Decades ago, the Molybdenum Corporation of America renamed itself
Molycorp and later ran a promotional campaign giving away free samples
of what are known as ‘rare-earth elements.’ Their slogan was:
“These elements are no longer rare. Try them.”
Most people have no idea how much modern technology has been
transformed by these ultimate team players of the periodic table.
-Color TV colors got better, thanks to europium.
-So-called halogen lights actually depend on dysprosium.
-Magnets with neodymium have transformed everything from your
headphones to the commercial turbines that make possible modern wind
power farms.
-Terbium made X-rays much safer and made solid-state drives possible
for data storage.
-Hybrid and electric car batteries depend on 20-30 pounds of
lanthanum.
In short, largely unnoticed, the so-called “rare earth” elements have
become unsung heroes in a host of modern technology applications.
Although you may rarely hear about these elements, the good news is that
they’re actually pretty common on Earth’s surface. Yes, they are “rare”
compared to the most common elements. But compared to many of the
elements that have been known since ancient times, calling them “rare”
is a bit of a misnomer.
Cerium, which most people, even today, have never heard of, is more
abundant than copper. The top three most abundant members of the group
are more common than tin or lead. And the entire group (with one
radioactive exception) is more abundant than silver—and far more
abundant than gold. And they’re very useful, even though they are not
chemical divas like gold. For example, an estimated 50 to 100 grams of
cerium are used in every one of the millions of catalytic converters in
vehicles around the world. In catalytic converters, cerium isn’t the
catalyst, but rather part of the heat-resistant oxide mineral that
supports it.
Or consider hybrid and electric vehicles, with their big batteries
sometimes made of nickel-metal hydride. Well, that “metal” in the
battery is mostly lanthanum, with over 20 pounds used in each electric
vehicle. We could call them nickel-lanthanum-hydride batteries! They
save space and weight, and they’re about twice as efficient as
traditional lead-acid batteries. In fact, ‘hidden earth’ elements might
have been a better label for what are still referred to as “rare earth
metals.” As that alternative name implies, they’re not truly rare;
they’re just hidden.
In the Goldschmidt geochemical classification, these oxygen-friendly
elements are lithophiles, literally “rock-loving” elements that combine
readily with oxygen. (And the f-block of ‘rock-loving’ elements is
squarely located between other groups of ‘rock-loving’ elements on the
table: groups 1, 2, and 3, as well as groups 4 and 5.) As the Earth
differentiated, these lithophile elements chemically preferred to float
with the lighter-weight silicate minerals, oxides, and sulfides near the
surface, while iron and the “iron-loving” elements largely sank into the
core.
So, finding a mixture of rare earth metals here on Earth isn’t such a
chore. The bigger challenge is separating them from one another. In
fact, it happened more than once that some of the greatest chemical
minds of their time thought they had produced a pure sample of a new
element, only to later discover that their creation was merely another
mixture of the rare-earth elements.
Unlocking the Hidden History of DNA (L7 "Microbes Manipulate Us,
Viruses Are Us")
The completion of the Human Genome Project in the early 2000s led to
two shocking realizations. First, biologists realized that less than 2%
of the DNA in the human genome actually codes for the proteins that make
and run the human body. That 2% figure looks even crazier when you
combine it with the second shocking realization: that the name Human
Genome Project was something of a misnomer.
It turns out that 8% of our genome is not human at all. A full quarter
billion base pairs of our DNA are simply old virus genes that got
inserted long ago and never weeded out. Some rogue virus infiltrated our
distant ancestor’s sperm or egg cell, which produced a baby. And the
lucky virus got to hitch a ride around indefinitely in all of that
baby’s descendants, including us today. Put the two findings together,
and our “human” genome might look like it’s four times more virus than
human.
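The arithmetic behind those two findings is easy to check. Here is a quick sketch; the roughly 3.2 billion base-pair genome size is a commonly cited figure I am assuming, since the lecture gives only percentages:

```python
# Back-of-envelope check of the lecture's figures.
genome_bp = 3.2e9        # assumed total human genome size, base pairs
viral_fraction = 0.08    # ancient retroviral DNA (from the lecture)
coding_fraction = 0.02   # protein-coding DNA ("less than 2%")

viral_bp = genome_bp * viral_fraction
print(f"viral DNA: ~{viral_bp / 1e6:.0f} million base pairs")  # ~256 million: a quarter billion
print(f"virus-to-coding ratio: {viral_fraction / coding_fraction:.0f}x")  # 4x
```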
So how is this possible? Let’s go back to 1958, when Francis Crick of
double helix fame published his central dogma of molecular biology. It
states that DNA produces RNA, and RNA produces proteins, in that order.
But starting in the 1960s, scientists discovered that nature cares
little for dogmas. It turns out that certain viruses, including HIV, can
manipulate DNA in heretical ways.
Unlike everything else in nature, many viruses use RNA instead of DNA to
store genetic information. These RNA viruses include influenza,
coronavirus, measles, polio, rabies,
Ebola, and many strains of the common cold. Because RNA is typically
single-stranded, it’s far less stable than the double helix of DNA. As
a result, the genomes for RNA viruses never get very large. Polio is
just 7,000 bases, influenza around 14,000. Even the COVID-19
coronavirus, which is at the upper limit of stability for an RNA virus,
is only 30,000 bases.
But among RNA viruses, there’s also a more devious group, including
HIV, that infects cells differently. Most viruses don’t work this way,
but these special RNA viruses can coax the cell into turning the virus’s
RNA back into DNA. This means running the central dogma of molecular
biology backwards, which is why we call these viruses the retroviruses.
Even scarier, the retroviruses then trick the cell into splicing that
new viral DNA into the cell’s own genome.
In short, these retroviruses fuse their virus genetic material with the
cell’s non-virus genetic material. They show no respect for the line
that we would prefer to draw between “their” DNA and “our” DNA.
Retroviruses rezone our genetic neighborhoods to make room for
themselves and never leave. This is the world of microbes, where viruses
can manipulate the DNA of animals for their own ends. In fact, as we’ll
see, some Machiavellian microbes can even manipulate the minds of
animals to make us do their own bidding.
Endings matter even more.
Major Transitions in Evolution (L11 "The Egg Came First — Early
Reptile Evolution")
Welcome to the lecture where we answer the age-old question, “Which
came first, the chicken or the egg?” by using evolution. As we’ll see,
understanding why the egg came first turns out to be an excellent
example of how a deep time perspective can answer many previously
unsolvable mysteries.
Why was the development of enclosed eggs so important? The simple
answer is that it opened more options for tetrapods to live their entire
lives on land and reproduce on land. Furthermore, they could better
protect their eggs when these were laid on land. For example, at the
time egg-laying evolved, egg predation pressures on land were probably
quite low. The only other tetrapods that could’ve eaten eggs would’ve
been amphibians and, because amphibians were mostly restricted to the
water, they would’ve had to work a whole lot harder to get up on land
and find eggs. A fair number of insects today are egg predators,
especially ants, and insects were certainly around during the
Carboniferous period. But none of the insect groups known to prey on
amniote eggs had evolved yet in the Carboniferous. In other words,
terrestrial environments would’ve offered a clear adaptive advantage to
any tetrapods that could cut themselves off from those water bodies. The
development of extensive forests just before the evolution of amniotic
eggs suggests that terrestrial ecosystems were likely a driving factor
in amniote evolution.
This timing demonstrates yet another probable relationship between
tetrapod evolution and land plants. First, tetrapods themselves evolved,
moving into the shady forests that held so much food and shelter. Then, these
forests perhaps became patchy and water bodies became more separated
from one another. As that happened with global climate change during the
Carboniferous period, natural selection would’ve favored those tetrapods
that could still maintain body moisture and reproduce without shade or
water. Once freed from the water, this new lineage of tetrapods could move
into and evolve in a wide variety of environments on land. They could
face new selection pressures that sorted out their genes in ways that
produced the many new clades of anapsids, synapsids, and diapsids during
the Permian and Triassic periods from about 300–200 million years ago.
That’s also a broad answer to the chicken and egg question.
The first amniotes producing eggs date from at least 310 million years
ago, possibly much earlier, while the first bird ancestor, as we will
see, dates from about 150–145 million years ago. That puts the gap
between eggs and birds at over 150 million years. That’s a span that
also includes two major extinction events and that’s only about halfway
to the chicken. From the first birds all the way to jungle fowl and then
to the first actual chickens is a second almost equally enormous
interval. This is deep time popping up in an almost mind-boggling way.
The first eggs predate the first chickens by about a third of a billion
years. Think about that one and how prepared you’re going to be to
answer the chicken-or-egg riddle next time it’s posed to you.
Biochemistry and Molecular Biology: How Life Works (L18 "How Plants
Make Carbs and Other Metabolytes")
Caffeine is biochemically a methylxanthine, and that's a purine
compound, similar to the bases adenine and guanine in DNA and RNA.
Indeed, caffeine is derived from the purine nucleotides.
Let's remember that nucleotides contain a base, in this case, adenine
or guanine, a sugar, and one or more phosphates. Both adenosine
monophosphate (AMP) and guanosine monophosphate (GMP) can give rise to a
related molecule, xanthosine monophosphate, XMP. Now, XMP loses its
phosphate group to become xanthosine, which contains the base xanthine
and a ribose sugar. From there, 3 methyl groups (CH3) are added, and the
sugar is lost, giving us caffeine.
Caffeine binds adenosine receptors in the brain. This isn't terribly
surprising, given that caffeine is structurally similar to adenosine.
But what does that do? It turns out that adenosine normally accumulates
during our waking hours and binds to those brain receptors.
Adenosine receptors filling up with adenosine is a signal to our body
to slow down and rest. This is why you get sleepy after a long day. When
you sleep, adenosine levels go down, emptying the receptors and
eventually leading to wakefulness, except maybe on Monday morning.
Caffeine binds to those adenosine receptors, but it doesn't make you
sleepy. And because adenosine can't bind when caffeine is occupying its
receptors, you stay alert. If you drink coffee regularly, however, the
brain compensates by making more adenosine receptors. And when that
happens, you need to drink more coffee than someone who never started
drinking coffee, just to fill up the adenosine receptors and stay awake.
You may have noticed this...
Big History: The Big Bang, Life on Earth, and the Rise of Humanity
(L48 "Humans in the Cosmos")
Early in this course I said that one way of thinking about big history
might be to imagine a community of people, old and young, around a
campfire. As I say this, I'm actually thinking of some wonderful
holidays that my wife and I spent with our children and friends of ours
who also had children. And we spent them at a lake in Australia, north
of Sydney. Each evening we would build a fire, and because all children
are natural pyromaniacs, we would sit around it till late at night, and
they'd poke sticks into it. And eventually we could look up and see a
wonderful starlit southern-hemisphere sky. I'm sure you can think of
similar occasions. And it's not at all hard to imagine that most people
perhaps throughout most of human history have had similar experiences.
Now, we can imagine that the youngest people in the group start asking
questions, questions about the meaning of life and
about where things come from. So they say: Why are there so many people
in the world, for example, or, How big is the world really? And imagine
that we try to give them the best and most intelligent answers we can.
Our answers might take the form of the story we've been following
throughout this course, but played in reverse. So here's how it might
sound if I told this to an adult audience.
Let's begin with human history. Today's society is the largest and by
far the most complex human society that has ever existed. Today there
are more than 6 billion humans. And though they live in distinct
societies with different states that are often in conflict with each
other, all these communities are linked through trade, travel, and
modern forms of communication into a single global community.
This modern global community, which is the world we live in today, was
created very recently, just during the last 300 years. About 300 or 400
years ago, human beings, initially in some parts of the world and then
eventually throughout the world, crossed a sort of threshold. Human
societies became more interconnected, and as commerce was becoming more
important than ever before, people in some regions began to innovate
faster than ever before.
Now, these changes are often described as the Industrial Revolution.
What they did was to lay the foundations for today's vast and rapidly
changing societies by suddenly introducing a whole wave of new ways
of dealing with the environment and getting energy.
For several thousand years before this, most people had lived in the
large, powerful communities that we describe as agrarian civilizations.
They had cities with magnificent monumental architecture. They had
powerful rulers. And these state systems were sustained by large
populations of peasants who lived in the countryside and produced most
of society's resources. Innovation was much slower than today, and
that's why things tended to change much more slowly. The pace of history
was slower. And there were far fewer people than today. Two thousand
years ago, for example, there were probably only about 250 million
people on Earth.
Where did the agrarian civilizations come from? Well, the first
agrarian civilizations appeared in regions such as Mesopotamia, Egypt,
and China. And they appeared about 5,000 years ago. During the previous
5,000 years, an increasing number of humans lived in small, relatively
independent communities of small farmers that were governed by local
chiefs. But there also existed many people who lived not by farming, but
by foraging, that is to say, by gathering the resources they needed as
they migrated through their territories.
Agriculture appeared about 10,000 or 11,000 years ago. Before the
appearance of agriculture, all human beings lived as foragers (or
hunter-gatherers). What agriculture did was to
greatly increase the amount of resources that human communities could
extract from a given area. And the result was it led to rapid population
growth, and eventually that led to the creation of the very large human
communities that were the first agrarian civilizations. So that's why
the appearance of agriculture counts as the most important threshold in
history before the appearance of the modern world. For the 200,000 years
or so before agriculture, all humans had lived as foragers. That means
they lived in small, family-sized nomadic communities of perhaps less
than 50 people for the most part, sometimes with links with their
neighbors. For most of this time, there were very few humans on Earth,
probably little more than the number of great apes in the world today.
So that's a period of about 200,000 years.
The first members of our species, Homo sapiens, appeared about 200,000
to 300,000 years ago. We don't know exactly when, but we're pretty sure
they appeared somewhere in Africa. Their appearance counts as the most
important threshold before the appearance of agriculture. What made
these first humans different from all other animals and what accounted
for their ability to explore so many different environments was their
ability to exchange and store information about their environments.
Humans could talk to each other. And they could exchange information
with a speed and efficiency that no other animal could match. And that
means they could store information. In addition, unlike any other
animals, they could ask about the meaning of existence. Humans were
probably storytellers from the very beginning of their history.
Now that's a brief history of how human societies developed into the
remarkable global community of today. But how were the first human
societies created? Humans after all are living organisms. So to explore
the origin of humans, of our species, we must describe the history of
living organisms. And that's the next stage in this story.
Our species evolved in the same way as all other species, by natural
selection: Tiny changes in the average qualities of each community
slowly accumulated over many generations, until the nature of each
species slowly changed. And this is the process that created the huge
variety of species today. Our ancestors evolved from highly intelligent
bipedal ape-like ancestors known as hominines. The first hominines
appeared about 6 million years before the appearance of Homo sapiens,
our species.
The hominines, in turn, were descended from the mammals known as
primates. They were related most closely of all to the great apes, such
as the chimpanzees and gorillas. The primates as a group were
tree-dwelling mammals, they had large brains, they had dexterous hands,
and they had stereoscopic vision. These are all the things you need to
live in trees. And as a group, they had appeared about 65 million years
ago.
The primates were mammals. Mammals were a type of animal that had first
evolved about 250 million years ago. The mammals, in turn, were
descended from large creatures with backbones whose ancestors had
learned to live on the land about 400 million years ago. All amphibians
and reptiles are also descended from these ancestors.
Large multi-celled organisms, like ourselves, had only been around
since about 600 million years ago. That's the first time, during the
so-called Cambrian period, that you get very large animals made up of
billions of individual cells. Before that, all living organisms on Earth
were single-celled. Most would have been invisible to a human eye. The
first living organisms on this planet seem to have appeared as early as
3.8 billion years ago. That's just 700 million years after the formation
of our Earth. They were the remote ancestors of all living creatures on
Earth today, including you and me.
What's remarkable is the speed with which they appeared, just 700
million years after the creation of our planet, and our early planet was
not a very hospitable place for life. So the speed with which they
appeared suggests that life is likely to appear in our Universe wherever
the conditions are right, and that means wherever we find planets that
are bathed in the light from nearby stars, but far enough away for
liquid water to form, because water in liquid form provides an ideal
environment for complex chemical reactions like those that formed living
organisms.
The other crucial ingredient, of course, is a rich mixture of
chemicals. So it seems that wherever you get a hospitable environment
for life in our Universe, it's very probable that life will appear. So
the formation of life itself as we move back in time is the next most
important threshold before the appearance of our species, though we've
also seen lots of minor thresholds as life evolved.
Life, in turn, was only possible where the conditions were right. So to
understand the appearance of life, we need to understand the appearance
of planets, of stars, and of chemical elements. That takes us to the
history of geology and into astronomy.
Our Earth was formed about 4.5 billion years ago. And it was formed
along with all the other planets, moons, and asteroids and comets of our
solar system. All of them were formed as a byproduct of the processes
that created our Sun. What happened was that debris, leftover bits and
pieces, if you like, orbiting the Sun smashed together within the
Earth's orbit and slowly accumulated into larger and larger lumps until
eventually they all aggregated into a single large lump, which was our
early planet.
The formation of our planet, therefore, is the next important threshold
in our story as we move back in time. Now, how common was this process?
Well, solar systems may have formed countless billions of times in the
history of the Universe. But this happens to be the only solar system
whose origins we know much about at present.
Our planet, like the living organisms that inhabit it, is made up of
many different chemical elements. In fact, in our body, you can probably
find traces of elements from across the periodic table. So neither our
planet Earth, nor you and I, could have been formed if the
chemical elements had not been manufactured. They were manufactured in
the violent death throes of large stars, in supernovae, or in the dying
days of other stars. We don't know when the first stars died and
scattered new elements into space. But it's very probable that it
happened within 1 billion years of the creation of the Universe. So this
new threshold, the creation of chemical elements, takes us back more
than 12 billion years, close to the beginnings of the Universe.
Since then, billions upon billions of stars have died, scattering new
elements into interstellar space. Now obviously, stars could not have
died if stars had not been born. Stars were born, like our Sun, as clouds
of gas, clouds of matter collapsing under the pressure of gravity heated
up in their centers until eventually hydrogen began to fuse, and at that
point they lit up. They turned into stars. Star formation has continued
ever since the first stars appeared; that's why, as I said in an earlier
lecture, there are probably more stars in the Universe than there are
grains of sand on all the beaches and deserts of our Earth.
The first stars may have formed more than 13 billion years ago, quite
soon (within 200 or 300 million years) after the origin of our
Universe. So as we move back in time, the creation of stars counts as
one more great turning point in the story. So we've looked at human
societies, life, the creation of planets and stars. And that takes us
back to the beginning.
Obviously nothing could have existed if the Universe itself had not
been created. The Universe, we now know, was created about 13.7 billion
years ago. Stars were formed from great clouds of hydrogen and helium
atoms which, like the force of gravity itself, were created at the
moment of the creation of our Universe.
This is the first turning point of all, the first threshold of all, and
in many ways it's the most mysterious. Our Universe began as a tiny,
hot, expanding ball of something that popped out of nothingness like an
explosion and the explosion, which cosmologists call the big bang, has
continued ever since. You and I and the planet we live on are simply
part of the debris. That's the very beginning of the story. And we can't
go further back in time.
So that's a quick summary of the story we've told in this course. The
story I've just told is a highly condensed summary of the best modern
scientific attempts to understand origins, to understand how everything
around us was created. It's our best shot at explaining origins, just as
every traditional creation story also represented the best attempt,
given the available knowledge, to answer all the fundamental questions
about origins.
Outsmart Yourself: Brain-Based Strategies to a Better You (L6 "The
Myth of Multitasking")
It’s often said that the greatest power of the human brain is that it
can perform many different processes in parallel. You open your eyes,
and your brain processes incoming visual information—you don’t have to
choose to do so. While you’re at it, you also touch, hear, taste, and
smell. You do all of these things at the same time, in parallel—the
processing just happens.
The same thing seems to happen for many actions that we perform. You
can walk across the room while searching your pockets for your keys,
while also having a conversation, while also pausing to say hello to a
friend as they pass by. You do all of these things while, of course,
continuing to see, touch, taste, hear, and smell. The human brain has
billions of neurons, hundreds of thousands of circuits, and they can
process information in parallel. It’s an amazing thing. Modern computers
are, in some respects, much faster and more accurate than the human
brain in terms of sequential operations, but those artificial computers
are just starting to scratch the surface of the amazing parallel
processing that the human brain performs.
In this lecture, I’m going to argue that you should try to limit the
number of things that you try to do at the same time. My primary tip
will be to explore the thrill of doing one thing at a time—single
tasking or monotasking as it’s sometimes called. I’ll also argue that
there are hidden costs to so-called multitasking, both short- and
long-term problems that emerge when you try to do more than one thing at
a time.
We often engage in one primary task—say, writing something—while also
engaging in a secondary task and a tertiary task as well. For instance,
I might answer the phone when it rings and talk with a coworker. My
computer periodically makes a beep, indicating that an e-mail has
arrived. A little pop-up window appears, indicating who the e-mail is
from and what the subject of the message is. So, I’m also monitoring
this incoming information and making decisions about whether or not I
should stop writing and respond to it. So, it’s a good thing that the
human brain is so good at multitasking, because our modern world demands
it. On the surface, at least, it seems far more efficient to do
multiple things at once. To not do so would be to waste our natural
ability. All we need to do is develop the requisite expertise, and
perhaps we can do the work of two or more people. Thank you, technology.
There’s a problem, however. We feel like we can do multiple tasks at
the same time. There’s actually a feeling of pleasure that many people
describe associated with multitasking. It can be invigorating to push
your mind and body up near its maximum capacity for processing
information. The problem is, when we carefully assess people’s
performance during multitasking, significant reductions in performance
are found. In some cases, the drops in performance can be really big.
The drop in performance is bad, but perhaps the most troubling thing is
how unaware we are of the drop. We can feel like we’re doing our best
work while actually performing pretty badly.
Think about the typical task of writing something while also monitoring
your incoming e-mail. You’re thinking about the topic of your writing,
thinking about the global structure of the document you’re composing,
you’re thinking about what to say next, and then composing that next
sentence. This is a pretty engaging task that pulls from a variety of
different brain resources. While you’re doing that, there is that
telltale bong from your computer that means an e-mail has arrived. You
feel like you’re continuing to write while you glance up and read that
message. You decide the e-mail can wait and continue writing. It feels
like you’re doing those two things at the same time, but what you
actually do is stop the thought processes that go with writing, you
switch to thinking about the e-mail, and then return to the writing.
That switch takes time, and it turns out it requires a substantial
amount of brain resources to accomplish.
Perhaps the most publicized application of this research in the real
world has come in the domain of driving while using a cell phone. I
hopefully don’t have to tell you that it’s a bad thing to drive while
texting or talking on a phone, but, just in case, it is a bad thing to
do. The extra risk of being involved in a car accident associated with
using a cell phone while driving is even a little larger than the risk
associated with driving while legally intoxicated. If you wouldn’t drive
drunk, you should certainly not drive while using your phone.
How does the cell phone create this problem? There are some obvious
things that occur to most people. When you’re reading and typing text
messages on a phone, you have to look away from the road, at least for a
few seconds at a time, right? If something happens up in front of your
car during those few seconds that you’re looking at your phone and not
looking at the road, there’s no way you can react. If the driver in the
car in front of you slams on his brakes, you won’t even start to react
until you look back and see that car’s brake lights. Even when you talk
on a handheld cellular phone, you usually have to look down to dial the
number or select the contact and hit send.
Phone makers and carmakers have addressed this problem; they’ve created
hands-free cell phones. Problem solved, right? Unfortunately, no.
It is true that a fully hands-free system can enable you to keep your
eyes on the road the whole time, but several studies have found that the
increased accident rate stays almost as high with hands-free cell phones
as with handheld cell phones. What’s going on here? Why should the
hands-free phone be almost as bad, even when your eyes can stay forward
at all times? The problem? Multitasking.
The experiments we’ve discussed here apply very specifically to this
situation, in at least two important ways. First, when it comes to
processing sensory information and making a discrete decision about it,
the human brain is limited to one decision at a time. No matter how
expert you are at the other pieces of performing the task, the decision
part remains a single-task bottleneck. When you’re pondering the
statements of someone on the other end of a phone or text exchange,
you’re making a lot of decisions. Should I respond now or keep
listening? What should I say? Should I mention our last conversation or
not? Lots and lots of decisions.
And every time you’re making one of those decisions, you’re not able to
make visuomotor action decisions about driving the car. Should I hit the
brakes? Should I change lanes? Those decisions have to wait until the
bottleneck is freed up. We get very good at alternating between two or
more tasks, but the switching always introduces a little delay. And at
60 or 70 miles per hour, a little delay can translate into the
difference between avoiding a collision and having an accident.
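To make the stakes concrete, here is a rough back-of-the-envelope
calculation; the half-second switch cost below is purely illustrative,
since measured switching delays vary with the task:

\[
60~\text{mph} = 88~\text{ft/s}, \qquad 88~\text{ft/s} \times 0.5~\text{s} = 44~\text{ft}
\]

During a single half-second switch at highway speed, the car covers
about 44 feet, roughly three car lengths, with no driving decisions
being made.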
NOTE: This excerpt is part of a terrific example of a lecture that makes
learning feel fun and empowering throughout, thanks to intensely
well-planned use of visuals, including appropriate photographs,
highlighting, graphics, and on-camera demonstrations.
Understanding the World's Greatest Structures from Antiquity to
Modernity (L24 "Strategies for Understanding Any Structure")
Now that you’ve learned all the major principles of structural
mechanics and examined many of the world’s greatest structures in terms
of those principles, you should be able to analyze any structure you
come across—great or humble—just as we have in this course. In this
lecture, you have the chance to test your newfound analytical skills as
we look at one last group of great structures.
Perhaps the most straightforward approach to understanding a new
structure is by direct comparison with structures you’ve already seen.
For example, the dome of the U.S. Capitol can be understood through
direct comparison with the dome of St. Paul’s Cathedral in London. Like
St. Paul’s, the Capitol dome uses a three-part configuration:
nonstructural outer and inner shells concealing a parabolic structural
dome. Both work the same way; the only significant difference is that
St. Paul’s structural dome is brick, while the Capitol’s structural dome
is an open iron framework. In this difference, we can see the influence
of the iron-framed dome of the Paris Grain Market, built about 100 years
after St. Paul’s and about 50 years before the Capitol.
Similarly, when we encounter the spectacular new Tokyo Sky Tree, we
should recognize it as a descendant of the Eiffel Tower. The Sky Tree,
at 2080 ft, is the world’s tallest tower. Its overall shape and truss
construction reflect the same response to wind load that Eiffel used in
Paris, but the Sky Tree goes further, with a reinforced concrete core
and a uniquely varying cross-section: triangular at the base, for
stability, transitioning to circular at the top, for decreased wind
resistance.
But what happens when you encounter structures in unfamiliar
categories? Perhaps you can draw analogies with different types of
structures that nonetheless carry load in the same way.
For example, we haven’t discussed dams in this course, yet you can see
at a glance that the Hoover Dam is just an arch turned sideways, holding
back a wall of water the same way that the Pont-Saint-Martin’s arch
carries the weight of the stone above it.
What about tunnels? I hope you can now see the Chunnel, that 31-mile
tunnel linking England and France, as just another interesting variation
on the arch. Completed in 1994, the Chunnel actually consists of three
passages: two rail tunnels and a service tunnel. All three were
excavated with immense tunnel boring machines, shown here.... The tunnel
lining consists of arc-shaped precast concrete segments placed around
the entire perimeter of the excavation, as shown here. If you’re
thinking that these segments look a lot like the stone voussoirs of a
Roman arch, congratulations: You are learning to see and understand
structure. The tunnel lining of the Chunnel works exactly like an arch,
except its principal loading is soil pressure, which is exerted inward
around the entire perimeter of the tunnel lining, like this; and so the
tunnel lining needs to be a full circle, rather than the semicircle of a
traditional arch.
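As a rough sketch of why the full circle works (the symbols here are
illustrative: p for the uniform radial soil pressure, r for the tunnel
radius), a thin circular ring under uniform external pressure carries a
pure hoop compression:

\[
N = p\,r
\]

So every segment of the lining, like every voussoir of an arch, is
squeezed in compression, which is exactly what concrete handles best.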
In that sense, the Chunnel is also quite similar to the Treasury of
Atreus, that ancient Mycenaean corbelled dome, built underground with
its circular layers of stone held in place by soil pressure.
Sometimes you’ll encounter a structure that superficially resembles
one of our categories but is actually something else entirely. For
example, the Qiancheng Bridge in China’s Fujian Province is a rainbow
bridge, a style that dates to the 11th century A.D. Most people refer to
these structures as arch bridges, but the structural system isn’t really
an arch: there’s no lateral support at its base. In fact, it’s a rigid
frame, similar to the rigid frames that we saw in iron- and
steel-framed buildings. It doesn’t look the same, because the modern
rigid frames we examined in Lecture 18 get their rigidity from
specially designed connections, while the rainbow bridge gets its
rigidity from this interweaving of transverse and longitudinal
elements. But, like a modern rigid frame, its members carry load in
bending and axial compression combined, rather than just in compression
as an arch does.
By the way, the frame bridge is not just an ancient Chinese
technology. Here’s a typical modern example: the Fahy Bridge over the
Lehigh River in Bethlehem, Pennsylvania. Because we already know how
rigid frames work in buildings, we can understand this analogous
structure without too much difficulty.
When you encounter a structure for which there aren’t any obvious
analogies, you can return to the technique of analyzing the structural
system we discussed in Lecture 9....
That said, you can gain many fascinating and rewarding insights about
structures without any sort of formal analysis. Structures often
communicate with us in clear and compelling ways simply through the
shapes and proportions of their elements.
The 12 towers of London’s Millennium Dome tell us that they carry load
in compression by their stout proportions and the orientations of the
attached stay-cables. The array of cables radiating out from the towers
tells us that they carry tension by virtue of their slender proportions.
We might call this the “language of structure,” a language that
structural elements use to tell us how they work. If we can read the
language of structure, if we can discern how members carry load based
solely on their shapes and proportions, we don’t need to do a formal
structural analysis; the members themselves tell us that the Millennium
Dome, for example, is a cable-stayed building.
So my final recommended strategy for seeing and understanding structure
is to learn to read the language of structure; to see the shapes and
proportions of structural elements as subtle messages about their
load-carrying purpose. In this language, there is no more interesting
bit of structural vocabulary than the parabola.
There is something very special about the parabola, and its cousin,
the catenary. In this course, we’ve seen that the parabola is a direct
reflection of the underlying science that governs the behavior of many
different types of structural elements. It’s the natural shape of a
draped cable; it’s the shape of the thrust line in an arch; it’s the
shape of the moment diagram for a uniformly loaded beam. So when we see
the parabolic form in a structure, we’re almost always looking at an
element that was optimally designed for load-carrying. We’ve seen the
parabola in the main cables of great suspension bridges; in the vaulting
of the Persian imperial palace at Ctesiphon; and in arches, from
Eiffel’s Garabit Viaduct to Calatrava’s Campo Volantin Bridge in Bilbao.
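A quick sketch of the underlying mathematics shows why the same curve
keeps appearing; the symbols here are illustrative, with w the uniform
load per unit length, H the horizontal tension in a cable, and L the
span of a beam:

\[
\text{cable: } y(x) = \frac{w x^2}{2H}, \qquad
\text{beam moment: } M(x) = \frac{w\,x\,(L - x)}{2}
\]

Both expressions are quadratic in x, and the thrust line of a uniformly
loaded arch is simply the cable curve inverted, which is why all three
trace out parabolas.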
But the parabola is found in more than just cables and arches. It’s
also in the profile of optimally designed beams, like the graceful
box-girders of the Raftsundet Bridge in Norway; and even in trusses,
like Brunel’s Royal Albert Bridge at Saltash, a particularly rich
example because we get two parabolas for the price of one: a top chord
that works like an arch, and a bottom chord that works like a cable.
Now that you understand why parabolas are used in so many different
kinds of structural applications, you’re going to start noticing them in
all sorts of unexpected places.
For example, you’ll occasionally encounter them in the facades of
buildings, as you can see in Marquette Plaza, the former Federal Reserve
Bank Building in Minneapolis. Here, that parabolic shape on the facade
corresponds to a steel tension element that works like a draped cable to
support the upper floors of the building, allowing an open, column-free
first-floor lobby.
Here’s that very same concept applied in reverse: the Broadgate
Exchange House in London. That huge parabolic arch carries the weight of
the building in compression; and because it’s supported only out at its
ends, the arch system allowed the building to be constructed directly
over a set of train tracks running below ground level. The tower of the
cable-stayed Denver Millennium Bridge is a compression member with a
parabolic profile; and the tower of the Reggio Emilia Bridge in Italy is
an immense parabolic tube.
I could provide hundreds more examples like these, but that might
spoil all of your fun. Now that you know how to look for the
characteristic shapes and proportions of the language of structure,
you’re going to be amazed at what you find.
Let’s conclude this course with a look at my personal favorite
structure, Pier Luigi Nervi’s Palazzetto dello Sport in Rome, built in 1957...