[00:00] (0.24s)
Mathematics is amazing because it
[00:03] (3.20s)
reveals truths, unambiguous, provable
[00:06] (6.56s)
truths that defy expectations. Today, I
[00:10] (10.32s)
want to tell you my favorite mindblowing
[00:12] (12.80s)
math facts. Starting with number 10, the
[00:16] (16.88s)
p-adic numbers. We're used to working
[00:19] (19.28s)
with real numbers where we have some
[00:21] (21.68s)
digits before the point and then
[00:24] (24.32s)
possibly some after the point, up to
[00:26] (26.56s)
infinitely many of them. In
[00:29] (29.36s)
the real numbers, 0.999...
[00:32] (32.88s)
with an infinite number of nines is equal
[00:35] (35.04s)
to one. But you already knew that. Did
[00:37] (37.92s)
you also know though that there's a
[00:40] (40.56s)
completely different way to do addition
[00:43] (43.36s)
with what's called the p-adic numbers?
[00:46] (46.64s)
These exist for any prime p, and the construction also works for other bases. I just
[00:49] (49.84s)
want to give you an example for p = 10.
[00:53] (53.12s)
P-adic numbers have expansions that go to
[00:55] (55.68s)
infinitely many digits to the left. So
[00:59] (59.20s)
to larger place values, rather than to the
[01:02] (62.16s)
right to smaller ones. This leads to
[01:05] (65.12s)
the following stunning addition law.
[01:07] (67.68s)
Suppose you have 999 and so on all the
[01:10] (70.88s)
way to infinity to the left. Now we add
[01:13] (73.68s)
one; that's a 1 with 0 0 0 and so on to the left.
[01:18] (78.00s)
What do we get? When you add the
[01:19] (79.92s)
rightmost 9 + 1, that gives 10. Write
[01:22] (82.80s)
down zero, carry one. Add to the next
[01:25] (85.28s)
nine, that gives 10. Write down zero,
[01:27] (87.36s)
carry one, and so on. The result is 0 0
[01:30] (90.24s)
0 all the way to infinity to the left
[01:33] (93.04s)
which is, well, zero. This means that this
[01:36] (96.48s)
infinite string of nines to the left is
[01:39] (99.76s)
actually minus one. I'm not making this
[01:43] (103.04s)
up. This is actually how it works.
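As a minimal sketch (my own illustration, not from the video, and the variable names are mine), here's a short Python check of that carry argument: truncate the 10-adic number ...999 to its rightmost N digits, add 1, and watch every digit you keep turn into a zero.

```python
# Minimal sketch (not from the video): check that ...999 + 1 = ...000
# by adding digit by digit, keeping only the rightmost N digits.
N = 20
nines = [9] * N              # ...999, least-significant digit first
one = [1] + [0] * (N - 1)    # ...0001, least-significant digit first

carry = 0
result = []
for a, b in zip(nines, one):
    s = a + b + carry
    result.append(s % 10)    # digit to write down
    carry = s // 10          # carry into the next column
print(result)                # [0, 0, 0, ...] -- every kept digit is zero
```

Equivalently, summing the geometric series formally gives ...999 = 9(1 + 10 + 100 + ...) = 9/(1 - 10) = -1, which matches the result of the addition.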
[01:46] (106.48s)
Nine, Gabriel's horn. The surface obtained by
[01:49] (109.68s)
rotating the curve 1/x for x larger
[01:52] (112.88s)
than one about the x-axis has finite
[01:56] (116.24s)
volume but infinite surface area. In
[01:59] (119.44s)
other words, you could fill it with
[02:01] (121.20s)
paint but could never coat its surface.
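For reference, here is a back-of-the-envelope version of this claim (my summary, not spelled out in the video): rotating y = 1/x for x ≥ 1 about the x-axis gives

$$
V=\pi\int_1^\infty \frac{dx}{x^2}=\pi,
\qquad
A=2\pi\int_1^\infty \frac{1}{x}\sqrt{1+\frac{1}{x^4}}\,dx
\;\ge\; 2\pi\int_1^\infty \frac{dx}{x}=\infty,
$$

so the enclosed volume converges while the surface area diverges.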
[02:04] (124.16s)
Eight, the optimal packing for 17
[02:07] (127.68s)
squares. Imagine you have a set of n
[02:10] (130.40s)
square tiles. What's the smallest larger
[02:14] (134.32s)
square that they'll all fit into without
[02:17] (137.60s)
overlapping? The answer is obvious if n
[02:21] (141.04s)
is a square number, and for most other
[02:23] (143.28s)
numbers, the results look reasonable
[02:25] (145.68s)
enough. For 17 squares, the best known
[02:29] (149.28s)
result is this. This is the best known
[02:32] (152.48s)
arrangement. It's not been proved that
[02:34] (154.56s)
it's actually the best one. The lesson
[02:37] (157.20s)
here is that even simple maths problems
[02:39] (159.68s)
can be surprisingly hard to solve.
[02:42] (162.32s)
Seven, metalogical contradictions.
[02:45] (165.68s)
This sentence is false is a classical
[02:48] (168.80s)
example of a contradiction caused by
[02:51] (171.28s)
using a language to make statements
[02:53] (173.44s)
about itself. If the sentence is false,
[02:56] (176.40s)
then it's true. And if it's true, then
[02:58] (178.24s)
it's false. So what is it? Another
[03:01] (181.12s)
example is the barber paradox. The
[03:03] (183.84s)
barber cuts the hair of all people who
[03:06] (186.08s)
don't cut their own hair. Does he cut
[03:08] (188.16s)
his hair or does he not? If he does, he
[03:10] (190.96s)
doesn't. If he doesn't, he does. The
[03:13] (193.52s)
mathematical version of this is the
[03:15] (195.52s)
question of whether the set of all sets
[03:17] (197.84s)
that don't contain themselves contains
[03:20] (200.48s)
itself. You might have heard of these
[03:22] (202.56s)
already, but maybe not this one known as
[03:25] (205.92s)
Berry's paradox. What is the smallest
[03:29] (209.92s)
positive integer not definable in under
[03:32] (212.88s)
60 letters? The problem is that this
[03:35] (215.76s)
phrase itself has 57 letters. So if you
[03:39] (219.84s)
could find the number, it wouldn't
[03:41] (221.44s)
fulfill its own definition. Does the
[03:44] (224.08s)
number exist? These logical problems are
[03:47] (227.20s)
all related to Gödel's incompleteness theorems.
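In symbols (my notation, not shown in the video), the set-of-all-sets version mentioned a moment ago is Russell's paradox, and the contradiction is one line:

$$
R=\{\,x \mid x\notin x\,\}
\quad\Longrightarrow\quad
R\in R \iff R\notin R.
$$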
[03:50] (230.72s)
Six, the monster group. Mathematicians use groups
[03:53] (233.52s)
a lot. These are basically sets with
[03:55] (235.92s)
elements that can act on each other and
[03:58] (238.16s)
obey certain relations. A typical
[04:00] (240.40s)
example is the rotation group in three
[04:02] (242.64s)
dimensions whose elements are the
[04:04] (244.56s)
rotations in the three directions that
[04:07] (247.36s)
you can then combine. These rotation
[04:10] (250.00s)
groups exist in any number of
[04:11] (251.92s)
dimensions. In fact, most groups exist
[04:14] (254.48s)
in these infinite countable series and
[04:16] (256.96s)
are reasonably well behaved like the
[04:19] (259.12s)
groups in the standard model of particle
[04:21] (261.04s)
physics: U(1), SU(2), and SU(3). However,
[04:25] (265.60s)
besides these infinite series of groups,
[04:28] (268.64s)
there are also 26 so-called sporadic
[04:31] (271.76s)
simple groups. The largest of them is
[04:34] (274.64s)
the monster group. The number of its
[04:37] (277.28s)
elements is exactly known and it comes
[04:39] (279.68s)
out to be this.
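The number itself is only shown on screen in the video; for reference, the order of the monster group as usually quoted in the literature is

$$
|M| = 2^{46}\cdot 3^{20}\cdot 5^{9}\cdot 7^{6}\cdot 11^{2}\cdot 13^{3}\cdot 17\cdot 19\cdot 23\cdot 29\cdot 31\cdot 41\cdot 47\cdot 59\cdot 71 \;\approx\; 8\times 10^{53}.
$$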
[04:42] (282.40s)
In case you don't feel like counting
[04:44] (284.72s)
digits, that's approximately 10 to the
[04:47] (287.60s)
54. The stunning thing is that this is
[04:51] (291.36s)
provably the largest such group. It's
[04:55] (295.12s)
not just the largest known one, it's the
[04:57] (297.76s)
largest one, period. That number is a
[05:01] (301.12s)
fundamental truth. Five, the logistic
[05:04] (304.16s)
map. The logistic map is defined as a
[05:06] (306.88s)
sequence of numbers and looks entirely
[05:09] (309.28s)
unremarkable. It has only one parameter
[05:12] (312.56s)
that I'll write as r. You start with
[05:15] (315.52s)
some number between zero and one and
[05:17] (317.68s)
then you calculate the next number as r
[05:20] (320.24s)
times your starting value times (1 minus the
[05:23] (323.52s)
starting value), that is, x_{n+1} = r x_n (1 - x_n). That gives you a new
[05:25] (325.92s)
number, and then you do it again. For
[05:28] (328.32s)
example, if you take r= 3.5 and start
[05:32] (332.64s)
with the initial value 0.5, then you get 0.875.
[05:37] (337.52s)
Then you do it again and get 0.382.
[05:40] (340.64s)
And you do it again and get 0.827 and so
[05:43] (343.60s)
on. Looks simple enough. But where does
[05:45] (345.68s)
this sequence end? Well, here's the
[05:48] (348.64s)
amazing thing. For most values of r, it
[05:51] (351.12s)
doesn't end anywhere. You can keep track
[05:52] (352.96s)
of the values that the function visits
[05:55] (355.04s)
after many steps and plot them as a
[05:57] (357.60s)
function of r. At low r, you see a
[06:00] (360.32s)
single branch. This is where the
[06:02] (362.40s)
sequence settles. It's a fixed point.
[06:04] (364.72s)
But when r becomes larger than three,
[06:07] (367.68s)
that splits into two, meaning that the
[06:10] (370.56s)
sequence ends up going back and forth
[06:12] (372.80s)
between two values. Increase r further
[06:16] (376.00s)
and you get more values, and then at some
[06:18] (378.56s)
point, at approximately r = 3.57, you
[06:22] (382.88s)
have the onset of chaos, with occasional
[06:25] (385.68s)
windows of periodic orbits. The amazing
[06:28] (388.56s)
thing here is that such a simple rule
[06:30] (390.72s)
can give such a complex result.
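As a minimal sketch (my own code, not from the video; the function name and the sample values of r are mine), here's one way to reproduce the iteration and see where the sequence settles for a few values of r:

```python
# Minimal sketch (not from the video): iterate the logistic map
# x_{n+1} = r * x_n * (1 - x_n) and look at the long-run behaviour.
def logistic_orbit(r, x0=0.5, warmup=1000, keep=8):
    x = x0
    for _ in range(warmup):       # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):         # record where the sequence ends up
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
# 2.8 -> one fixed point, 3.2 -> two values alternating,
# 3.5 -> four values, 3.9 -> no repeating pattern (chaos)
```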
[06:33] (393.92s)
Four, wild singular limits. A singular limit
[06:36] (396.96s)
is a case where the behavior of a
[06:38] (398.96s)
sequence suddenly and unpredictably
[06:41] (401.20s)
changes. A particularly stunning example
[06:43] (403.68s)
is this sequence of integrals over the
[06:46] (406.24s)
sinc function, sin(x)/x, where you add more factors
[06:49] (409.12s)
under the integral. The first gives pi /
[06:52] (412.40s)
2, the second gives pi / 2, the third
[06:55] (415.12s)
gives pi / 2. These are exact numbers,
[06:58] (418.08s)
not approximations. Yet, when the factor
[07:00] (420.96s)
with x/15 shows up, it stops working.
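These are known as the Borwein integrals; written out (my rendering, following the standard presentation), the pattern is

$$
\int_0^\infty \prod_{k=0}^{n}\frac{\sin\!\big(x/(2k+1)\big)}{x/(2k+1)}\,dx=\frac{\pi}{2}
\quad\text{for } n=0,1,\dots,6,
$$

but the value drops just below π/2 once the factor with x/15 is included, because 1/3 + 1/5 + ... + 1/15 finally exceeds 1.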
[07:05] (425.28s)
Three, the birthday paradox. Suppose you're at a
[07:08] (428.00s)
party attended by two dozen people.
[07:10] (430.64s)
What's the probability that two of them
[07:13] (433.20s)
share the same birthday? It's more than
[07:16] (436.24s)
50%. You can calculate the probability
[07:18] (438.96s)
of that happening for any number of
[07:20] (440.80s)
people, and it already passes 50%
[07:23] (443.12s)
at just 23. If you have a group of 60
[07:26] (446.56s)
people, the probability that two of them
[07:28] (448.96s)
share a birthday is larger than 99%.
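As a quick sketch (not from the video; the function name is mine), the exact numbers come from multiplying out the probability that all birthdays are different:

```python
# Minimal sketch (not from the video): probability that at least two of
# n people share a birthday, assuming 365 equally likely birthdays.
def shared_birthday_prob(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(shared_birthday_prob(23))  # ~0.507, already past 50%
print(shared_birthday_prob(60))  # ~0.994, above 99%
```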
[07:32] (452.64s)
Two, we don't know most numbers. Most of
[07:36] (456.08s)
the real or complex numbers we work with
[07:38] (458.88s)
are algebraic. This means they are
[07:41] (461.60s)
solutions to some polynomial with rational
[07:44] (464.72s)
coefficients. The roots of something
[07:47] (467.44s)
basically. However, there are numbers
[07:50] (470.00s)
which cannot be written that way.
[07:52] (472.88s)
They're called transcendental numbers.
[07:55] (475.60s)
The most famous transcendental number is
[07:58] (478.16s)
pi. And we've all heard of pi, but it
[08:01] (481.12s)
seems rather special. Yet the fact is that
[08:04] (484.40s)
almost all real numbers are
[08:06] (486.64s)
transcendental. We just can't use them
[08:09] (489.28s)
because we can't write them down. Think
[08:11] (491.28s)
about it. We can enumerate all possible
[08:13] (493.76s)
algorithms to compute numbers. Yet,
[08:16] (496.64s)
there are more transcendental numbers
[08:18] (498.80s)
than possible algorithms. They're
[08:21] (501.04s)
everywhere, and yet in some sense
[08:23] (503.28s)
unusable.
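In symbols (my summary of the counting argument, not shown in the video): algorithms are finite strings over a finite alphabet, and algebraic numbers are roots of the countably many polynomials with rational coefficients, so both sets are countable, while the reals are not:

$$
|\{\text{algebraic numbers}\}| \;=\; |\{\text{algorithms}\}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|.
$$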
[08:26] (506.64s)
Bonus fact: you might have heard that every finite sequence of
[08:28] (508.96s)
digits eventually appears in the digits
[08:31] (511.52s)
of pi. But actually, this has never been
[08:34] (514.56s)
proved to be true. It's an open
[08:36] (516.64s)
question. And finally, the Banach-Tarski
[08:39] (519.68s)
paradox. You can decompose a solid
[08:43] (523.36s)
sphere in three-dimensional space into
[08:47] (527.44s)
finitely many disjoint pieces and then
[08:50] (530.88s)
reassemble them using only rotations and
[08:54] (534.32s)
translations into two spheres each the
[08:57] (537.76s)
same size as the original. What even is
[09:00] (540.88s)
space? How many of those did you know?
[09:03] (543.36s)
Let me know in the comments. If this
[09:05] (545.76s)
video inspired you to brush up your
[09:08] (548.32s)
mathematics knowledge, I recommend you
[09:11] (551.52s)
start with Brilliant. All courses on
[09:14] (554.16s)
Brilliant have interactive
[09:15] (555.68s)
visualizations and come with follow-up
[09:18] (558.00s)
questions. What you see here is from
[09:20] (560.24s)
their newly updated maths courses, no
[09:23] (563.28s)
matter how abstract the topic seems.
[09:26] (566.00s)
Brilliant courses have intuitive
[09:28] (568.08s)
visualizations that really click into my
[09:30] (570.72s)
brain. And Brilliant covers a large
[09:32] (572.72s)
variety of topics in science, computer
[09:34] (574.96s)
science, and maths from general
[09:37] (577.12s)
scientific thinking to dedicated
[09:39] (579.20s)
courses, just what I'm interested in.
[09:41] (581.60s)
And they're adding new courses each
[09:43] (583.84s)
month. I really enjoy the courses on
[09:46] (586.08s)
Brilliant, not just because they keep my
[09:48] (588.24s)
brain active, but also because it's a
[09:50] (590.88s)
great way to systematically build up new
[09:53] (593.76s)
knowledge to higher levels. If that
[09:56] (596.56s)
sounds like the right thing for you, use
[09:59] (599.12s)
my link brilliant.org/zabini to
[10:01] (601.76s)
give it a try. First 30 days are free
[10:04] (604.72s)
and with this link you'll get 20% off
[10:07] (607.28s)
the annual premium subscription. It's a
[10:09] (609.52s)
great way to learn more and to support
[10:11] (611.76s)
this channel. Thanks for watching. See
[10:14] (614.00s)
you tomorrow.