[00:00] (0.00s)
Hey everyone, welcome back to the
[00:01] (1.36s)
channel. My name is John and this is
[00:02] (2.72s)
your Modern Tech Breakdown. Today I'm
[00:04] (4.72s)
looking at the EU's latest attempt to
[00:06] (6.88s)
regulate AI. Let's jump into it.
[00:20] (20.24s)
All right. So, I did a video yesterday
[00:22] (22.00s)
on California SB 53,
[00:24] (24.64s)
which is the latest attempt by
[00:26] (26.56s)
California politicians to regulate AI.
[00:29] (29.12s)
And today I have somewhat of a similar
[00:31] (31.12s)
story from the EU, but they are way
[00:33] (33.92s)
ahead in their regulation regime
[00:36] (36.00s)
building effort. They have passed a law
[00:38] (38.64s)
already called the AI Act. And now the
[00:41] (41.76s)
bureaucrats in Brussels have issued
[00:43] (43.68s)
their first decree on it called the Code
[00:45] (45.92s)
of Practice. And I skimmed through the
[00:47] (47.84s)
40-page document that focused on safety
[00:50] (50.40s)
and security. And it was clearly written
[00:52] (52.40s)
by government officials. We have
[00:54] (54.24s)
flowcharts and processes and endless
[00:56] (56.40s)
definitions. Great reading if you're
[00:58] (58.08s)
having trouble sleeping. But here we go
[01:00] (60.24s)
again with a government effort to
[01:01] (61.92s)
supposedly protect the public from those
[01:03] (63.84s)
super dangerous chatbots. But let's just
[01:05] (65.92s)
take a look at one section here about
[01:08] (68.00s)
the capabilities that they are worried
[01:09] (69.84s)
about. They're worried that a chatbot
[01:11] (71.92s)
could potentially cause "persistent
[01:14] (74.08s)
and serious infringement of fundamental
[01:16] (76.64s)
rights." I mean, can someone tell
[01:18] (78.96s)
me how Google Gemini is going to
[01:20] (80.56s)
infringe on your fundamental rights? How
[01:22] (82.56s)
could that happen exactly? Also, they're
[01:25] (85.04s)
worried about a model manipulating,
[01:26] (86.72s)
persuading, or deceiving people. That
[01:28] (88.96s)
seems rather broad to me. Almost any
[01:30] (90.72s)
model could be found to be guilty of
[01:32] (92.40s)
that if it responds with some facts that
[01:34] (94.40s)
a government official doesn't like. And
[01:36] (96.40s)
I think this is intentional. This Code
[01:38] (98.48s)
of Practice includes so many things that
[01:40] (100.24s)
if they want to attack an AI company for
[01:42] (102.40s)
violating it, they can. It's not really
[01:44] (104.24s)
a law that you can follow. It's open to
[01:46] (106.32s)
so much interpretation that it's
[01:47] (107.84s)
basically not possible to comply. And
[01:50] (110.24s)
that's a feature, not a bug, because it
[01:52] (112.24s)
empowers the people in the EU
[01:54] (114.16s)
bureaucratic machine with some very
[01:56] (116.40s)
significant power over the industry. So,
[01:58] (118.72s)
I'm concerned about the over 15% of my
[02:00] (120.88s)
audience that is in the EU. If these
[02:03] (123.36s)
government officials have their way, I
[02:05] (125.44s)
think you guys are going to have a
[02:07] (127.12s)
completely different experience with AI
[02:08] (128.80s)
than the rest of the world. This
[02:10] (130.48s)
situation reminds me of when Google was
[02:12] (132.32s)
trying to compete in China with its
[02:14] (134.00s)
search engine. The Chinese government
[02:15] (135.52s)
made Google censor search results. You
[02:17] (137.76s)
can see here how different the search
[02:19] (139.36s)
results are for Tiananmen based on the
[02:21] (141.92s)
censored Chinese version and the rest of
[02:23] (143.92s)
the world. Obviously, the EU hasn't been
[02:26] (146.64s)
able to enact that level of censorship
[02:28] (148.64s)
yet, but this Code of Practice feels
[02:30] (150.24s)
like the first step in that direction to
[02:32] (152.00s)
me. And I'll just wrap this up by
[02:33] (153.76s)
acknowledging that it's pretty clear
[02:35] (155.04s)
from my accent that I'm American. The EU
[02:37] (157.44s)
is not my continent, and I'm rather
[02:39] (159.52s)
annoyed when folks from other countries
[02:41] (161.36s)
try to tell me as an American how my
[02:43] (163.28s)
country should work. So, I'll just say
[02:44] (164.64s)
this is my opinion. You guys can do
[02:46] (166.48s)
whatever you want over there, but for me
[02:48] (168.56s)
here in the United States, I don't want
[02:50] (170.00s)
any of this type of government
[02:51] (171.52s)
interference anywhere near our
[02:53] (173.04s)
technology industry. But hey, if you
[02:55] (175.12s)
disagree, that's cool. Drop a comment
[02:56] (176.72s)
below and we can discuss it. But
[02:58] (178.08s)
regardless, thanks for watching. Please
[02:59] (179.76s)
like, comment, and subscribe, and I will
[03:01] (181.20s)
catch you next time.