[00:00] (0.08s)
Hey everyone, welcome back to the
[00:01] (1.36s)
channel. My name is John and this is
[00:02] (2.64s)
your modern tech breakdown. Today I'm
[00:04] (4.72s)
covering another attempt by the
[00:06] (6.32s)
California legislature to regulate the
[00:08] (8.72s)
emerging AI industry. Let's jump into it.
[00:22] (22.64s)
All right, to start off, let's just
[00:24] (24.08s)
recap from last year in case you missed
[00:26] (26.48s)
it. Last year, SB 1047 was passed by the
[00:30] (30.08s)
California legislature, but was
[00:31] (31.68s)
ultimately vetoed by Governor Gavin
[00:33] (33.60s)
Newsom. That bill was an attempt to
[00:35] (35.36s)
regulate AI companies in the state of
[00:37] (37.28s)
California. Now, the same legislature is
[00:39] (39.36s)
back, this time with Senate Bill 53 that
[00:41] (41.76s)
would require AI companies to publish
[00:44] (44.00s)
safety and security reports for chat
[00:47] (47.44s)
bots because apparently these things are
[00:49] (49.76s)
super scary. But let's get into the bill
[00:52] (52.00s)
details a little bit. As it stands now,
[00:54] (54.32s)
and I should point out this bill has
[00:55] (55.60s)
been amended multiple times already, so
[00:57] (57.20s)
it's likely to change again. But as of
[00:59] (59.68s)
now, the bill would create a new
[01:01] (61.52s)
reporting requirement for AI companies,
[01:03] (63.68s)
including documentation of what data was
[01:06] (66.24s)
used to train the model. As an aside,
[01:08] (68.64s)
this alone might kill the bill. AI
[01:10] (70.32s)
companies definitely don't want to give
[01:11] (71.68s)
away any shred of information about how
[01:13] (73.76s)
they train their models. It also
[01:15] (75.76s)
requires disclosure of safety and
[01:17] (77.60s)
security protocols and testing
[01:19] (79.20s)
procedures. Basically, it's the start of
[01:21] (81.68s)
the construction of an AI bureaucratic
[01:23] (83.76s)
machine. It starts with reports, but we
[01:26] (86.08s)
all know it's not going to end there.
[01:27] (87.52s)
That much is obvious. It would also
[01:29] (89.68s)
create a new group within the California
[01:32] (92.00s)
Government Operations Agency that would
[01:34] (94.48s)
be responsible for creating a public AI
computing cluster called CalCompute that is safe, ethical,
[01:39] (99.92s)
equitable, and sustainable. I think they
[01:42] (102.00s)
hit all the buzzwords right there. What
[01:43] (103.84s)
do any of these things actually mean
[01:45] (105.20s)
with regard to the development of AI?
[01:47] (107.04s)
Your guess is as good as mine. These
[01:48] (108.64s)
terms are so vague that they can be made
[01:50] (110.24s)
to mean basically anything that
[01:51] (111.52s)
bureaucrats in Sacramento want them to
[01:53] (113.52s)
mean. Honestly, it sounds like a
[01:55] (115.60s)
politician's kickback scheme to me.
[01:57] (117.28s)
Spread out some of the public's money
[01:58] (118.72s)
over some groups that are favored by the
[02:00] (120.48s)
people in charge. And lastly, the bill
[02:03] (123.44s)
also protects whistleblowers of AI
[02:05] (125.44s)
companies when they think their
[02:06] (126.80s)
employer's products pose a quote critical
[02:09] (129.04s)
risk. And we've already seen employees
[02:11] (131.36s)
leave some of these large AI companies
[02:13] (133.04s)
and make silly statements about how
[02:14] (134.64s)
they're worried about AI. Which, again,
I will mention that these AI
[02:19] (139.36s)
products are more or less just chat bots
[02:21] (141.44s)
at this point. They're not going to hurt
[02:23] (143.44s)
anyone yet. I think we're putting the
[02:25] (145.60s)
cart before the horse just a little bit.
[02:27] (147.20s)
But all this talk about safety in this
[02:29] (149.04s)
bill got me thinking, where have I heard
[02:30] (150.96s)
about AI safety before? Isn't there a
[02:32] (152.88s)
company out there that talks non-stop
[02:34] (154.48s)
about AI safety? Ah, yes. It's
[02:37] (157.12s)
Anthropic. So, that got me digging a
[02:39] (159.12s)
little bit. Who is donating to the
[02:40] (160.96s)
sponsor of this bill, Scott Wiener?
[02:43] (163.28s)
Well, I noticed a company in the list
[02:45] (165.04s)
here called SV Angel LLC. So, who is SV
[02:49] (169.04s)
Angel? It's an angel investor vehicle
[02:51] (171.52s)
for Silicon Valley companies founded by
[02:53] (173.92s)
Ron Conway, who just so happened to have
[02:56] (176.64s)
been an early investor in Google. And
[02:58] (178.64s)
his company SV Angel seems to be a pretty
active campaign donor, but
[03:03] (183.60s)
conspicuously only on one side of the
[03:05] (185.92s)
aisle. And who else has SV Angel
[03:08] (188.40s)
invested in? Oh my, looky there. It's
[03:11] (191.36s)
Anthropic, the company that always seems
[03:13] (193.36s)
to be popping up in all these silly
[03:14] (194.96s)
discussions about AI safety. Wow, what
[03:17] (197.44s)
are the odds that an angel investor in
[03:19] (199.68s)
Anthropic also donated to the campaign
[03:22] (202.24s)
of the California senator that
[03:23] (203.92s)
introduced this AI safety bill? I mean,
[03:26] (206.08s)
what a coincidence. But in all
[03:27] (207.76s)
seriousness, this is just gross to me.
[03:29] (209.84s)
Anthropic investors are manipulating the
[03:31] (211.84s)
California government to enact AI
[03:33] (213.44s)
regulation on their behalf. They're
[03:35] (215.12s)
trying to build barriers to entry so
[03:36] (216.88s)
that they can have a monopoly on this
[03:38] (218.32s)
emerging industry. It's just really
[03:40] (220.48s)
gross. Uh but I'm not surprised. This is
[03:42] (222.48s)
par for the course. But what do you
[03:44] (224.56s)
think about it? Leave a comment down
[03:45] (225.92s)
below. As always, thanks for watching.
[03:47] (227.68s)
Please like and subscribe and I will
[03:49] (229.04s)
catch you next time.