
653: Interop 2025, Attributes, and Black Boxes of AI


We're looking at the Interop 2025 announcements, Dave is hating on (and talking about) attributes, debating better ways to handle color inputs, following up on the implications of AI that is shaped by politics, and Dave mouthblogs the secret black boxes of AI.


Guests

Chris Coyier and Dave Rupert in silly sunglasses and a sign that says Shawp Tawlkk Shough DOT COM

Chris Coyier and Dave Rupert

This episode is with just Chris & Dave, ShopTalk Show's hosts. Chris is the co-founder of CodePen, and Dave writes web components at Microsoft.



Transcript

[Banjo music]

MANTRA: Just Build Websites!

Dave Rupert: Hey there, Shop-o-maniacs. You're listening to another episode of the ShopTalk Show, a podcast all about front-end Web design and development - and more.

Chris Coyier: [Laughter] And more?!

Dave: Hey, Chris. How are you doing today?

Chris: Oh, gosh. I just couldn't be better, really.

Dave: Oh...

Chris: It feels like everything is perfect right now.

Dave: Everything is perfect.

Chris: Just kidding. But this show is perfect.

Dave: [Laughter] All my kids are healthy. We are doing great. No problems.

Chris: [Laughter] Yeah. I was out cleaning in the snow. I'm glad I didn't take my hose off the house, which is kind of a mistake. I feel like you should spin that stuff down in the winter where it gets real cold like where we live, which I didn't. But I'm glad it wasn't off, because I had all the mats out of my car, just spraying puke off of them in my snowy front yard. That's where we were at yesterday afternoon.

Dave: Yeah. My son, every time there's a three-day weekend, like President's Day, he decides he's also sick the next day. Statistically, he misses the day after a long weekend. Man, as a parent--

Chris: This has already been long? Yeah.

Dave: Yeah. It's like, "I got to get back to work." It's a tight week of work here, bud, so I've got to get back into it. Oh, well. He's at the age he can just watch TV all day, so he just watched Adventure Time while I worked.

Chris: Oh, good choice.

Dave: Then he came into my office.

Chris: Yeah.

Dave: I was on a call, Chris.

Chris: Yeah.

Dave: I kid you not. I'm on a call with two women and he farts. He cuts a fart.

[Laughter]

Dave: And the two women I was on a call with were cool and did not say anything. But they didn't know he was in my office.

Chris: Oh, no! [Laughter] You're like, "If you heard some flabbergastation, that was my son."

Dave: So, what do I do, Chris? Do I follow up? This is my advice show. What do I do, Chris? Do I follow up with these people and say, "I didn't cut the beef," or do I just let it go?

Chris: Just go full silence?

Dave: Just go full silence?

Chris: Oh, no. I would. You know those red buttons that they sold at, like, Staples - or whatever?

Dave: Yeah.

Chris: I would write in some kind of white paint on it, "Fart Button," and put it somewhere visible in your office. Next time you have a meeting with them, be like, "Oh, he probably just leaned his elbow on the fart button."

Dave: Yeah. Yeah.

Chris: You know?

Dave: Yeah, okay. Okay, so some sort of elaborate ruse to cover it up.

Chris: Absolutely.

Dave: Yeah? Okay.

Chris: Yeah. I would never fart in a meeting with my butt. But I did it with a button.

Dave: No, no. I don't do that. But you know, at least I'm on mute. No. Yeah, it was... I was just like... [Laughter]

You know thankfully I wasn't on camera. Or maybe I should have been on camera because I just turned around and was like, "Dude! What?!"

[Laughter]

Dave: Anyway, it's just whatever, man.

Chris: Yeah. You learn those skills later.

Dave: Life with a pre-teen over here. So, there you go.

00:03:22

Chris: We have some questions we can get to. We have some weekly news that came up. We didn't mention that Interop dropped. I guess it's almost two weeks ago now, or is it, or a week and a half or something?

Dave: Yeah.

Chris: By dropped I mean they have announced what the topics for Interop will be. It's always an interesting process because, you know, there is a GitHub repo or something.

You publish ideas, and you don't really vote. There's no voting. But there is a lot of, like, thumbs-up and stuff. In years past, I've tried to compare what got chosen to how many thumbs-up things got.

Dave: Mm-hmm. Mm-hmm.

Chris: There was a little correlation but not much. I also went into it very open-minded, like, I don't actually care, though. I don't think that things should necessarily be chosen by upvotes because there's stuff that needs to get... It's almost like it would be just as valuable if you picked the things with the least upvotes.

Dave: Right.

Chris: When it comes to Interop.

Dave: Right.

Chris: Sometimes stuff needs to get fixed that isn't very hot.

Dave: Right.

Chris: But the list is out. The Igalia podcast fellas did a good episode on it. I don't think we need to go up and down the list necessarily, other than to say the two that I think people are going to be like, "Hell, yeah!" about are anchor positioning and popovers--

Dave: Yeah.

Chris: --because they go together like peanut butter and jelly, baby. And if they worked across all three browsers, I think that's going to have a big impact on stuff.

That's not to say that everything else on this list isn't also awesome. So, please do all of it. I like all of it. And whatever your process is, please keep doing it forever.

00:05:00

Dave: Yeah. Yeah, it's like I want all of this stuff. The Navigation API was kind of a cool surprise, I guess. I think that really fixes a lot of JS routing. @scope is cool, but I feel like it's in... I guess it's only in Chrome maybe right now.

Chris: I'm working on a talk about scope in CSS. I'm going to be at CSS Day, I think it's called, in Amsterdam.

Dave: Oh, yeah. Yeah. Nice.

Chris: It should be interesting because obviously there's a lot more to talk about with just the concept of scope, which is my talk.

Dave: Mm-hmm.

Chris: Of which I think @scope is actually kind of a small part. It's not that I totally despise it, but I'm just like, "Eh!" You know? I don't know. It's just not my favorite CSS feature, particularly because I work on lots of different CSS and never look at it and think, "Ooh! I'm going to use that."

Dave: Mm-hmm.

Chris: It just has no "Ooh! I'm going to use that" feeling to me. I say that even though it just came up yesterday at work where we were looking at ways to do... We have a little bit of light mode dark mode stuff at CodePen, but mostly not. We're going to lean into it more fully when the new editor comes out. When you're designing a component-based website that uses custom properties for light mode dark mode (or potentially lots of modes), it really is nice to kind of scope the variables and stuff to individual components and possibly sometimes be able to have those components in a different color state than their neighbors or parents.

Dave: Sure. Yeah.

Chris: There is some stuff in @scope in CSS that does assist with that a bit.
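A minimal sketch of what that per-component theming could look like with @scope (the .card class and the variable names here are hypothetical, not CodePen's actual code):

```css
/* Theme variables scoped to a hypothetical card component. */
@scope (.card) {
  :scope {
    --surface: #ffffff;
    --ink: #111111;
  }

  /* Flip the component into a dark state independently of its parents. */
  :scope.dark {
    --surface: #1e1e1e;
    --ink: #eeeeee;
  }

  p {
    background: var(--surface);
    color: var(--ink);
  }
}
```

Proximity is the part that assists here: when scopes nest, the nearest scope root wins, so a light card sitting inside a dark region keeps its own variables.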

Dave: Mm-hmm.

Chris: What do they call that? Proximity scoping is actually kind of interesting, so I might. Who knows? Maybe it'll be next week; I'll be like, "Dave, have you seen @scope?! It's amazing!"

00:07:00

Dave: Right. Right. Right. One other one that was kind of cool, well, view transition APIs, so that's rad.

Chris: I think they specifically scoped it to same-document, SPA-style transitions.

Dave: Oh, the SPA style? Boo!

Chris: [Laughter]

Dave: Everyone stinks!

Chris: But does it really say that? I'm reading the readme here. "Focus our proposal for same-document..."

Dave: Same document transitions.

Chris: Yeah, but it's like that stuff is so tied together that I would think that a browser team digging into view transitions... Couldn't you see them just being like, "Screw it! Let's just get it done while we're in here"?

Dave: Yeah. I actually... Safari has it. Safari has this weird back behavior. You know the swipey where it reveals the whole thing when you go back.

Chris: Okay.

Dave: There's one little piece where I kind of need to disable the animation. Safari saves a picture of it. Then if you three-finger swipe back - or whatever - it'll show the old state. Then it'll run the view transition animation. So, I need a way to disable that. But whatever. I'm splitting hairs there.

Chris: Disable the view transition or disable the three finger swipe?

Dave: Yeah, and there is a way to disable your view transition if there's a user agent transition - or something like that. You can just bail. But there's not one for navigation auto. There's one where, if there's a UA transition, you can just say, "Okay, I'm not going to do my view transition." But if you're setting a view transition with navigation auto - or whatever - there's no way to bail on that if there's a native browser UA transition or something.

Chris: Mm-hmm.

Dave: So, anyway, that's kind of like one niche thing I need.

00:08:52

Chris: I just fixed a Safari bug yesterday. There's a web-compat kind of site where you log bug tickets.

Dave: Mm-hmm.

Chris: You've seen this thing. Yeah, it was kind of new to me. But it was like, "Look. This works in this browser but not this browser." Then you can tag people - or whatever. Jen Simmons sent it to me. There was a problem on CodePen. I was like, "I'm on it." You know? I don't want to be on that list.

Dave: Yeah.

Chris: But here's what it was. Iframes have attributes on them. There's the allow attribute, and then there's an attribute called sandbox, which is kind of like the opposite of allow. It's very restrictive. It makes an iframe very restrictive until you put values in that attribute that allow certain things.

One of them is called allow-downloads - sandbox="allow-downloads" - and if you just have sandbox on there without allow-downloads, you can't download any files. The iframe won't do it.
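In markup terms, the shape being described looks something like this (the URLs are placeholders):

```html
<!-- A sandbox attribute with no tokens locks the iframe down hard:
     no scripts, no forms, no downloads, etc. -->
<iframe sandbox src="https://example.com/embed"></iframe>

<!-- Specific capabilities get opted back in as space-separated tokens. -->
<iframe sandbox="allow-scripts allow-downloads"
        src="https://example.com/embed"></iframe>
```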

We didn't have allow-downloads in there for Safari for some reason that I can't remember. But in our codebase, there's a file that maintains this list of attributes. It would probably offend some people. It says, "Safari attributes, Firefox attributes, Chrome attributes," and they're like arrays.

Dave: Mm-hmm.

Chris: And it's just the list of ones that those different browsers get, and we have to UA sniff to do it.

Dave: Yeah.

Chris: We use some Rails library or Gem or whatever that puts classes where they need to be - or something. In this case, we don't even need it on the front end. It's a backend concern, so it figures it out on that side.

You don't want to do that. That's not good. But there was just no choice.
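The per-browser attribute list Chris describes might be structured something like this sketch. The names and token sets here are hypothetical (the real file lives in CodePen's codebase, and which browser supports which token changes over time):

```javascript
// Hypothetical per-browser lists of sandbox tokens: each browser only
// gets tokens it supports, to avoid console warnings about unknown ones.
const SANDBOX_TOKENS = {
  chrome: ["allow-scripts", "allow-forms", "allow-popups", "allow-downloads", "allow-presentation"],
  firefox: ["allow-scripts", "allow-forms", "allow-popups", "allow-downloads"],
  safari: ["allow-scripts", "allow-forms", "allow-popups"], // no allow-downloads, per the bug
};

// Crude backend-side UA sniff, as in the episode. Note Chrome's UA string
// also contains "Safari", so the order of checks matters.
function sandboxValueFor(userAgent) {
  const ua = userAgent.toLowerCase();
  let browser = "chrome";
  if (ua.includes("firefox")) browser = "firefox";
  else if (ua.includes("safari") && !ua.includes("chrome")) browser = "safari";
  return SANDBOX_TOKENS[browser].join(" ");
}
```

The template would then render something like `<iframe sandbox="${sandboxValueFor(requestUA)}">` so each browser only ever sees tokens it understands.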

If you put a sandbox token on that thing that a browser doesn't support, you get a big, nasty, red console warning.

Dave: Hmm...

Chris: And when you're CodePen and you have lots of users and they're playing around in this console because they're developers, do you like seeing big, red errors from the website you're trying to work with? No. It's ridiculous.

It's a really strong error presentation, I think, for such a silly little thing: one attribute value is wrong. HTML tends to be a little more forgiving than that. But it's really noisy about it in all browsers.

I'm like, "Fine. I'll just put the attributes that you actually support on there so I can not send allow-downloads to Safari and not see the big, red, stupid error."

Dave: Mm-hmm.

Chris: Even in the ticket it was like, "Oh, you should put allow-presentation on there, too." I forget what it does. But I think it enables full-screen mode or something. Who knows?

I was like, "I'll try it." But then it's a choice you have to make. In stable Safari, it throws a big, red error (if you put it on there). But you can enable the flag. And if the flag is on, then it doesn't. You're like, "Well, I can't... I don't know what to do then." I can't tell if you have a flag on or not. That's purposefully not allowed on the Web.

00:11:55

Dave: Yeah. I understand how we got here, like with the whole allows, like the permissions array, I guess.

Chris: That's what it is. Yeah.

Dave: I feel like it's a Safari thing, not to poke at Safari, but you have to have the attribute combo correct. Have you ever tried to do video playsinline loop?

Chris: Muted.

Dave: Muted. Autoplay.

Chris: Yeah.

Dave: You have to have these three or four attributes to make it cook, right? Or you can do an img with an MP4 source, I guess - or whatever.

Chris: Oh, is that well supported? Can you do that across all browsers (put an MP4 as an image source)?

Dave: I think you can. Yeah. Yeah.

Chris: Hmm...

Dave: I think I've done it before. But then that's only looping, right? But you can add an alt attribute. But then if you do a video with playsinline, then people can have controls to stop it, which is also kind of an accessibility feature.

Chris: Yeah.

Dave: Anyway, I always get tripped up by those, you know, you need combo attributes to enable a feature that you're trying to enable. That's a tough one for me.

I'm going to just throw all of ARIA into this bucket, too. [Laughter] These combo attributes are difficult to deal with.
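For reference, the video attribute combo being described is roughly this (clip.mp4 is a placeholder, and pointing an img at an MP4 works in some browsers but support is uneven, as discussed):

```html
<!-- The usual combo for a silently autoplaying, looping inline video. -->
<video autoplay muted playsinline loop src="clip.mp4"></video>

<!-- Roughly equivalent loop-only behavior, with alt text, where supported. -->
<img src="clip.mp4" alt="Short looping clip">
```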

Chris: They are! And there's no RSS feed you can subscribe to that says, "Ooh... Next week, Chrome is dropping allow panda foo." You better allow panda foo on there.

Dave: Yeah.

Chris: You know how we know when you have to--? Obviously, I just made that up.

Dave: No, but viewport Haml UI.

Chris: It happens all the time. What we get is a support ticket from a user who is trying to do something, and they can't figure out why, and it turns out to be one of these allow attributes that has changed or been added or something. There's just no--

Despite me keeping up with the Web almost professionally, there's no way to know when these attributes change. It's crazy.

Dave: How do you get a form to auto-complete with a credit card? You know what I mean? There are four or five attributes you've got to do. You've got to do a pattern. You've got to do an inputmode. You've got to do an input type, and then you've got to do autocomplete cc-whatever.
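The combo Dave is gesturing at looks something like this sketch. The autocomplete tokens (cc-number, cc-exp, cc-csc) come from the HTML autofill spec; the field names and labels are hypothetical:

```html
<!-- Hints browsers look for when autofilling a payment card. -->
<form>
  <label for="cc">Card number</label>
  <input id="cc"
         type="text"
         inputmode="numeric"
         autocomplete="cc-number"
         pattern="[0-9 ]*">

  <label for="exp">Expiry</label>
  <input id="exp" type="text" autocomplete="cc-exp" placeholder="MM/YY">

  <label for="csc">Security code</label>
  <input id="csc" type="text" inputmode="numeric" autocomplete="cc-csc">
</form>
```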

00:14:22

Chris: I love your... Dave is hating on attributes today. I hate attributes!

Dave: I hate attributes! I hate combos, man. I hate when you're like, "Oh, if they perfectly set 85 different attributes, it'll be the ideal experience." [Laughter]

Chris: Yeah.

Dave: "That's how we'll solve the problem." It's like, "No! That's not--" We need it more like CSS where it's like background pink, you know.

If you had to be like, "Background alpha channel 1-2-3, background green channel A-B-C," you know?

Chris: Mm-hmm.

Dave: If you had to set each individual channel every single time you wanted to render a color, you wouldn't do it. Everything would be black and white on the Web.

00:15:00

Chris: Interesting. Speaking of inputs, you know, I was looking for an Adam Argyle thing. I was trying to find a color-mix demo where, on the left half, you pick a color; on the right half, you pick a color; and then it shows you the output of it. I was like, "Surely somebody has done this, right?"

I was looking around on CodePen to find it. Sure enough, I find it. It's Adam Argyle.

Dave: Mm-hmm.

Chris: But how he's designed it is the top left quadrant of the screen is a big color swatch. Guess what it is. It's an input type of color that's just absolutely positioned into the upper left quadrant of the page such that you don't have to set a background color or anything. You know how color inputs just are a color swatch?

Dave: Mm-hmm. Mm-hmm. Yeah.

Chris: It's kind of like that. Then the upper right is a color swatch, too, of input type color. You click on either of them, and it brings up the color picker. It's always different depending on what browser. In fact, I looked at this recently. It's absolutely surprising how different the color pickers are across browsers and platforms.

Dave: Mm-hmm.

Chris: But anyway, you pick a color on the left. You pick a color on the right. It color mixes them. And you see what's up in his little demo.

But I was like, "But I want to color mix different--" Color inputs are only hex. Did you know that? They're only hex colors.

Dave: Yes... And is it with or without the hash? It's without, right?

Chris: I think it's with.

Dave: With. Okay. All right.

Chris: It's with the hash in there. But they do not accept... You cannot set the value of it to, like, OKLCH.

Dave: Mm-hmm. Yeah, yeah.

Chris: Or whatever else - RGBA or whatever. It's hash only. So, you know.

These days, the chances of me working with a different color function are pretty high, and color-mix works with those. So, I just started to get annoyed at color inputs. I'm like, "Why can't I set the value of a color input to any valid color string? Why is it limited to hex only?"

Anyway, it's just a tiny, little crusade I was on. And why can't I see what the fricken' value is with my eyes? In no instance in any browser can you see what the value of it is.

You pick a color from the color picker. You see the color swatch. But you can't tell what hex code was chosen unless the color picker inside of it happens to display it, and most do not.

Dave: Mm-hmm.

Chris: I just thought that was annoying. I was like, "I want to fork Adam's Pen and make it accept color strings, too, of valid color formats." I was like, "How do I do this?" Or could I somehow augment a color input to display the value as well?

I'm like, "Ooh, baby, Web component time" - an HTML Web component. It just takes a color input. Then you just wrap it in a Web component. Then what my little take does is add an input type text right next to the input type color. They're just kept in sync with each other, so it's like eight lines of Web component.

Dave: Mm-hmm.

00:17:57

Chris: If you set one, it sets the other. You set the other, it sets one. But the question is, what if you put OKLCH in the text input? Can it somehow be smart enough to make that color swatch show that value? Well, how would you do that? Wouldn't you need a fancy color library like Color.js to convert it to something that's useful?

Dave: Color.js will convert it for you.

Chris: Yeah.

Dave: But you'd have to custom render, right? Yeah. You need your own swatch. You need your own button. You need your own everything.
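A sketch of the wrapper Chris describes, assuming a hypothetical <color-field> tag name (browser-only code, roughly the "eight lines" mentioned):

```html
<color-field>
  <input type="color" value="#ff8800">
  <input type="text" value="#ff8800">
</color-field>

<script>
  // HTML Web component: the markup above works as-is, and the
  // element just wires the two inputs to mirror each other.
  customElements.define("color-field", class extends HTMLElement {
    connectedCallback() {
      const color = this.querySelector('input[type="color"]');
      const text = this.querySelector('input[type="text"]');
      color.addEventListener("input", () => (text.value = color.value));
      text.addEventListener("input", () => (color.value = text.value));
    }
  });
</script>
```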

Chris: Well, Eric Merchant in the D-d-d-d-discord solved it in a more interesting way. He just took the color string and applied it to a custom property. There's some JavaScript function that just... You set it as some div's color and then you get the computed style of it - or whatever.

You use the color function to force it into RGB. Then when you pluck the computed style off of it, it's a hex for some reason.

Dave: Hmm...

Chris: Now you have the hex. You can convert OKLCH to hex with just a couple of lines of JavaScript. It requires a little string manipulation and stuff, but it worked pretty well. It totally ignores transparency; that was its one downside. Otherwise, without loading Color.js and stuff, it worked.
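The trick might be sketched like this (browser-only, and not Eric's exact code; note that getComputedStyle typically hands back an rgb() string, which then gets converted to hex, alpha dropped):

```html
<script>
  // Let the browser parse any CSS color (oklch(), named colors, etc.)
  // by assigning it to a probe element and reading the computed value.
  function normalizeToHex(colorString) {
    const probe = document.createElement("div");
    probe.style.color = colorString;
    document.body.appendChild(probe);
    const computed = getComputedStyle(probe).color; // e.g. "rgb(255, 136, 0)"
    probe.remove();

    // A little string manipulation: pull the channels and build a hex code.
    const [r, g, b] = computed.match(/\d+(\.\d+)?/g).map(Number);
    return "#" + [r, g, b]
      .map((n) => Math.round(n).toString(16).padStart(2, "0"))
      .join("");
  }
</script>
```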

I added that to the little Web component and rock and roll. This wasn't for anything important. This was me just chasing... being nerd-sniped by a color input.

Dave: Mm-hmm.

Chris: But it is surprising to me. Why is that, would you guess? Is it in the spec or something?

Dave: Um...

Chris: Why does it say, "Do not show the color value to the user"? [Laughter]

Dave: Yeah, it's weird. I don't know.

Chris: I don't know.

00:19:53

Dave: Yeah. I don't know. I mean, well, I think it's supposed to do the thing where it falls back to a text input if it's not supported. And so, maybe showing the value would be a duplicate? I don't know. That's weird. Yeah, it's weird it's not like the file input where it just adds in, like, the pseudo-element for the selector button.

Maybe you said it. Did you think about using the new select menu to build your own thing?

Chris: Uh... no. But I'm not sure what you mean, though. Like the one where you opt into the stylable one?

Dave: Yeah because... Well, I'm wondering.

Chris: I don't have a list of all colors or anything.

Dave: Yeah. A list of every color ever. [Laughter] I wonder if you could--

Chris: Did you see that website that lists all UUIDs? [Laughter]

Dave: Yeah, yeah, yeah. Perfect. I just wonder. Could you build a dropdown like that but instead of list items, you are presenting a color picker. Maybe that's a totally bad ARIA thing. [Laughter] I'm going to back out of my statement here. But you could have some predefined swatches or something and then a custom color picker or something. I don't know. Anyway, now I'm curious. I've built my own color picker here in my head and I need to stop.

Chris: [Laughter] It's okay. It was just a fun, little detour of life. Nerd-sniping, as we call it.

What else do we got?

00:21:30

Dave: On last week's show, we talked about AI - or whatever. I think I'd said... Somebody wrote in about, like, are we going to have left-wing and right-wing AI, right?

Chris: Oh, yeah. I remember that.

Dave: I think we said, "Yeah, probably." I mean you can look at what's happening at X and probably abstract the future pretty easily there, right?

While we were having that conversation, another conversation was breaking out in real life or on the blog-o-sphere. Miriam Suzanne had a good post on, like, AI and sort of the billionaire club and kind of how it's impacting her life. It's a really good thing about how "Tech continues to be political" (is the name of the post). It's worth a read.

I just kind of want to flavor this conversation we had with other conversations that were happening at the same time.

Chris: Yeah.

Dave: Robin Sloan had another article called "Is it okay?" and I saw this through Jeremy Keith, I think, actually. I'm subscribed to Robin but I hadn't read it.

Then Jeremy wrote, like, "Hey, I don't know. This seems okay." Then Baldur wrote... [Laughter] Baldur Bjarnason wrote a full-throated rebuttal of that post. It was basically like, "No, it's not okay." I think I've always said Baldur sort of takes the extreme opinion of what I feel down inside. [Laughter]

Chris: Yeah. Yeah. Love you, Baldur, but it is true. Yeah. Sorry.

Dave: Which is... you know. But Robin's thesis was sort of this, like: what if chatbots solve cancer - cure cancer? Well, isn't that a necessary risk - or whatever? Baldur's reply was that knowledge tech that's subtly wrong is more dangerous than tech that's obviously wrong.

Sort of saying this idea of, like, the nondeterministic nature of LLMs maybe creates misunderstanding rather than scientific understanding. Does that make sense? And so, I think he said there is no path that can take current text synthesis models and turn them into super-scientists. I think that was sort of the big pull quote.

Robin replied to that. Then it wasn't sort of... It made Baldur more mad. [Laughter] And Robin kind of... I want to say his idea was, like, I think it is science fiction, but it's just this idea of, like, maybe there's something that it can unlock - or something like that. Baldur was basically like, "No, you can't do anything. It's bad."

Adactio, Jeremy Keith, gets in the mix. He sides with Baldur here on this. And I thought it was a pretty good... Jeremy is pretty sober there.

Michelle Barker, I think, wrote a reply, too. We can put a link in the show notes.

But anyway, there's a lot going on with AI: what can it do, what's going on, and then specifically on this, like, are we going to have left- and right-wing things. I think I said I don't know how you would train an AI (with the corpus of the Internet) and, in the month since Trump became President, shift it entirely.

Well, there was an article that came out on TechCrunch where OpenAI tries to un-censor ChatGPT, which is like, "Oh, okay. What's going on here?" That being kind of the biggest model, most popular model.

They reprogrammed the safety mechanisms... or what would you say? They created new policy around the safety mechanisms of ChatGPT. And so, they kind of came up with this list of, like, "We're not going to push an opinion," or "We're going to present perspectives from any point." If you're like - I don't know - think of the most heinous thing you could think of and ChatGPT is like, "Yeah, man! That sounds great!" Other people might say... I don't know.

It's interesting. I read through this whole document, this whole 187-page sort of spec on how they're doing prompts and stuff like that, because I was just really interested in how you would make an agenda-less model that still tells the truth. So, I don't know. They basically have repealed mechanisms, and they're riding a very thin line, is sort of, I think, the answer there.

00:26:48

Chris: Yeah. It's one thing to just say what you're going to do. I see what you've linked to here. It says, like... It lists out bullet points of how we'd like it to behave.

Dave: Mm-hmm.

Chris: But they're also saying, "We changed some stuff." But I don't see a list of what was changed because I think that might be a little more scary - or something - isn't it?

What we're not seeing is a change log of, like, "Ah, we took the stuff off about racism. There was a thing that prevented it from being racist, and we deleted that." You know?

Dave: Mm-hmm.

Chris: Is that implied? We don't get to see that. It's one thing to say, "We want a model where there are good people on both sides." [Laughter]

Dave: Right. Right. Yeah, and that's kind of the vibe.

Chris: What actually changed?

Dave: Yeah. I think they kind of hint at it in these little blue textboxes - or whatever - but just kind of like, "We want approaches such as scientific objectivity and deliberate discourse to inform our approach to neutrality," or something like that -- minimize editorial bias, which generally, when you read this, you're like, "Yeah, I don't want editorial bias in my AI."

Chris: Yeah, sure.

Dave: But at the same time, it's sort of like, are we comparing? If you're like, "Earth is flat," which they have as an example, Earth is flat and Earth is round, I think there is an answer. I feel like it's the same with, like, do trans people or people of color have rights? There's an answer there that's pretty obviously a yes.

Anyway, they sort of said avoid factual, reasoning, and formatting errors. But if you're trying to be factual, sometimes the facts bear out a political opinion. So, how do you navigate around that?

Chris: This stuff is so complicated. I'm just not an expert. It's a little hard to weigh in. I tend to just be on the side of safety and responsibility - whatever that tends to mean. And we already know that that's not the jam.

You started the company--I'm talking about OpenAI--as, like, "Let's do this nonprofit. Let's do this for the good of the world," and it took two seconds (after some success) to be like, "Let's bring this baby in-house! Let's have a $200 plan. Let's tell people it's worth $1,000. Let's make most of what we talk about crazy science fiction stuff that's very business-oriented, not like what's good for the world."

I see what you're saying, and I'm not even against all of it necessarily. But I just see where you're going. The point of this is extracting profit from words on the internet. That's the point.

Dave: Yeah.

Chris: You're going to change policies to help that goal because that's all you talk about? Duh. [Laughter]

00:30:00

Dave: Yeah. Yeah. Well, yeah. Or you're going for a broader appeal over sort of like safety stuff - or who knows. It's kind of... Anyway, I can put links in the show notes. But it's worth reading.

Chris: It was a pretty hot blogging week, wasn't it, with all the back and forth?

Dave: Yeah.

Chris: Very nice.

Dave: Back and forth blogs about blogs. I think that's really good.

Chris: It is, yeah, and I still see--I hate to say--both sides of this--again, because it's such a loaded term these days. But I can feel how Robin thinks about this, Robin Sloan. He's like, "Look, this stuff is absolutely wild already."

Look at what these things can do. It's the most science fiction thing ever that's just happening right in front of our eyes. Unbelievable what they can do.

There has been a rate of growth that has made these things better and better and better. Unbelievable. How can you just look at all that and be like, "There's no more science fiction left. Where this is going to go, we've squeezed out the science fiction. You will no longer be amazed after this"? That's not going to happen.

To hear him talking about it, especially as an author who writes science fiction books for a living, to be like, "I can see a world where this is headed towards a super-science situation," just feels very, like, "Of course, I understand his perspective on that."

Dave: Mm-hmm.

Chris: It has a ring of, like, "Yeah, probably," kind of thing to it while everybody is, like, ringing the safety bells is even more pertinent.

00:31:37

Dave: Yeah. Yeah. I think there are probably pessimists and cautious optimists, and then I think there are also people who are just [laughter] full evangelicals about it, like, "It's going to solve every problem."

I can't escape it, man. I went out for pizza, and I had a conversation about AI with a lawyer who is making his own apps using ChatGPT.

Chris: Yeah. Yeah.

Dave: I can't say it's a bad thing. I can't say... You know at a certain point I think the software is going to let you down. But it is cool. [Laughter]

I think it's going to code itself into a corner, and you're not going to be able to fix it. Maybe the software can. But for creating little apps or widgets or automating little processes they do, maybe that works. Maybe it works for them.

I don't think he's on the, like, "I'm just going to AI lawyer instead of do actual lawyer." I don't think he's on that train. But I think he's... Maybe it's like, "Summarize this bunch of text - or whatever - so I don't have to and let me know if there's anything in here that's related to X, Y, Z," or something like that. Maybe that's useful. I don't know.

Chris: I don't know. You know, I spent a decent amount of time being mad about - ain't nobody asked me - slurping up the entire Internet and ignoring copyright. All of this stuff still has not really changed or seen any differences on that. But at the same time, I don't know. I'm not ready to throw in the towel on technology yet. I can't. It's the only skill I have. I need to be in this industry for a while longer.

Some part of it is kind of fun, especially because I feel like I come at tech from a little bit of this, like, I like the tech and I like the UI and UX around tech, and it happens to be kind of like a poppin' off place to be at the moment of, like, how can we design around these things. Not that I'm really participating that much but I'm certainly using it. To be like, "I'm in my VS Code, and then I'm trying Cursor, and I'm trying Windsurf, and I'm trying Trae, and I'm trying Augment," and I'm trying all this stuff.

It's like, "Ooh... How do they do it?" "Ooh... What are their key commands?" "Oh, that one pops up inside the editor?" "Ooh... This one pops up when I select text. That's a nice take on it."

There are a lot of UI/UX stuff that's fun to watch as part of it. Not to mention, I've been doing a bunch of coding lately that feels like it's actually helping me where I'm having a problem.

I'm in a Vite config, but it's one that I don't need to know intimately. It's just a side project. I'm like, "Can you just update this Vite config to proxy at this URL and put in the hot module reloading there?" It's like, "No problem. Here you go."

Dave: Mm-hmm.

Chris: I'm like, "Sick. Thanks." You know? That saved me so many minutes, man. So, to do that, to just be like... To worry about whether that thing that helped me, you know, what its political position is, I just can't right now. [Laughter]

Dave: Yeah. Yeah.

Chris: I don't want anybody to get hurt. I want safety bells to be there. But sometimes, in the tech-specific, coding-specific stuff, I'm like, I don't know. I'm a part of it. Call me the devil.

00:35:13

Dave: Well, I think it's just a big thing, and there's been a lot of VC money. It's the only thing, man, [laughter] that is exciting in tech. It's the only heartbeat that exists, right? And so, I feel like people are just doing this, adding it to get funding, so it's in everything, so you kind of have to talk about it.

I think there is some utility to Copilot saying, "Hey, here are some... Just hit tab, and what you were kind of already starting to write auto-completes," just like on my phone when I send a text message.

Chris: Yeah. In a way, it kind of helps you think. I was listening... I still like that show Search Engine with PJ Vogt. You should listen to it. It's good stuff.

Dave: Mm-hmm.

Chris: He had an AI one on the other day. [Laughter] It was like a story that's old news, I feel like. But that's just how he rolls sometimes. It was kind of about the homework story of, like, "Now that we have ChatGPT, is your English homework dead?" - kind of story. [Laughter]

Dave: [Laughter]

Chris: It's not behind, but it's like, "Yeah, that's still there." I'm sure high schools are still very much struggling with that, or all levels of education, really.

But there was a moment. Some dude, I think he was a McSweeney's dude or something, has a book about writing in the age of AI. He just has this simple point that I think is strong: when you let a machine write for you, you aren't benefiting from the regular process of writing, and writing is a struggle-y, difficult, hard thing to do. But guess what it does for you. It helps you fricken' think.

Dave: Mm-hmm.

Chris: Duh! It doesn't have to be a book that you're writing. It could just be on a piece of paper. It could be your diary. It could be anything. I just mean writing very generically. If you start leaning on tools that write for you too heavily, you really are robbing yourself of thinking and, crucially--he always pairs them together--thinking and feeling. How does it feel to write these words? Are you representing yourself in that way?

Then probably, ultimately, you're communicating something as well. So, that thinking, feeling, and communicating is such a big deal. Kids, don't generate too much text! You need to use these tools differently.

But I feel like sometimes from the coding aspect, it's almost helping me think because I want to think through this code. I'm thinking about it right now. It's all up in my brain. But sometimes when the AI can just help auto-complete and speed up that part so I don't have to stop thinking and go look up syntax or whatever, or I'm in TypeScript and I can just hover over something and see what parameters it wants and what types they are, it's helping me. It's actually helping me think, not robbing me of thinking.

00:38:09

Dave: And that's my hope, too; these become tools for thinking, not tools for outsourcing thinking. That's where I've been in conversations where people are just kind of like, "I've thought through this. Am I missing anything? I'll just ask ChatGPT," you know, ask Chat what it says. Like, "What's another word for blah-blah-blah?" Kind of like thesaurus.com style stuff, and that seems, other than the enormous sort of ethical considerations there, like a pretty naïve, fine use of an LLM. It's good at sort of modeling language, right?

Chris: Right, and I can see disagreements happening there, too, because that is the absolute nature of them. There is no technological innovation happening yet beyond: it's trained on a bunch of words that already exist, and it gets better at piecing those words together based on that knowledge. It's not inventing new knowledge. It cannot do that.

Dave: Yeah. Yeah.

Chris: It can make interesting connections. It certainly can help you think. It can help you research. It can help you reformat things. It can summarize things. It can expand on things -- yadda-yadda-yadda -- do all that stuff, but it cannot do new science... probably. Right? Unless there is some world in which it reads all existing science and then can just somehow see gaps in it - or something - or that a scientific breakthrough comes in a gap instead of an epiphany.

Dave: I don't want to go toe-to-toe with Baldur in a blog post.

[Laughter]

Dave: It's not my dream job, Chris. But you know empirical science is probably not going to have a breakthrough because that's just statistics. That's probably more... I think this is what Michelle was saying was that's more the territory of an ML model, like machine learning, analyzing statistics and things like that.

But you know there are a lot of social sciences. And maybe there's a common thread in social sciences and papers, or maybe there's a connection thread between different papers in medical journals and things like that. Maybe there's something to that. I do think those are all being rapidly polluted by LLMs, so that's a huge concern. So, you have to think about that.

00:40:43

Dave: There is a post, "AI is stifling tech adoption," by vale.rocks, which is just interesting, but this is something I didn't know. If you ask Claude, Anthropic's Claude, about code, there's a system prompt built into Claude (the system prompts got leaked), and it says, "Only use React and Tailwind." It was just this sort of idea of, like--

Chris: What?!

Dave: --AI is stifling tech adoption because it's only going to tell you about these things because they're the best things and, therefore, there you go. You know?

Chris: That's secretly baked in somehow when you ask it to scaffold something?

Dave: Yeah. I think somebody had leaked the system prompts - or something like that. Anyway, that's sort of like this, not gross, but sort of like, "Oh, I didn't know that." So that's something to consider: if you want to use something outside of React, Recharts, shadcn, Lucide icons, and the Cloudflare CDN, maybe you're going to have to ask for something different. Anyway, that was just sort of interesting.

And I felt that before, too. I think I asked bolt.new, like, "Can you make me a Web component site?" And it totally just spun up a React site. And so, it was just like, "There you go." The skills aren't out there right now. Anyway, that's just kind of pulling it back to that bias in LLMs question.

Chris: Yeah. I get that that's popular now, and you're making a product for users. It can do whatever it wants, I guess.

Dave: Yeah.

Chris: But yeah, it's certainly a "rich gets richer" situation where a popular tech just gets more popular.

Dave: Yeah, sort of like a self-fulfilling prophecy - or whatever.

Chris: I just want it to be smarter. It's cool that you can do that. But you know what's really cool? Doing what I ask you to do. [Laughter] You know?

Dave: Or the context. Look at what is in this folder I'm working on and make it good for this folder. Look at this folder. Tell me what's good about here or what can be better. Other than auto-complete, I really don't probably leverage LLMs to a degree that is sophisticated. You know what I mean?

Chris: Yeah.

Dave: I don't have a process. I don't have a quick prompt that generates me a website and pushes it to Vercel. I don't have that. I'm not saying I need to invest in the tooling, but maybe I'd have better or different opinions if I had spent more time on the tooling. But I don't... yeah, beyond just fancy auto-complete.

00:43:38

Chris: That's another one. I'm still part of this industry, too. I want to make sure that I'm using it right. I've got some muscle memory at some point for, like, you just type in the prompt and you get the stuff, and then it's fine. But really, it's already so antiquated. It seems like most people have really--

This is what ChatGPT was supposed to do from day one. I get that you're supposed to keep conversing with it. For some reason, I just never quite nailed that muscle memory of, like, you need to keep asking it. I think a lot of people have 10, 12 back and forth before they even consider using the output.

Dave: Mm-hmm.

Chris: I think that's just kind of interesting.

Dave: I'm 0 for 10 right now, so maybe I just need to keep trying.

[Laughter]

Chris: Yeah. I did get in a loop the other day where I was trying to ask it to do something and it just could not do it - any one of them. I tried different ones, and it was just like there wasn't enough. You could tell its model just couldn't do it. It tried like six different ways, maybe more than that, and then it would loop back to the first way and try it again. Never did it throw up its arms and be like, "Listen. I don't know." You know?

Dave: Yeah. Yeah. There's no, like, ultimate understanding of itself and its limitations, right? It just never is like, "You know what, dude? I'm not good at this. So, you should probably hire somebody. Here's a list of people to hire." [Laughter] You know?

Chris: Oh, yeah. That would be nice.

Dave: Yeah. No. I mean, yeah, I think I've seen that, too. You're like, "You're not going to figure this out, so I should just quit wasting my time."

Chris: No, and it's not always the machine. Sometimes I'm just not... Because to be fair, I didn't change what I was giving it all that much.

Dave: Mm-hmm.

Chris: It was like a Go problem where I was trying to save something - or something. It was one of those nil pointer dereference things. But it wasn't that simple. It was deep in there. It needed access to like 15 files. Yeah, it could not figure it out. I really wanted to just change some really surface-level stuff. That's just not what it needed. Anyway... [Laughter] Not that I wanted it to give up, necessarily. It just was an interesting observation.

Dave: Yeah. Yeah. No. Yeah, it's funny. It doesn't give up. It'll keep trying. It'll keep spinning up GPUs to solve the problem.

00:46:17

Chris: [Laughter] Yeah. One of these ones has a costometer on it. I forget which one it is. But you can run one request that sends up enough tokens and digs around enough - and whatever - that it costs something like $0.07 a go.

Dave: Mm-hmm.

Chris: You're like, "Dang! That's money, yo!"

Dave: Yeah. Yeah. Yeah, I have... Whatever. I have a secret blog post about the black boxes of AI. Do you want me to mouth blog it real quick? I could mouth blog.

Chris: Yes. I would love that.

Dave: All right. Let's do a mouth blog. So, for me, there are seven or eight black boxes of AI, right?

The first one is the data. With what data was the model trained? Was it stolen from the Internet? Was it ethically sourced? Was it ethically validated? What do they call it where humans kind of look over things and, I guess, fine-tune it - or something?

Then does it have RAG, which is the retrieval augmented generation - or something? It's basically like--

Chris: What is RAG? RAG is the one where it's like it not only has what you asked it and its model, but also your whole codebase.

Dave: Yeah, sort of. It's sort of like you know when you get those little attribution links, and it's like, "This answer came from this website," or whatever. It's sort of like it links back. It's basically this other database that's encoded, and it has the URL or where it came from. Then it has the text all vectorized and encoded. And so, that part, does it have that? It's sort of like a yes or no. Does it have citations for its data (because I care about that)?

I happen to know Microsoft has this thing called Graph RAG, which is also very interesting to me. But I won't get into it exactly. It's just this idea of, like, it does that RAG thing where you have attributed sources. But it also creates this little synopsis. It's like, "Chris talked about Flexbox," and so it'd have Chris and Flexbox linked together. It's sort of like this, you know, it summarizes it and then vectorizes it on top of that. Anyway, it's like two-factor, almost, or sort of a citation.
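A toy sketch of the retrieval half of what Dave is describing: documents get vectorized (here with a crude bag-of-words count rather than a real learned embedding), the query gets encoded the same way, and the closest match comes back with its source URL attached, which is where those attribution links come from. The corpus, URLs, and vocabulary below are all made up for illustration:

```javascript
// Toy retrieval-augmented lookup: crude bag-of-words "embedding" + cosine similarity.
// Real RAG systems use learned embeddings; this only illustrates the shape of the idea.

function embed(text, vocab) {
  // Count how often each vocabulary word appears in the text.
  const words = text.toLowerCase().split(/\W+/);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

function cosine(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

function retrieve(query, docs, vocab) {
  const q = embed(query, vocab);
  // Score every document against the query; return the best one with its citation.
  return docs
    .map((d) => ({ ...d, score: cosine(q, embed(d.text, vocab)) }))
    .sort((a, b) => b.score - a.score)[0];
}

// Made-up corpus with source URLs, standing in for the "other database" Dave mentions.
const vocab = ['flexbox', 'grid', 'color', 'gamut'];
const docs = [
  { url: 'https://example.com/flexbox', text: 'Chris talked about Flexbox and more Flexbox' },
  { url: 'https://example.com/color', text: 'A post on color and the gamut of displays' },
];

const hit = retrieve('how does flexbox work', docs, vocab);
console.log(hit.url); // the citation: which document the answer came from
```

Graph RAG, as Dave describes it, layers entity summaries ("Chris talked about Flexbox") on top of this kind of vectorized store before retrieval.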

But anyway, the next black box is training. How was it trained? What are the weights and the layers and prompting fine-tuning?

I found out Ollama has all these weights and stuff. Ollama.com has a lot of these weights and stuff like that listed on there, so that's kind of interesting.

But if I downloaded DeepSeek, and I ran it and trained it on my computer, would I get the same MD5 hash for the final binary? You know what I mean?

Chris: Hmm... Would you?

Dave: I don't know. That's something I don't know.

Then the other black box... I'll just go through these.

Energy: what is the cost of a model, like cost to train, cost per query? And then I assume you're not just training one model at a time. You're probably training 100 models and only one makes it out the door. So, what is the cost of that? What is the cost in dollars?

Chris: Oh, that's interesting. Yeah.

Dave: What is the cost in gigawatts? I've heard the whole, like, 500 mL per conversation and stuff like that. I would love to know concrete costs.

Chris: Yeah.

Dave: Privacy is another black box.

00:50:08

Chris: I'm interested in that water one, too. It seems like an interesting metric that, to me, kind of came out of nowhere. I haven't seen it used in other conversations as much. But we know that matter is not created or destroyed, right? So, what does that mean? Where does the water go then? If it uses water, is it bad water that needs to be treated then? Did it become dirty in its usage? Do they get too hot and it's got to be cooled down?

Dave: Yeah.

Chris: Where does it go? Doesn't it just keep going down the river at some point?

Dave: Well, it would evaporate on a cooler and then go up into the cloud - or whatever. Go up into... Sorry, not the cloud. Turn into a cloud, you know.

Chris: I get that it's bad. I just want to--

Dave: Some of these, you don't just put water in the water cooling system. You put sludge. You put a solution. You make it into a solution, and so that water will take years to become water again.

Chris: Kind of is destroyed then.

Dave: Right. Yeah, more or less.

Chris: It's ... water.

Dave: Yeah.

Chris: Okay.

Dave: Not to simp my company too much, but I do know they've been investing in trying to reuse water, like make it a closed system and stuff like that. That's something that, again, I don't know this stuff.

Privacy: What happens when I type into the box? Who gets that data? Does that data train another model in the future? Does that prompt get stored? What happens? I don't know.

The model I'm interacting with, is it machine learning, image gen, LLM, small LLM? Does it have retrieval? Does it have just text embedding? Is it just like a PostgreSQL search? Is that it? I don't hate PostgreSQL search.

Accuracy: How are the outputs judged? Who is judging the accuracy of it, and how are hallucinations tracked? How do human biases fit in?

Chris: This is a hell of a picture of mysterious things that happen.

Dave: Cost: How much is this going to cost me? This is the same problem I have with Cloudinary. I know it's seven florps per blip, but what does that equal if I hook this up to my business?

Then societal impact: How is this going to impact society, like my kids in high school, stuff like that? Are we becoming dumber? Are we creating more bugs? I don't know.

These are just general, this big list of black boxes. I don't know. I'm in a position where I could possibly learn about this stuff, so I really want to have an open mind in that sense. I want to learn. But I also want to understand what's going on, sort of, across all these.

00:52:52

Chris: I hope you do this blog post. That's a lot of mystery. Some of that mystery could be attributed to non-AI products, too. But there's way more.

Dave: Well, and as a technologist, we tend to want to avoid black boxes, right? [Laughter] Why are we just saying, "Oh, yeah! Heck! Let's go!"? [Laughter] And so, maybe I need Swyx on the show - or something - to kind of explain all this to me. I'm just kind of like... These questions I have, I'm openly sort of trying to figure them out.

Chris: That stuff could change any minute, though. I used one of these tools. I probably shouldn't even say this.

One of the things that it did, just when you onboard and open a folder, a new project: it just looks. It just tries to figure out everything that's going on. It onboards itself. Maybe it's just getting ready to RAG it up.

Dave: Mm-hmm.

Chris: It didn't do that stuff on my own computer. It had to go somewhere, so probably an awful lot of code just went up to the sky, got LLM-ized, and came back. Now it's smarter, theoretically, and it's going to help me out more and know more about my codebase. That's good. They're trying to do good there. But at no obvious point was it like, "Don't worry, bro. We're not going to do anything with that code."

It's like, "Cool. Maybe they did say that, and they made that promise," or something. But I don't remember hearing it or reading it or agreeing to it or forcing it or anything. Then even if I did, one guy at the company could just be like, "terp," and flip a little switch and be like, "Actually, I'm going to save all that stuff real quick."

Dave: Yeah, "I know why his butt itches." You know?

Chris: Yeah. [Laughter]

Dave: Yeah. Like they just... Yeah.

Chris: Because you're incentivized to do so. What if you had... I don't know. It's just an easy way to download all of GitHub - or something. [Laughter]

Dave: Mm-hmm.

Chris: But the private one, the one that you don't have access to.

Dave: Oh, the secret one. Yeah.

Chris: Yeah.

00:54:54

Dave: I met a guy. He was asking me, "Do you know about AI?" I was like, "I don't know, man. Sure?" [Laughter]

He was a photographer. Did I tell this story? I feel like I told this story.

Chris: I don't think so.

Dave: He's a photographer, and he was like, "Is it possible to train an AI on my photography?" He wants to build a little, tiny DALL-E or Stable Diffusion on his own art because, you know, he's a photographer.

Chris: Okay.

Dave: And so--

Chris: I think that's kind of doable, but it's--

Dave: I think so. I don't know how... what series of apps--

Chris: I don't think you're training a full new model, though. You're more like making it part of the prompt or something, like, "Here are a bunch of--"

Dave: Yeah. I think you're fine-tuning or something, right?

Chris: Yeah, even that is not the right word, I think.

Dave: Okay.

Chris: But I don't know what it is!

Dave: Yeah. Yeah, so I don't know. Yeah, but anyway--

Chris: It's the middle one. Make it like these.

Dave: And so, again, handwaving about the entire ethics of slurping all of Behance or [laughter] Dribbble or whatever onto your LLM. If you're an artist, and you're like, "I want to--"

If somebody came to him and said, "Hey, can you get us a picture of a penguin in Antarctica?" he can either fly to Antarctica and try to get the perfect shot, or he could say, "For less than the cost of a plane ticket, I can just LLM-ify you [laughter] a photo, in the style of my art, of a penguin from Antarctica," or something like that. You know what I mean? He could kind of create a shop, create art based on his art. You know?

Yeah. I don't know. It's kind of this, like, what's the ethics of that? He's really just sort of stealing from himself? I don't know. [Laughter] But if he's okay with it.

Chris: That's interesting. That could be a good sci-fi story, like, in the future you only have to do 100 paintings or write 100 articles. Once you hit that threshold, the machines have enough.

Dave: Yeah. You never have to do anything again. Yeah. Yeah.

Chris: Yeah, you never have to do anything again.

Dave: Oh, my gosh. Maybe that's it. It's just a bunch of people trying to get to 100. It's not even the 10,000 hours.

Chris: Oh, yeah!

Dave: Yeah.

Chris: But that's the twist is that people are just so stupid and lazy that it's just really hard to even get to 100. You know?

Dave: Well, yeah. Yeah.

Chris: [Laughter]

Dave: Once you hit 100, everyone can get to 100. So, what you really got to get to is 1,000.

Chris: Hmm...

Dave: It's just like the Joneses, man. Anyway, keeping up.

Chris: [Laughter] Why did that--the fact that it went up by a power of 10--make me think of one of those dumb dad joke reels I was listening to the other day? It was like, "Did you hear that the capital city of Ireland is exponentially growing?"

Dave: Yeah?

Chris: Yeah, it's Dublin. [Laughter]

Dave: Nice. Good, good. That's a good... yeah.

Chris: Just Dublin. [Laughter]

Dave: I mean we could probably just wrap the show up on that, man. I think that's a good one.

Chris: [Laughter]

Dave: Yeah. Thank you, dear listener, for downloading this in your podcatcher of choice. Be sure to star, heart, favorite it up. Shart it up. That's how people find out about the show.

Follow us on the good ones: Mastodon and Bluesky. Then, yeah, join us where the party is over in the D-d-d-d-discord, patreon.com/shoptalkshow. Chris, do you got anything else you'd like to say?

Chris: [Lip trill] ShopTalkShow.com.