617: Economic & AI Vibes with Jason Grigsby

Download MP3

We're chatting with Jason Grigsby about what a white-collar recession means, how the sources and methods of consuming news shape our perspectives, whether the current economic conditions represent a market correction and if a rebound is imminent. We explore the critical decision of whether to embrace AI advancements or risk being left behind. We also talk about AI-generated voices, large language models and ethics, and the impact of social media signals in an AI world.



Jason Grigsby

Web · Social

Co-Founder of Cloud Four. Author of Progressive Web Apps from A Book Apart.

Time Jump Links

  • 01:04 Introducing Jason Grigsby
  • 06:21 The white collar recession
  • 12:11 Where and how we get our news affects our perspective
  • 15:51 Is this a market correction and will things bounce back?
  • 19:56 Should we dive in to AI or are we going to be left behind?
  • 27:05 Environmental promises plus new tech
  • 36:02 Voice generation by AI to build voices
  • 43:37 Are there any large language models that check all the boxes?
  • 49:02 Google Zero & social media signals

Episode Sponsors 🧡


[Banjo music]

MANTRA: Just Build Websites!

Dave Rupert: [Speaking in an ominous voice] Hey there, Shop-o-maniacs. You're listening to another episode of the ShopTalk Show. I'm Dave--in the shed--Rupert. With me is Chris--in the booth--Coyier.

Chris Coyier: I think that's Dave's "I can tell the future" voice, or he's going to take some guesses.

Dave: I'm soothsaying, over here, you know, just saying some sooths. You know? [Laughter] Living my sooth, Chris. Living my sooth.

Chris: How is the--? I mean if you had to guess, how is the economy doing?

Dave: Oh, stonks is up, I believe, is what I hear. Stonks is up, but also it's really hard to get a job in tech.


Chris: Oh... Good to talk with you, Dave. We also have a guest on today that I don't believe we've ever had on - insanely. It's a pleasure to have on Jason Grigsby. Hey, Jason. How are ya?

Jason Grigsby: I'm doing well. Thanks for having me.

Chris: Yeah, sure.

Dave: We've really never had you on?

Jason: Yeah. No.

Dave: Is that true?

Jason: No.

Dave: Wow.

Jason: 600-some-odd episodes.

Chris: We know.

Jason: And this is the first time.

Chris: We've crossed paths many, many, many times.

Jason: Yes.

Chris: We are kind of in the same circles, as it were.


Jason: For those who are listening and not able to see this, they are not wearing lederhosen. The last time I saw them actually do a ShopTalk Show, they were both dressed in lederhosen.

Chris: Oh, my God.

Jason: So, I had unrealistic expectations.

Chris: Right.

Jason: I brought my lederhosen, but they didn't.

Dave: Yeah, you're dressed up. Yeah. I appreciate--

Chris: Wow.

Dave: I should have sent a notice.

Chris: Just like Patagonian-ing it up over here and Jason is in full gear.

Dave: That was the ultimate An Event Apart, I believe.

Chris: Is that? I was like, "Where were we?"

Jason: Was it? There were a couple in the pandemic, and I feel like that was--

Dave: I think it was the second to last one, if I'm not--

Jason: Was it?

Dave: Yeah.

Jason: Okay.

Dave: Because they had one in San Francisco after that - or something.

Jason: Yeah, so it was the one in Denver? I attended one during... you know. I mean the pandemic... COVID is still here and everything, but I remember I went to one conference during that period of time.

Chris: Well, that was it. Good stuff. Jason, you are kind of owner and proprietor with partners at an agency called Cloud Four out of Portland, yeah?

Jason: Correct, yeah.

Chris: Yeah. Right on.

Jason: Yeah, been doing it since 2007. Yeah.

Chris: Wow, really?! Oh, nice.

Jason: Yeah.

Chris: That's cool. That was the year I started CSS-Tricks, I think, so it was the same trajectory that way, too. No wonder we're kind of in the same cohort, as it were.

Jason: Yeah, exactly. And we write a lot about the Web there, in addition to doing work for folks.

Chris: Yeah. A must subscribe, I'd say. You have a stable of excellent writers/partners, I guess, mostly. Right?


Jason: Yeah, I mean people who work at Cloud Four, we do projects for clients. And what we learn during the projects, we tend to write about. And so, we try to give back to the Web.

Chris: That lends it a little bit of extra credibility, I'd say, generally. There's a lot of tech writing that doesn't do that. I try to sprinkle it in when I can, into my writing, to give it that level of credibility, but yours is just extra there because you're like, "No, this is what we literally did for this literal client that I'm telling you about." Whereas my experience, I don't really do client work, so I have to draw from other experiences.

Jason: Well, I mean sometimes it's that. Sometimes it's just somebody decides that they're really curious about something. They go off, and they research it and learn it.

Chris: Yeah.

Jason: But it's been a throughline for Cloud Four. In addition to the work we do, we're trying to teach other people and share what we learn. We do that both in the projects where we're oftentimes--

We have people at other companies who embed inside our team to learn, or we will do workshops or sessions. It's just part of the culture of the organization that we're doing that, and then writing becomes a big part of it as well.

Dave: It's really rare for a company blog to be as good as yours.

Jason: Thanks.

Dave: The post quality is through the roof, and yeah. I think every company is like, "We're going to start a blog," and it just is very bad.

Jason: Yeah.

Dave: But yours is consistently maintained, consistently updated. Very--

Chris: There's a "Here's our new hire." "I like hiking." Then "Here's what five employees' favorite books are." Then never post again.

Dave: Yeah.


Jason: When we were... I wrote a book on progressive Web apps and, for a while there, I was... I had a Twitter search - or something - going for anybody mentioning progressive Web apps and links related to it. Over and over, I would see the same articles, particularly on agency websites, which were like, "Hey, have you heard about this new thing called PWAs? Here's what it is." Then they'd repeat what Alex and Francis wrote at the very beginning. Then they'd say, "Could this be the next big thing? Come talk to us and find out how we could work with you on it."

Chris: Yeah, the point of it was marketing.

Jason: Just over and over again, and it's gotten worse with AI, right? All of the just generated content stuff.

Chris: Mm-hmm.

Jason: And so, I'm not saying that everything we write is unique. I'm certain that there are things that we've written that have covered topics that other people have already written about. But everything is human, and everything is something that we spend some time on and are based on things that we've been researching or learning. We try to bring something, some new perspective to whatever we end up sharing.

Chris: Well, that's nice. We had... We've been talking for a while now about this show and, like, what are we going to get into? You know all kinds of stuff about the Web. I want to talk about some Web stuff, but there's this other lingering thing. There's a little bit of cloud over this. It's so interesting.

By some economic indicators, the United States is doing just fine. The economy isn't a big disaster. Then there are some of us who are feeling like, "But yes it is! It's a total disaster! Ack!" You know?

We talked about Dave finding this new job and having it not be the world's easiest thing in the entire world. And you're like - whatever - projects aren't exactly flying in the door the fastest they ever have.

Jason: No.

Chris: And there's me trying to run a business based on advertising and have, for a long time, watched that market go to crap and have to pivot and think about other things that we can do because that industry seems like in the crapper. There are just a lot of stories like that.

There are a lot of people really struggling, especially post... What's their fancy word for it? The Great Resignation - or whatever. It used to be cool to quit. That's fricken' over. You know?

It's not cool to quit anymore. You've got a job now? You better hold onto that sucker because there are people getting fired right and left. Not exactly walking into other jobs.

What's going on? You sent us an interesting article called "The White Collar Recession," and it might just be because of who we are (a little bit). Could you tell us what's going on there?


Jason: Yeah. Yeah, so we've been talking a little bit about how it's been a struggle in tech, in general. It's been a challenge finding projects. I think it's been more of a challenge over the last few years than it has been in the past. And it's been hard to figure out what's going on there.

I don't know that we definitively have an answer. I'm sure that there's some combination of things that we're doing, things that are going on at a macro level, divided attention, stuff of that nature.

One of the things my business partner Megan had actually shared was this article in Business Insider called "Welcome to the White Collar Recession." In it, they talk about Vanguard looking at the national hiring rate broken down by income level and showing that for people who make less than $55,000, the hiring rate has held up well, but for those who make over $96,000, hiring has slowed to 0.5% (less than it was in 2022). So, it's the worst that it's been for that group since 2014. And so, because of it--

Chris: Which we happen to be in, I'm afraid.

Jason: Yeah.

Chris: We're lucky.

Jason: Yeah, the tech industry tends to be higher paid. Yeah.

Chris: Yeah, and so yeah. It's possible that it feels bad if you're doing well, which there's a little irony or something to that. But I could see it. It makes me... I don't know. It explains it at least a little bit to me.

Jason: Yeah. I think that and the interest rates being higher and the lack of capital going into the industry, things of that nature. That's my suspicion. Everything in tech seems to be belt-tightening unless you happen to be doing something with AI, in which case you can get billion-dollar valuations with no rational reason.

Chris: Well, that's a thing, right? If you happen to have got on that wagon, probably was smart. At least from "I want to make money out of the world" perspective.

Dave: Economic security. [Laughter]

Chris: Yeah. Maybe it won't last, but hey. You got yours while the getting was good. You know? Yeah. From your perspective, everything is going great.

Jason: [Laughter]

Chris: You know? Not for me personally. You're talking about the hypothetical person.

Chris: That person probably feels like the tech industry is popping off right now because we have a tendency to extrapolate what we're experiencing to what everybody is experiencing (at least to some degree). Although, you did point out the exact opposite of that. That some people, despite them personally doing pretty well, can have an overall opinion that the economy is doing poorly.


Jason: Oh, I was going to say that was something that has been a little sort of at the macro level again. There's been this dissonance in survey respondents saying that they themselves are personally doing well but that the economy is poor. And again, I think that that is when you're surveying at a national level, you're getting people across a broad segment of sectors. And so, it's not just in tech.

I think what the Business Insider article was looking at was, like, "What are the vibes in white collar versus what are the vibes generally?" It is a bit of a strange dissonance at the macro level. And there's just a sense, like, this survey... Unfortunately, I don't have it in front of me, but it was like people believed that unemployment was worse than it was. People believed that GDP growth was worse than it was.

It was like all of these things that have actually been good indicators are things that aren't getting through, for some reason. Then I think tech is its own separate thing.

Dave: Social things, too, like who you voted for, for president, probably impacts your feeling on whether or not--

Jason: Oh, gosh. Yes.

Dave: --the current president is doing good or bad. You know? What news station you watch probably impacts how you feel about the economy as well, or just what you're hearing about it, right?

I watched a Daily Show thing, I think this morning, and it was just lamenting. He made a joke, but it was just like we get all our news from dudes in cars on TikTok now, so maybe we don't have the best perspective on the economy.


Dave: You know it's just guys in sunglasses yelling on TikTok in a car.


Jason: I had... Sorry to go on a little tangent here.

Dave: Yeah, go for it. Do it.

Jason: I was researching somebody yesterday, and in the process of... A professional CEO of a company. In the process of researching them, I came across their TikTok. They were doing the same thing, lots of these little clips in their car.

As somebody who has spent a lot of time over the last four years working from home, staying from home, I'm like, "Why are all the TikTok videos in cars?" Why aren't they in people's homes or whatever? I have no... It never occurred to me until I looked at this wall of... This person has a nice office. Why are they always doing their TikTok videos in their car?

Dave: Huh. Yeah. I wonder if it's just a trope you can't escape? It's like that's the format. You know?

Jason: Yeah.

Chris: Maybe. Have you seen the ones where they have a microphone like this, but they don't have a podcast? You know?

Jason: [Laughter]

Chris: It's like the fake podcast talk. I love that.

Dave: I wonder... Yeah. I wonder if it's this "authenticity". I'm doing hand quotes for the audio listeners.

Chris: Oh, definitely.

Dave: An authenticity thing where you're like, "Oh, this is just... I'm just spitting off the dome in my car, man. I was getting some Chick-fil-A, and now I'm just going to hit you with some economic facts."

Chris: Yeah.

Jason: [Laughter]

Dave: Not like--

Chris: Like, I'd do it in my shower if that was cool.

Dave: Yeah.

Chris: But that's--

Dave: But that'll get me flagged or banned.

Chris: Yeah.

Dave: Demonetized. So, I wonder if there's this attitude of, like, "This is just so off the dome, I haven't been to college for this." You know? [Laughter]

Chris: Or maybe it escapes criticism, too, because if somebody is like, "What are you talking about, man?" I'd be like, "Man, I was just poppin' out. I just got a Big Mac. Don't worry about me."

Jason: Yeah. It was one of those random thoughts where, you know, it's been years of watching people do TikTok videos in their cars - or whatever - and then, for the first time, I was like, "Why are they all in the car?"

Dave: Yeah. Yeah.

Jason: What's up with that?

Dave: Now you've got me thinking, and I will spend the rest of my day thinking about--


Jason: Sorry.

Dave: I wonder if, as Americans, we spend a lot of time in cars? But then you're like, other countries, they're doing it, too. Anyway. Yeah, interesting.

Back to the vibe session, I think is what the article called it.

Chris: Yeah, the vibe session. Yeah, they did.

Dave: For me, that's ultimately what it comes down to. I know. I've looked at the data. The data is good. Stonks are up. We're doing good.

But I think that critical point in the data, you're saying, white collar above $95,000 is down, and so you're probably feeling it more. For me, it all comes down to a feeling, and that feeling is mobility. If I needed a new job tomorrow, could I get one?


Chris: Well, if you wanted a $55,000 job, you absolutely could get one tomorrow. I would also think that the chart gets worse. I mean there are only two points we have so far on this chart, so I don't know. Grain of salt here, but we said six-figure-ish, you know, $95,000 and up.

Jason: Yeah.

Chris: What does it look like at $200,000 and up? Is that even worse? What about $300,000? I hate to... I don't want anybody to feel like we're just a bunch of rich jerks or whatever, but those are actually kind of the numbers when you've been working many decades at a high level in this industry. It's not too far off from that. You know? Is it even worse there? Yes?

Dave: Well, and then what is--? Yeah, is that worse? I would love to know that data, if that number goes even farther down. But then the question I have is are we just caught up in a huge market correction? Is this just, we got way too high on our own supply about big ol' tech salaries and now the reaper man is coming.

Jason: [Laughter]

Dave: You know? Is that what's going on? I don't know.

Chris: Well, AI is looming over the industry, too, being like, "How useful actually are you, big boy? Can a computer do your thing?"

Dave: Oof.

Chris: Yeah. I don't know.


Jason: Yeah, I tend to be an optimistic person, so I'm cautiously optimistic in this regard. I think that there has been a sort of hold, as people try to figure out what they want to do next, as companies try to figure out what they want to do next. I'm hopeful that as people tend to see the overall economic numbers and, hopefully, the Fed will reduce interest rates, and that'll free up some money as well. Maybe we get through this election as a country. That would be really nice. And then maybe we can start building some stuff again.

Chris: I like it. Optimistic Jason. I'll take that. It feels good.

Jason: [Laughter]

Chris: Dave has been optimistic about it, too. Every time I've brought up just how long this one has felt (even though perhaps it's just for us here at the top in our ivory towers), you've said things tend to bounce back, right? "They always have," were your words, Dave. You know?

Dave: They always have, but you know what chips away at my armor there is the layoffs keep happening. Indeed is a big company here in town. Famously, "We're going to hire 3,000, 5,000 people," or something, like two years ago. And now they just laid off like 1,700 people - or something like that. And it's just like--

Chris: Oh, my gosh.

Dave: Brutal. I've got more than a few friends caught up in it.

Chris: Do they post their job openings on their own site? Is that a little too high on your own supply stuff?

Jason: [Laughter]

Dave: It might be. Yeah, yeah. But that just goes to show. The job company is having trouble [laughter] keeping jobs. So, I'm just like, "Ugh! This is tough." But again, I don't know. I kind of just cling to that belief that it turns around eventually. But you know it always took something to turn it around, and maybe this is where we pivot to talking about Jason's explorations into AI and stuff.

But we got the iPhone. Boom. Mobile Web. Oh, man, that's a whole heap of work to do.

Oh, people don't like those m-Dot websites, so whoop, let's make responsive websites. Oh, that's a whole heap of work to do. You know?

Maybe is AI the next thing? I don't know. Jason, you kind of started poking at it, huh? Or seeing what it's good for?


Jason: The full credit, actually, goes to my business partner Megan Notarte, who started AI Portland. She's been really exploring this space quite a bit. I've been kind of on the opposite side, holding back against it, trying to figure it out.

The big question in my mind was, "If we use AI, is there a way to use it in an ethical way? Is there a way to do so that is consistent with Cloud Four's values? And to do something that doesn't create greater environmental harm down the road."

It's been, I think, something that I've wrestled a lot with. It's something that we've wrestled with at Cloud Four, trying to figure out how to strike a balance there because one of the things... So, we had these three conclusions that we came to.

The first was that no matter what we do... There are a lot of people who actually argue that AI is inevitable, and so we should go... We've got to just get on the train or we're going to be left behind. Right?

Chris: I would think that's the vast majority of people, even just off the street. They've heard so much about it. You ain't going to stop it now.

Jason: Yeah.

Dave: It's like an inevitable-ism, right? Just like it's going to happen.

Jason: I find that argument to not have enough merit from a personal ethics perspective just because - I don't know - maybe your country is tending toward fascism. Does that mean you should become a fascist? Just because something may seem inevitable doesn't mean that you don't fight against it. Right?

The fact that AI may seem inevitable doesn't mean that it's necessarily right or moral. So, the first part, though, was that there is a piece of it that is inevitable, which is that we are going to have projects and clients, in our own work, opportunities to use AI. We're going to get asked questions about it. We have to come to some conclusion about when does it make sense to do so.

The second thing is actually not all AI is bad. I like the fact that my robot vacuum uses AI to recognize dog poop and doesn't run over it. I like the fact that my iPhone will use AI or machine learning to take better photographs.


Chris: There's infinite nuance there, isn't there? I can like that, too. I love that your vacuum cleaner doesn't smear crap all over your living room floor. But at what cost? It's probably low - just as a guess, in this industry.

Jason: Right.

Chris: But what if you learned that it had to make a server request to do that, and that server request cost as much electricity as it takes to power a home for a year? Your mind would change. You'd be like, "Ooh, screw it." You know?

Jason: Yeah. Well, and the examples that I'm giving, part of my assumption, actually, is that, like you said, the amount of energy consumption for the AI features in that robot is much less than the battery consumption of the robot itself doing its primary job. It's not a huge cost for that.

One of the things that I'm looking for across the board are places where AI gets pushed to people's devices as opposed to going back to these large data centers. And you see that, actually, in the iPhone example with the machine learning on photographs.

Chris: Yeah. We've seen this more and more. Shipping little tiny models to devices, it just is more efficient and such. Right?

Jason: Yeah. Yeah, so the things that we started asking ourselves was, okay, if we're going to do a project, how do we evaluate AI models to try to determine... Can we determine, between two models, which one might be better for us to use? Then how can we actually make sure that the way that we use AI, the way that we create the UX for it inside of applications, is actually reducing the risk, reducing the environmental impact (if we can)?

It's not perfect, and I don't think that you can succeed fully in that, but it might give us a little better outcome than not asking those questions ahead of time.


Chris: Right. I like the environmental one. That's a pillar of evaluation of it.

My other personal pillar, though, is the "How was it trained?" one, which is a big old secret all around. Nobody really wants to tell you how it was trained.

For somebody that's created an awful lot of content out there, it feels just especially raw to me to know that absolutely nobody--

Jason: But there are models and services that differentiate on this. This is exactly what they're promising, right?

Chris: Right.

Jason: I don't think Adobe is getting everything right in this space. But Adobe, in providing image-related AI, is using--

Chris: Yeah, said that we trained it on our stuff.

Jason: Right, exactly, things that they already own or that they license.

Chris: Well done. Big ol' golf clap on that one.

Jason: Yeah. [Laughter]

Chris: I also hope it's true. You know? You never know.

Jason: Like Codeium, I think, is the name of the product that is an alternative to Copilot.

Chris: Love Codeium. I don't know what you're going to say, though. Did they train ethically? I use it, so I kind of want you to say something nice next. [Laughter]

Jason: Well, so--

Dave: Be nice to the things I like.

Jason: So, my understanding, right, the promise that they make is that they haven't trained it on copyrighted or copyleft code. And so, they could be lying about that. I can't say for certain. But you have less worry about both it using code that it shouldn't have used and also less worry about it hallucinating its way into inserting copyrighted material into your projects.

Chris: Right.

Jason: Yeah, I do think... Not for all of them can you make those decisions. But if you have the choice between them, does the AI model actually acknowledge bias? And if it doesn't, then that's probably pretty suspect.

I don't know how much the sustainability reports that these companies provide are worth when they're... You know like Microsoft announced recently that their energy consumption has gone up. I don't know how they're going to get to their sustainability goals. But at least they're providing the data.

Chris: Yeah. Isn't there some--? You've got to poke at that a little bit. Isn't that funny how a company can make a big pronouncement of where they intend to go with energy consumption, and then a new tech comes along that's a little flirty, a little hot, and they're like, "Eh, forget all that"? I'm not saying that's exactly what they're doing, but it does feel like that. And I'm sure they're far from the only one. Don't get mad at Dave. This is me talking at the moment.

Jason: [Laughter]

Chris: You know?


Dave: I like my job. Please don't....


Chris: I don't know.

Dave: Chris, bleep this whole section out. Chris Enns, bleep this whole section.

Chris: Ah!

Dave: Go ahead. No. Finish.

Chris: That's all. You see where I'm going with that, right? I don't know. It's fun to throw out. Can anybody just say anything? We're just off the heels of all this Scarlett Johansson stuff, too. We've seen all these documentaries somewhat recently of how Uber operated and how the different big companies are like, "You know how you get stuff done in tech? You just absolutely ignore all laws and governments entirely. That's the only way anybody has ever done anything great." God dang it! You know?

Jason: Yeah.

Chris: You can just say anything. Nothing matters anymore! Ack! [Laughter]

Dave: This is great. We've got optimist Jason versus nihilist Chris. This is great. This is good. This is a good dynamic. Here we go.


Jason: Well, I don't know that I can be characterized as optimistic on this. What Megan and I tried to do in this series of articles was to make sure that we're at least asking the questions. Right? So, we're asking questions as we pick AI models, and we're asking questions, like, as we pick a usage, what processes are in place to vet answers from the AI? Because we know that it will hallucinate answers, and so we've got to make sure that there is some way that humans are in the loop and that they're checking things.

Asking ourselves, like, "What happens if it gets something wrong? What are the ramifications for that?" Not writ large. Writ large, I have much less control over that. But what we do have control over is within the scope of the projects that we're working on, within the scope of the things that we choose to use in our day-to-day work.

We have the ability to make sure that we ask these questions of ourselves as we're choosing to implement it. And so, what happens if it gets it wrong? Well, it's a very different thing if what happens is that it generates a summary and a human looks at it, one person based on their own content, and they're like, "Oh, this is wrong." They can make a decision about it before it goes out to other people. Versus examples where it's providing... Like Air Canada, where it provided false information about fares to a traveler and invented a policy out of whole cloth that Air Canada was then forced to honor in court.

That sort of scenario, right? Both of these are possible scenarios. But if the risk is low if it gets something wrong, then maybe you could use it in a safer way than if it's... I don't know. There are places where they're using AI where somebody's livelihood or freedom is affected. I think that is a place where, if it gets something wrong, it's terribly wrong.

Chris: If you're OpenAI, you just put... you have an input where you type in the prompt. And then, in little gray letters below it, you just say, "ChatGPT can make mistakes. Check important info." Then you wipe your hands and go about your day.

It's easy to make fun of it, but I wonder if they are kind of right, or I wonder if there's a cultural expectation, as this stuff advances, that we're like, "Yeah, we know it's wrong sometimes."

I'm already getting a little overloaded on, like, the media pointing it out. They're like, "Look, there's glue in the pizza," or whatever. Any time there's an AI mistake, people circle it in red crayon and put a picture of it, and we all laugh about it on social media. There's only so long that will go on until we're all like, "Yeah, yeah, yeah. I know it's wrong sometimes."


Jason: But again, what is the cost of it being wrong? One of our clients, Champion Power Equipment, provides generators. It's one of their core products.

They help a lot of people who are dealing with power outages, hurricane season, things of that nature. If, for example, AI was being used (and as far as I know, it's not -- but if it was) to summarize user manuals and things of that nature, and somebody whose power was out was trying to troubleshoot a problem and was communicating with an AI chatbot and getting erroneous information, they'd be wasting the limited energy they have on faulty information provided by AI in what could actually be a pretty critical situation for them. That seems pretty bad, right? Versus other use cases, like when I recently asked AI to respond to every question as if it were a duck. That's probably pretty low risk.

So, I do think that those types of questions end up being important. Then there are use cases where AI is useful. Hopefully, you can find ways to do it in a more ethical fashion.

Chris: Let's say... You have blogged this all kinds of interesting, good questions about evaluating these things. I imagine there's... I don't know. It's not like you wrote it like this, but maybe there's a score. They're not all yes or no questions. They're like, "How good did you do on this, a little bit, one through ten, let's call it?"

Jason: Actually, there was a version of this where that was what I was going for: you'd fill it out, and you'd answer specific questions. Then you'd get a certain number. Then at the end, you'd add them all up. Then there'd be ranges based on that.

Chris: Sure. Didn't feel good in the end or what?

Jason: You know the problem that I had was that the questions needed to be explained. You couldn't just ask the question without context about why you were asking this question and why the answers mattered and what it meant. Like, what does it matter whether we're trying to figure out how much of the processing that AI can do can happen on the device versus happen in the cloud? What are the privacy implications of that? What are the environmental implications of that?

Chris: It's not exactly a one through ten kind of answer, right?

Jason: Yeah. Yeah. I do think, though, that you could... One of the companion pieces we've bandied about is to actually build a worksheet like that to go with this, something somebody could print out and answer the questions on, or fill out online, something like that.

Chris: Will people disagree? Is it objective or subjective, though? One given model, does it just have a score that is like, "That is the score that it is," or could two people disagree on what that score is?

Jason: I'm sure that they could disagree.

Chris: Well, yeah.

Jason: I'm not talking--

Chris: Now that I said it like that. [Laughter]


Jason: Yeah. Yeah, and I'm not certain that you can... Again, one of the realities in writing these questions is that some of them will end up unanswerable for different models. You can't get energy usage information. You can't get... Like you mentioned, you can't get information about training data from a lot of the models.

Chris: Right. Maybe that should be at the top of the worksheet then, like, "If you don't know, assume it's not good."


Chris: You know? Nobody hides information that's really good. Everybody is trying to market stuff. If they're like, "We're the lowest energy usage model in the world," they would tell you that.

Jason: We could find ourselves in a situation where we've got something that could use AI to provide a tremendous, tremendous value, like help.

Have y'all seen Apple's Personal Voice? Apple released this feature where somebody who is potentially going to lose their voice can record their voice and use AI to recreate it. The ad they have for it is incredibly moving.

Chris: It's already a thing? You could just do it?

Jason: Yeah. Yeah.

Chris: Oh, really?!

Dave: You can reconstruct voices, too, so if somebody already lost their voice, you could use old videotapes or something.

Chris: Wow!

Jason: Yeah.

Dave: Reconstruct their voice, yeah.

Jason: Yeah, so the ad for it is amazing. It's really well-done and very moving: a real person who has lost his voice or is losing his voice, reading a storybook to his kid.

Chris: Nice.

Jason: I immediately thought of my father, who, a few years back, had a stroke and ended up with a trach. We took care of him for quite some time through the pandemic until he passed in October. I spent three and a half years without him being able to speak. All I have is the old recordings of his voice. I thought how amazing this would have been for him (during that period of time) to be able to actually type out the things that he wanted to say and have those things conveyed in his voice instead of us trying to read his lips for three and a half years. Then how nice it would be now to be able to have his voice read old letters to me or something of that nature.

When I see something like that, I'm like, "Okay, well, this is a compelling and transformative use of AI, right? Something that could make a really big impact in the world."

Maybe we've got a project where we've got something that's at that level of good but that, in order to do it, we have to pick between two models where we can't look at their training data.

Chris: Exactly! That's what I mean. There are all these... Not to make a joke of it, but it feels like sci-fi. You're like, "That's incredible! I wouldn't even have thought of that when I was a kid. What an amazing thing it can do." But then if it was a sci-fi book, it would have some dark ass angle.

Jason: [Laughter]

Chris: It'd be like, "Yeah, if you want it, though, you have to kill a llama with your bare hands," or some crap. You know? It wouldn't just be like, "Oh, this technology exists for free. Everybody gets it." There's going to be some tradeoff.


Jason: My hope is that if we're fortunate enough to be in that position, to be in a position where we have a project that might have that level of impact on people's lives, that in asking these questions, if we can't answer every question, we can answer enough of them that we can get a good sense that using model A would be better than using model B for these reasons. It's not going to be perfect, but that I think is better than not asking any of the questions at all.

Dave: Yeah, a lot of the "Is it good?" centers around the number of tokens and the size of the model right now, and that is sort of a predictor of accuracy (for large language models specifically). There are other kinds of AI. But it would be neat to see all these other things kind of baked in, like, "What is the - whatever - cost per query?" Energy cost per token - or something like that. That would be interesting. Or data sourcing - the ethical sourcing of data - how does that happen?
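[Editor's note: Dave's "energy cost per token" could be sketched as back-of-envelope arithmetic. Every number below is a placeholder assumption -- hardware power draw and throughput figures are rarely published for hosted models -- and the point is only the shape of a comparison between a hypothetical cloud-hosted model and a hypothetical small on-device one.]

```python
# Back-of-envelope "energy per token" comparison. All figures are
# illustrative assumptions, not measured or published numbers.

def energy_per_token_wh(gpu_power_watts, tokens_per_second):
    """Watt-hours consumed per generated token, assuming steady-state power draw."""
    return gpu_power_watts / (tokens_per_second * 3600)

# Hypothetical profiles: a large cloud-hosted model on a datacenter GPU
# versus a small model running on a phone-class chip.
cloud_model = energy_per_token_wh(gpu_power_watts=700, tokens_per_second=50)
local_model = energy_per_token_wh(gpu_power_watts=15, tokens_per_second=20)

print(f"cloud: {cloud_model:.6f} Wh/token")
print(f"local: {local_model:.6f} Wh/token")
print(f"ratio: {cloud_model / local_model:.1f}x")
```

Under these made-up inputs the cloud profile comes out roughly an order of magnitude more energy per token, which is the kind of number a worksheet question could ask vendors to substantiate.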


Jason: One of the things that I've been... I've been excited to see what Apple does with AI for a few reasons. One is that they have had NPUs in phones and devices for years, right? They tend to want to run things on device. They tend to try to differentiate based on privacy. I'm really curious how far they can get with that versus the rumors that they're going to license OpenAI or license Google Gemini.

Chris: It does seem like a natural extension, right? People have been telling them forever, Siri sucks. Maybe if they're sick of hearing it enough, they'll just buy the Scarlett Johansson thing and throw that in there.

Jason: Well, my hope is that they put as much as possible on the device. At Google I/O, one of the things that I was interested in (both because I'm wary of it and because I'm glad to see more stuff moving toward devices) is the fact that they're doing Gemini Nano in Chrome, so it'll be baked in.

Some of the use cases that we're finding most interesting on a current project are related to suggesting tags and summarizing text and things of that nature. I'm not sure I need to go out to a large language model API to do that. Maybe I could just have that smaller AI in the browser and ask questions of it. Of course, that then ends up being Chrome-specific.

Chris: I downloaded an app the other day called MacWhisper because somebody recommended it as a nice, local tool for, like, "Hey, throw an MP3 in here and get it all transcribed up and whatnot." It really looks like a nice app, but when you download it, it's whatever, 20 megabytes. But then the first thing you've got to do is pick models for it to go get. Then it uses those on device to do what it's going to do.

There are three of them. It's like, "Do you want the tiny one that's a little bit accurate but is only a couple of megabytes?" up to bigger and bigger and bigger ones. Then they put a pro gate on the really, really big ones that are super accurate, which you pay for.

Jason: Yeah.

Chris: I guess that's this playing out, right? Small models equal cheap, easy, efficient, whatever. Sometimes that's just good enough. Picking those models is fine for some applications.

Jason: It's fun. WebLLM, you can go there, and you can see demos, and it'll download the models into the browser. Then you spend a long time waiting for them to download. But then you can run that stuff locally as opposed to having to go out to the cloud for every sort of response that you want.

They also support Stable Diffusion. It doesn't work as well as the larger ones, but again, pretty interesting stuff happening in that regard.

I don't think we can continue to expand data centers at the rate we've been expanding them. So, if we can use the CPUs and GPUs of people's devices, maybe we can reduce the amount of energy that these models consume.


Chris: To circle back to that, going through the Cloud Four blog post wringer, you were writing these questions. You probably had models in mind. Have you ever seen a model that basically checks every box that you set out or answers every question pretty satisfactorily?

Jason: No. [Laughter] But I would also say that I was coming from a much more skeptical perspective in trying to find a way to look at AI that was something that I thought I could use. And so, I haven't looked as closely.

I do feel like, within certain specific use cases, that there are AI models that differentiate on this stuff, on pieces of it, which I think is really useful. But do they have every piece of it? Are they addressing the environmental impact? Are they addressing the training data? Are they addressing the privacy?

One of the things that I think is really important is how segmented... What sort of assurances can we have that the things we put into the system aren't going to get hallucinated out by the AI to other people? I haven't seen one, but I would also say that I haven't gone and spent a lot of time looking for that.

I did spend time trying to figure out how to evaluate the environmental impact because that's the thing that is, I think, the hardest to assess. And I did look around at all of the major AI models on that. That was the one where there was the least amount of information. Your best hope is that the company has some sort of sustainability report that they release annually and that, within it, they talk about AI's contribution.

You do see that from Microsoft and Google. But that doesn't tell you, okay, well, what the hell are you going to do about it? [Laughter] How are you going to make it better? So... Yeah.

Chris: Yeah. That's a good one to ask. Google has Gemini. There's a box. You type stuff into it. We know that a huge part of Google's business (all of it, maybe) is advertising-ish. You type into Gemini, "What are the best shoes for wide feet?" they're going to be like, "Gotcha, bitch! I'm putting that on your profile. You want to see some ads for wide shoes? I got you."

There is no way they're not doing that. Or if they're not yet, it's definitely coming.

Jason: Yeah.

Chris: Come on.

Jason: Yeah.

Chris: Use your brain, people.


Jason: Yeah. Also, one of the big things from last week was search now doing the AI overviews. You all were talking earlier about our blog and how we're writing things and sharing things. Part of the reason we write that stuff and share things is because people read them and then they think, "Oh, these are people we'd like to work with." Right?

Dave: Right. Right.

Jason: It's not all altruistic. I would say that a lot of it is altruistic because we kind of write because we can't help it. But also, our hope is that people read the things we write, they think, "Okay, these are really smart people. We'd like to work with them. We'd like to learn from them. Let's bring them in on a project." That only works if people find our stuff.

Chris: And believe it. Eventually, it'll be like... Couldn't I take every word that you've ever written, which is probably south of a million words -- maybe it's getting there, but probably less -- and ask a model, "Write me a blog post just how Cloud Four would write it." You know?

Jason: Yeah.

Chris: I don't know.

Jason: Yeah, possibly. Although, I do think that people have different voices on it, you know, on our site.

Chris: Yeah, they do.

Jason: There are people who write much more briefly or with a lot more brevity than I do. [Laughter]


Dave: Hey. Hey. I've read your nine-part series on how to put an image on a page. We're good.

Jason: Yeah.

Chris: [Laughter]

Dave: It's good. But to your point, if an engineering manager, CTO, or somebody thinks, "Ah, man. My engineers keep telling me we should just make a PWA. I don't know what a PWA is. I'd look too stupid if I ask, so I'm going to Google 'What's a PWA?'"

Jason Grigsby has written a book on progressive Web apps, so hopefully, that shows up in the top five, and Cloud Four content shows up there. But to your point, if Google can make a pretty compelling article on the fly, what does that do? What does that do to us?

Jason: Yeah. I think that is probably less of a concern for us than advertising-based businesses.

Dave: Yeah.

Jason: But it is absolutely concerning. I find myself wondering what's the incentive. Traffic to our site has been down, and I don't know why. I feel like we're writing really, really great content. I suspect it's kind of that social media has been completely fractured.

Dave: Yeah? Yeah.

Jason: But--


Chris: Have you heard that phenomenon? I think it was in a Verge article. They were talking about Google Zero. That's the term they're throwing around, which is that the worse and worse Google search engine results pages get and the more AI they shove in there and all that stuff, the less people click on anything else.

There are lots of reasons for this. It's whatever. But there's evidence of some companies and businesses that can watch their Google Analytics, ironically, trend down, down, down, down, down, and hit the bottom. They hit absolute zero. They get no traffic from Google anymore. That's what happened to them. Not because they don't have good content or aren't producing it or there are any problems. But because whatever small niche they're in - or whatever - the search result pages just do not surface them anymore. They have hit Google Zero.

I have no doubt that's at least partially responsible for what's happening to you. I will say that on the air.

Jason: Yeah. I suspect that it's at least a contributing factor. But I think the social media fracturing has probably been bigger just from--

Dave: That probably cut the signal, right? That cut out the, like, "Oh, this is a good article," or reputable one.

Jason: Yeah, people sharing stuff that they have come across that's useful and sharing it in a way that then gets to other people. I suspect that that's the case.

Dave: I love Mastodon, but there is... You know. There's a lack of sharing. There are not enough people putting links to cool things they found.

Chris: Yeah. It's not pushing traffic like the old days of social media, for sure.

Dave: Not doing numbers.

Jason: [Laughter]

Chris: Right.

Dave: I did hear -- putting on the optimist hat -- I read somebody's take that Google making search results on the fly isn't going to impact those of us who write authentically. It's going to hit the content farms and link mills first. It's going to evaporate their "content strategy," quote-unquote: the strategy of just blasting out articles for whatever thing they think of first. That is maybe a future I would like.

Jason: Yeah. Yeah, I would like that, too. I'm tired of reading through the same articles with a bunch of fluff just to answer one simple question.

Dave: Also, wikiHow is awesome. [Laughter]


Chris: I guarantee. It's 10:00 a.m. where I am today. I could go to GoDaddy after this show, buy a domain name called, like - I don't know - I could sign up for the API for Open AI and just blast that API, just hammer it to give me information about tropical plants.

I could get over 10,000 articles written into a WordPress database and published online before I leave work today -- 10,000, easily. Isn't that wild? If you had no ethics at all, that's what people are doing right now.

Jason: Yeah.

Dave: Mm-hmm.

Chris: Dave was trying to put on his optimist hat and I'm just ripping it off of his head.

Dave: Thank you, Chris.

Chris: [Laughter]

Dave: Nihilist Chris strikes again.


Chris: Dang it! I actually feel generally positive.

Jason: I know, and I feel... The first time on the show, 600+ episodes, and I'm coming in for, like--

Chris: I know. We're the worst.

Jason: --the "Everything is bad" episode.

Chris: No. CSS is amazing. We should talk about that.

Jason: I will say, too, one of the things that Google claims is that, in their data, they actually see an increase in clicks when they're doing AI overviews.

Chris: [Snickers]

Jason: But they won't share the data. [Laughter] I'm like, okay, like, seriously.

Chris: Mm-hmm.

Jason: If there was ever a time to just share the data, now is the time to share the data. If this is true, show us.

They won't distinguish, right now. So, this is stuff from the Decoder podcast last week.

Chris: Yeah, listen to that, too.

Jason: Yeah, yeah, yeah.


Chris: He asked him, "Will you commit to putting a URL param at the end of links that come from one of the blue links below or one of the AI links above so we can look?"

Jason: Yeah, so that we know, so that we can validate, like, yeah, we're still getting traffic from these AI overviews and stuff.

Chris: Right.
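[Editor's note: the tagging Chris is asking for would be easy to consume on the analytics side if it existed. A minimal Python sketch, assuming a hypothetical `source` query parameter on inbound links -- Google has committed to no such thing, and the parameter name here is invented for illustration:]

```python
# Classify an inbound Google referral by a hypothetical "source" query
# parameter, so analytics could split AI-overview clicks from blue-link clicks.
from urllib.parse import urlparse, parse_qs

def classify_google_referral(url):
    """Return 'ai_overview', 'blue_link', or 'untagged' for an inbound URL."""
    params = parse_qs(urlparse(url).query)
    source = params.get("source", [None])[0]
    if source == "ai_overview":
        return "ai_overview"
    if source == "blue_link":
        return "blue_link"
    return "untagged"

hits = [
    "https://example.com/articles/pwa?source=ai_overview",
    "https://example.com/articles/pwa?source=blue_link",
    "https://example.com/articles/pwa",
]
for hit in hits:
    print(classify_google_referral(hit))
```

With a tag like this in place, a site could count the two buckets in its own analytics and check Google's "clicks went up" claim against its own data.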

Jason: And he wouldn't. He wouldn't commit to it. To be fair, I think that the CEO committing to that on a podcast is probably not the right--

Chris: It's a little weak. You don't want to be seen being like, "Yes, I agree to do that," on a podcast.

Jason: Well, and what would your team say? They could have very... He talked a bit about the extent to which people game the system based on understanding how Google ranks things, so they need to do some research on that. But this is too big of a change for Google to keep completely close to the vest about. And to the degree that they are, I just have to assume that it's not good and that the predictions are right.

What was it? The Wall Street Journal had somebody saying that it would be a 40% decrease in inbound traffic from Google by the end of this year - or something. Absent Google showing us something that says that all of our suppositions and intuitions about this are wrong, I assume that that's actually where we're headed.

Chris: I mean I just got done saying how untrustworthy the things people say are, especially a claim like that, which seems so contrary to what's right in front of our eyes. You just took the top third of the page to spit this out. In what world are you sending more traffic to the links below it, then? Prove it then.

Jason: [Laughter]

Chris: It just doesn't make any sense. How can you say that?

Jason: [Laughter] Yes.

Chris: Am I going crazy?

Jason: Well, I mean there have been... There are things that are counterintuitive. You know?

Chris: There are. That exists.

Jason: Yeah, so I am open to the possibility that my previously held beliefs are incorrect on the way that this is going to impact search results. But you really do need to prove it here. [Laughter]

I am not trusting this. This isn't even trust but verify. This is like, "I don't trust, so prove it."

Chris: What I do believe is that they have a strong incentive to not eat the traffic of the Web. They are very incentivized to do that. They need other websites to put ads on to make the money that they continue to make. If they stop sending any traffic out to the Web, that's a negative that's not good for Google. So, I don't think that they're going to behave in that way, and I like that. Good. Thanks. Websites are cool.

Jason: Yes. It's weird to think that advertising, which has been such a bane for the Web in terms of its performance and just marring the experience--

Chris: Oh, it might be its savior in the end?

Jason: Yeah. Yeah.

Chris: Oh, I love you, Jason. That's amazing.

Jason: [Laughter]

Chris: [Laughter]

Jason: But then, I mean, Google has that incentive. But none of the other AI models do, right?

Chris: That's true.

Jason: Will they feel the competitive pressure to cannibalize that business in favor of competing with Open AI?

Chris: Too big.

Jason: Yeah. I don't know. I just want to be able to write things and share them and do good work for people and be able to look my kids in the eyes and say, like, "Yeah, we tried to leave the world a better place for you." These are the challenges right now, I guess.

Chris: They're like, "Dad, you drove a Toyota Tacoma." You know?

Jason: [Laughter]


Chris: Whatever. Didn't nail it.

Dave: From my F-150, I will AI-generate them an image of a dinosaur riding a skateboard over a mountain, and I feel like that's like, "Here, kids."


Jason: Here. Let me record a TikTok. I'll show you. [Laughter]

Dave: Yeah. From my truck.

Jason: Yeah.

Dave: I'm recording a TikTok from my truck.

All right, well, on that bombshell, we should probably wrap it up just for time's sake. But Jason, you're like, "I came on, and I did the sad show." I don't feel like that at all.

I feel like it's the sober show. It's just like, "Let's get in. Let's talk about what's going on. Let's look at the data."

Chris: Yeah. Me, too.

Dave: Let's have Chris's intense nihilism. We'll suffer through that.


Dave: Then I feel like you did a good job. [Laughter] But for people who aren't following you and giving you money, how can they do that?

Jason: You can find our writing at I am grigs@front-end... Oh, my gosh.


Jason: Yes, thank you. I totally blanked on that. On Mastodon and then @grigs most places, g-r-i-g-s. I am not on Twitter much for the obvious reasons.

Dave: It's called X now.

Jason: No, it's not.


Jason: There's also a reason why I don't drive a Tesla. Yeah, it is still Twitter to me and will always be.

Dave: Awesome. Well, thank you so much. Yeah, and thank you, dear listener, for downloading this in your podcatcher of choice. Be sure to star, heart, favorite it up. That's how people find out about the show.

Follow us on Mastodon, the same, shoptalkshow there. And then join us over in the D-d-d-d-discord, Chris, do you got anything else you'd like to say?

Chris: Oh! handcrafted by my own fingers and Dave's fingers and our mouths.