Brian Leroux chats with us about building modern web apps using Begin and other cloud services like it including a deep dive on AWS Lambda.
Time Jump Links
- 01:32 Guest introduction
- 04:45 Deployment back in the day
- 11:42 What is Begin?
- 17:34 Walking into Amazon's services page
- 21:56 Sponsor: Cloudways
- 23:28 Managing all the different apps in Amazon
- 27:51 2nd tier cloud providers
- 34:42 Running Rails on Lambda
- 37:57 Sponsor: AWS Amplify
- 39:42 Anticipating parity in cloud providers
- 49:12 Are load balancers needed in Lambda land?
Cloudways - a managed cloud hosting platform that takes care of all hosting complexities to let you focus on building great websites and business.
Cloudways offers you the flexibility and choice of multiple cloud providers (AWS, GCE, DigitalOcean, Linode, and Vultr) with unlimited websites, pay-as-you-go pricing, free SSL, a built-in CDN, 1-click server scaling, and more.
With this you get complete peace of mind because the platform is fully secure and 24/7/365 expert support is there to help you out round the clock.
Want to try out Cloudways? Use the promo code SHOPTALK30 and get 30% OFF.
AWS Amplify is a suite of tools that enables you to build full stack serverless and cloud-based web and mobile apps using your framework or technology of choice on the front end. Using Amplify you can quickly get up and running with things like hosting, authentication, GraphQL, serverless functions, APIs, machine learning, & file storage.
Amplify is built specifically to enable traditionally front-end developers to be successful, because they can use their existing skill set to build real-world full-stack apps that in the past would have required deep knowledge of back-end development, DevOps, and scalable infrastructure.
MANTRA: Just Build Websites!
Dave Rupert: Hey there, Shop-o-maniacs. You're listening to another episode of the ShopTalk Show, a podcast all about front-end Web design and development. I'm Dave--failed deploy--Rupert and with me is Chris--CICD--Coyier.
Chris Coyier: Yeah.
Dave: Hey, Chris. How's it going?
Chris: CCCD Coyier.
Chris: I'm doing fantastically, really, you know, in the face of national crisis. My computer continues to work, so that's great.
This is a show all about building websites and we've been talking about browsers so much. You know, maybe we will. Maybe we won't. This might be finally a show where we dig into some technology and--I don't know--computing and even servers and stuff in a deep way because we have an awesome guest who is very knowledgeable about that entire world, Mr. Brian Leroux. Hey, Brian.
Brian Leroux: Woo-hoo! Hey. I'm stoked to be here.
Chris: Hey. Yeah. Yeah, thanks for coming on. There's so much to start with. Brian has kind of an interesting past and worked on some interesting big projects in the past. Let's start with now and then talk about the past and then circle back around, maybe. [Laughter] We'll see how that goes down.
But right now, you have an interesting startup. You're a co-founder of an app called Begin. Is that right?
Brian: Yeah. Yeah, my day job is working on Begin.com. We do serverless CICD, continuous integration and continuous deployment, which is a mouthful. I think the TL;DR is: we get you up and running on AWS within ten seconds or less.
Chris: AWS. Okay.
Chris: It doesn't say CICD on the homepage. Do you avoid it on purpose?
Brian: Well, this has been a thing for us. We're still trying to figure out exactly what the messaging is. We love what our tech is and we thought we knew who our audience would be. We're learning quickly that it wasn't who we thought it was, so we're probably changing that homepage pretty soon to be a little more webby focused.
Initially, we thought we were more of a DevOps tool but it turns out that the DevOps crowd is a little bit, let's just say, trepidatious about getting onboard with serverless.
Brian: But the front-end dev--
Brian: Yeah, there's a bit of hesitation. I think it might be a step too far for them, but the front-end developer community is stoked. They're all over this serverless thing.
Chris: Stoked. Yeah. Yeah.
Brian: We're trying to figure out what that messaging looks like and appeals to them but we're doing a really terrible job of marketing, if I'm being honest. We've been largely focused on the tech.
Chris: Oh, come on!
Brian: No, we are. [Laughter] We could be doing a better job. There's room to improve, let's say.
Chris: Yeah, well, CICD, I know what CICD is. We've had shows on it before. It stands for continuous integration and continuous deployment. It feels like it's almost become an expectation lately of how things are. I bet a lot of people listening to this show work in an environment where, whether they know of it or think of it in that way, it does it that way anyway.
Brian: Right. Yeah.
Chris: It's kind of like--I don't know--you commit some code and some stuff runs automatically. That's the CI part, I guess. That's how I would think of it.
Chris: Then if all goes well, it just goes live or at least continues along the pipeline and that's the CD part of it.
Brian: That's right. Yeah, and this is kind of like testing was and revision control before it. It's one of those things that, once you have it, you can't live without it. But before you've done it, it seems like extra work.
Brian: I think it's kind of maybe downstream of Heroku, the first people to really start doing GitOps and removing friction from deployment. It used to be an entirely isolated step where I would write my code, I would test my code independently, and then I would throw it over the wall to some QA team and they would throw it over the wall to some ops team. This is all about removing friction and lead time interruption.
Chris: Yeah. Was Heroku the first that had kind of like a command-line tool? You'd type "Heroku deploy" and it would go out. You'd be like, "Oh, cool! That was quick."
Brian: The first one I remember, and I also remember setting up my own Rails apps before that and, after I saw Heroku, I never wanted to do it again myself. [Laughter]
Chris: What was the early days? I'm sure Dave remembered. If you were working on a Rails app many years ago--
Brian: Oh, god.
Chris: --and you wanted to deploy it, was it largely something that somebody did at the command line, like a configured--?
Chris: You'd do it from one individual computer, you'd do it?
Dave: Yeah, you'd merge it all into one computer, spin it up, and make sure it all worked. Then rsync the files over to Linode.
Dave: Which was cool. I mean you get down to it. All these things -- it's a server. It's a CPU in the cloud or CPU somewhere on the planet.
Dave: But you'd just push it over to Linode and then hope that your computer and Linode were close enough in configuration. Yeah, I think Heroku solved a lot of problems.
Chris: Yeah, it'd go live. For some reason, it felt okay that one computer was doing that. I don't know. But at some point, it started to feel a lot better if a cloud computer did it. We got used to the idea that there was a connection between a repo and stuff going live.
Chris: Yeah. I don't know what was in charge of that but, in my world today, I would say Netlify has done a lot towards that goal of pushing. Pushing, I mean Git pushing is the thing that takes me live.
Chris: That working in branches is the thing that I can still do but I'm not going live then. That's just another way to work because people -- the Git workflow is everywhere. Everybody does that, so why don't we tie that to the process of deployment too?
Then if you can get a little bit more out of that, that'd be cool too. Like if a merge request was also automatically a staging environment. Oh, wow! Fancy. You know?
Brian: Right. Yeah!
Brian: That's the kind of thing that I'm really excited about. The old way, I think the predicate for it was databases. We used to run our databases on the same box that we ran our Web server. We would even read and write files on that box.
Then we realized, "Oh, no. We need two boxes." Uh, oh. Wait. How do we connect to the database? It was like, "Oh, we'll separate those things." Once we teased those apart, that created a whole slew of new problems.
Being stateless was a big insight into how we write our code and how we deploy it too. Serverless kind of pushed that to its logical maximum, or maybe local maximum, so we have these tiny, little stateless things that are usually functions. Because we split it up into all these constituent, tiny little functions, we can deploy them in parallel. Better, we can deploy only the ones that changed when they change, so we get massively increased deployment speeds, which means shorter lead time to production.
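The "deploy only what changed" idea can be sketched in a few lines of Node. This is a toy illustration, not Begin's actual implementation; the function names and directory layout below are hypothetical.

```javascript
// Toy sketch: each Lambda maps to a source directory, and only
// functions whose directory contains a changed file get redeployed.
// The per-route folder layout here is made up for illustration.
const functions = {
  'get-index':  'src/http/get-index/',
  'post-login': 'src/http/post-login/',
};

// Return the names of functions that need a redeploy for this commit.
function functionsToDeploy(changedFiles) {
  return Object.entries(functions)
    .filter(([, dir]) => changedFiles.some((file) => file.startsWith(dir)))
    .map(([name]) => name);
}
```

A CI step could feed this something like the output of `git diff --name-only` and then deploy each resulting function in parallel.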
Chris: These are only Lambdas we're talking about, right? Right?
Brian: Yeah. Well, Lambdas and maybe S3 and, maybe for the persistent side, DynamoDB is where I've landed anyways.
Brian: I think there are tons of options. I'm never going to say there's one way to do things. There are lots of ways.
Brian: This is the way I found that works pretty good. I'm risk-averse, so I like Amazon and I'm hosting with Amazon because I'm pretty sure they're going to outlive me, so that'll be good for my timeline.
Chris: Ah, yeah. [Laughter]
Dave: That's nice because, when you get into these functions or Lambdas, I guess, is kind of what AWS calls it, right?
Brian: Right. Yeah.
Dave: You're writing a function like post-edit or something like that or post-get or something. Those are two separate files. Maybe or maybe not, but those are two separate files. Your post-get function should -- I don't know. You're not going to mess with it that often. [Laughter]
Brian: Right. No, that's right, and there's a whole bunch of other side benefits to this type of architecture. We get, for free, not only that parallelism on deploy and startup, but we also get the strong isolation post-deployment. We can secure these endpoints to the least privileged possible. Your get request, for example, probably doesn't need to write anything to the database, so you can lock it down to just reads.
That seems like not that big of a deal, but if you think about a monolith, the most secure it can be is the least secure thing it does. [Laughter] Whereas, with a serverless app, we can really lock it down on a per route basis.
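Brian's per-route lockdown can be pictured as each route carrying its own small list of allowed actions, in the spirit of IAM's Action lists. The route names and actions below are illustrative, not a real Begin or AWS API.

```javascript
// Illustrative per-route permission lists, IAM-policy-style:
// the GET route can only read, so even a compromised GET handler
// has no way to write to the database.
const routePolicies = {
  'GET /notes':  ['dynamodb:GetItem', 'dynamodb:Query'],
  'POST /notes': ['dynamodb:GetItem', 'dynamodb:PutItem'],
};

// Check whether a given route is permitted to perform an action.
const allows = (route, action) =>
  (routePolicies[route] || []).includes(action);
```

Contrast this with a monolith, where every route effectively runs with the union of all these permissions.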
Chris: So much more isolation?
Brian: Yeah, and so this can get bonkers because you can run untrusted code now from userland.
I built a demo app a little while ago called deno.town. Deno is the official runtime by Ryan Dahl, the guy who did Node. He's kind of trying to -- he's doing a second system, as it were.
Dave: The second album.
Dave: The second album! [Laughter]
Brian: It's great because it's Node's Greatest Hits and a lot more stuff, basically. I'm bullish on deno. That's a whole other conversation, but I wanted to get going with it and there was no playground. There was no Web playground for this thing.
I was like, "Well, to teach myself how to use it, I'm just going to build a little playground. I did and it was useful to me." I was like, "Well, maybe I'll just throw this on the Internet."
How wild is that that I could just throw a brand new, untrusted runtime mode on the Internet? I have no concerns about what someone could do with that thing because it's so locked down. This isn't a capability that we've really had before.
Chris: Serverless is obviously of great interest to us at CodePen. We run a lot of our infrastructure that way, more and more all the time just because of reasons like that. Security is a big deal.
Chris: People tend to know that. It comes in the box with some of this serverless stuff. But also, some speed and some isolation is great and the fact that it's $0.20 doesn't hurt either, you know.
Brian: Yeah. The cost is kind of the -- maybe it's the thing that gets people really excited. You get a million invocations for free and then I think you pay something like $0.10 per million after that.
Chris: Come on. That's whack. That's wild.
Dave: That's a lot.
Brian: It's so cheap.
Chris: It's a lot.
Dave: It seems like a lot.
Brian: Yeah, and so we put them behind CloudFront.
Chris: They get invoked even less.
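The napkin math behind that excitement, assuming AWS's published request price of roughly $0.20 per million invocations after a million free per month (compute duration is billed separately and ignored here):

```javascript
// Back-of-envelope Lambda request pricing: first million invocations
// per month free, then ~$0.20 per million requests. Duration charges
// are a separate line item and not modeled here.
const FREE_INVOCATIONS = 1_000_000;
const PRICE_PER_MILLION = 0.20;

function requestCost(invocations) {
  const billable = Math.max(0, invocations - FREE_INVOCATIONS);
  return (billable / 1_000_000) * PRICE_PER_MILLION;
}
```

Putting CloudFront in front, as Brian describes, drops the invocation count further, since cached responses never reach the function.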
Chris: We've gotten kind of deep in the weeds already here, so maybe we should talk about Begin a little bit more--
Chris: --so we can figure out what you build with it and why and stuff. I did it. I think Dave did too, kind of kicked the tires kind of thing.
Brian: Cool. What'd you think?
Chris: You log in and -- well, oh, first of all, it's a little mind-blowing. You've got to wrap your mind around it first. Let's just see. I log in. I auth with GitHub. Then there's kind of a builder kind of thing like whatever, a wizard - nerds like to call it.
Chris: Which is kind of cool. I've seen another one.
Chris: You know what I mean.
Dave: Good. Good. Yep. Yep.
Chris: Step one. Step two. Step three. But it's not designed like that. It's more like, click the options. It's all nicely designed like, in the future, we have nice wizards.
Chris: It reminds me of Stackbit, where that's like a JAMstack version of the same thing where you're like, "I want this framework and this data store and this theme," and then you click the button and it wires it all up together. It gives you a repo that's ready to go like that. It reminds me of that a little bit because you're like, "I want the GraphQL starter with Deno instead of Node or whatever." You pick some stuff and then, real quickly behind the scenes, it made me a repo, which I only realized when I looked. I think I got an email that's like, "Here's your new repo. Congratulations."
Chris: Not from you, but even from GitHub, I think does that. Anyway. Whatever. But what's compelling is, right after that, then you see this series of things that it's doing. This is the clutch moment where you see little yellow, orange lights that are like, "I'm doing this now," and then it turns green when it's done. That's all the CICD stuff happening, right? Like, "I've got to provision some stuff. I've got to deploy it."
Chris: It's letting you know as it's doing those things. That's the moment where you're like, "Oh, I get it."
Brian: This thing is so interesting because you can learn these lessons over and over and intellectually know them, but you still never know until you ship. We shipped Begin in March of last year. It was really focused on getting Lambdas up on AWS as fast as possible. A lot of people tried and a bunch of people gave us feedback. The feedback was, "Neat. I don't know what to do."
Brian: We were like, "Oh, yeah. Okay," so we've got to build onboarding paths. We built these example apps and onboarding paths. You don't need to use anything or you can use whatever you want. Really, all we're doing is wiring up Lambda functions to respond to HTTP events.
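"Wiring up Lambda functions to respond to HTTP events" boils down to handlers like this: a minimal sketch using the standard Node.js Lambda signature, with an event shaped like what API Gateway delivers (the query parameter name is made up).

```javascript
// Minimal Node.js Lambda handler for an HTTP event: receives the
// event API Gateway delivers, returns a status code, headers, and body.
const handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json; charset=utf8' },
    body: JSON.stringify({ message: `hello ${name}` }),
  };
};

exports.handler = handler; // Lambda looks this export up by name
```

Everything Begin layers on top, routing, permissions, deploys, exists to get functions like this answering real URLs.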
Brian: Yeah, so--
Chris: Lambda, if you wanted to use it raw, it's kind of a pain in the butt, right?
Brian: Oh, my god, yeah.
Chris: If you just are like, "I want to make a Lambda," you go to aws.com and find it in their sea of services they offer. I think there might be some kind of online editor for them, but you definitely wouldn't want to do that, right? That's an odd way to manage code. You need some way to push these things up to it. I remember there was like Claudia.js would kind of help with it.
Brian: I love Claudia. Yeah.
Chris: These days it's be Claire.
Chris: Yeah, we still have a bunch of stuff on Claudia on CodePen but have since moved a little bit more towards serverless.com because serverless, it's a little bit deeper. It can do your Ruby ones too.
Chris: I think a lot of us just assume that a Lambda is Node.
Chris: I would bet a lot of them are, but it's not the only language that Lambdas support.
Chris: There's more to it.
Brian: Yeah, you can do a custom runtime. We've built the Deno custom runtime, which I'm hoping the Deno project takes over from me, officially.
Chris: Oh, so Deno is not totally a--?
Brian: Not a first-class citizen.
Chris: First-class citizen, no?
Brian: Not yet, but the way they've built it, extending it is so trivial now, so you can bring your own runtime. Many people are doing that themselves at this point. If you wanted to run Node but with ES Modules, you're out of luck. But if you bring your own runtime, you can do that.
Chris: That's funny because it's not like they're little dockers. That's what they're not, right?
Brian: Yeah. No, they're a thing called Firecracker. It's a micro VM. It's the same idea as a container but it's a lot more low-level and just a lot less stuff. It's actually a VM. It's a micro VM I think is what Amazon calls them.
Firecracker is written in Rust, which doesn't imbue it with any natural properties that make it good, but it does have better properties for security and isolation done right, and I believe that team is doing it right. Their cold start times are approaching zero and probably will approach zero with enough time. Yeah, you can bring your own runtime, run whatever you want. You're right.
Chris: Meaning like, oh, PHP isn't one they offer just out of the box but it's not particularly difficult to make that work.
Brian: No, and I'm actually pretty big on it myself. I'd like to see Ruby, PHP, Python, all of those be first-class. There's a company called Stackery.io that has a PHP runtime that's pretty popular and people should check it out.
Yeah, it's an exciting time because we can take these little tiny functions and use them for the job they're best at. If you want to have one endpoint in Ruby but another one in Python and another one in PHP, you can do that now.
Chris: Yeah. The same repo. Same everything.
Chris: Just different. Yeah.
Chris: That's pretty wild -- pretty wild. Okay, so the point is you need something to help manage your Lambdas. You just do. You just have to. Begin is one of those things that, yeah, you're going to want it to help you get them there and get the permissions all right and--I don't know--help you with staging versions of them and all that stuff, right?
Brian: All that stuff, so our hypothesis for Begin, and I'm sure a lot of the other layer-two cloud providers would agree, is that Amazon continues to get bigger and bigger. They continue to grow faster and faster. They will continue to add more and more features. They're not going to get more tractable to use. You're going to still walk into the console and go, "Whoa! There are 370 services."
Brian: "How do I compose these things?"
Chris: I did it today.
Brian: [Laughter] Yeah.
Chris: I did it today. This is what happened today. I woke up and there was a problem with a user-facing thing. I was like, "Oh, gees. What the hell?" You could kind of see in the console that something would go wrong and it gave an error in the console that was like, it found an angle bracket in JSON.
Chris: It was one of those errors that was like, "Oh, I tried to parse this JSON, but I found this angle bracket. That's weird." I couldn't really figure it out, but you could kind of see. I need to see some logs. I, unfortunately, didn't have a lot of deep knowledge of who set this up, where it is, and how to find those logs, so I had to log into the stupid AWS console, find the right--
Brian: Oh, my gosh.
Chris: I even had to guess at the region it was deployed in--
Chris: --because I went to the region I thought it was and there was nothing in there. I'm like, crap! Oh, I guess we used us-west-2 not 3 or whatever. Then I found the stupid Lambda and then had to dig around in there to see if logging was set up, which it was, I guess. I don't know if you get that automatically or not, but it was set up. What do they call it, cloud watch or something?
Chris: Then found the right function, which I just had to guess at what it was. Then found the logs. Then opened the logs and the logs had some gobbledygook, but I found the kind of error and I found the URL. I was like, ooh! So, something between the Lambda. It was calling out to another service. While it was calling out to another service, the other service was intercepted somehow and it was getting a captcha page.
Chris: It turns out that in between a Lambda talking to a server, Cloudflare got involved and threw up a captcha.
Brian: The HTML tried to parse it. Yeah. [Laughter]
Chris: Parse the captcha page. Thank God I was able to get into Cloudflare and put a rule up. I was like, oh, what a hard thing to do. I don't know if Begin necessarily helps debug that, but maybe, right? Because there'd be some logging that I could see without it being so obtuse.
Brian: Yeah, you're only one click away from your logs, at least.
Brian: This is kind of a distributed systems problem too, you know. We're combining all these small pieces that are decoupled or loosely coupled. This is just sort of the craft now. We need really good logging and error reporting.
The DevOps-y crowd would say this is not what you want. What you want is observability, so you catch the bugs before your customers do. Either way, you need something that helps you sift through this massive amount of data that you're probably creating with these logs to identify these problems.
Brian: Root-cause them if you can. Yeah, this is--
Chris: I mean this one would have been production-only anyway. It would work fine in dev because you wouldn't have this firewall in between, probably.
Brian: Yeah, right.
Chris: Anyway, not to make this about my problems.
Brian: No, no. It's relevant and this is the issue because I don't think this gets necessarily easier. We probably just build ourselves better tools for the discovery of these types of issues and the mitigation of them. It kind of is what it is. They're not at a point yet where we see the tooling running locally and giving you all the kind of feedback that you want in real-time. It is all about these short feedback loops.
Remember when mobile was just getting going. Everyone would say, "Oh, you'll never be able to do this on your machine. You're going to need to build your phone in order to test." That was just nonsense. Of course, eventually, the emulators and the simulators were going to get good enough. They just weren't in 2008.
It's the same state right now in the cloud. There's a lot of people saying you can't run this locally and there's no way you could replicate this stuff, but it's just because--
Brian: --our environments are crummy right now and it's wild west, which is kind of awesome because I love this space. You can get into it and start building these tools yourself and finding these problems and fixing them for a pretty large and growing group of people.
[Banjo music starts]
Chris: This episode of ShopTalk Show is brought to you in part by Cloudways. Literally, cloudways.com. It's hosting that is backed by different cloud providers. Let's say you want to spin up your Laravel site on AWS. You know you want it to be on AWS because it's got the power and reliability and stuff that you want.
But let me tell you, AWS is a smurfing company. I don't want to hand do that myself. Me personally, I don't want to. I'm much happier letting Cloudways do it, buying that through them and letting them be my support and help through all that stuff. It's a managed cloud hosting platform. It takes care of all the complexities. It lets you just focus on building your great website, your great business, you know.
You got your choice. There's AWS, Google Cloud Engine, DigitalOcean, Linode. There are others to choose from. Unlimited websites, pay as you go, free SSL, built-in CDN, that kind of stuff that I definitely want. I definitely want other people handling that stuff. Just everything that I want in hosting: I know I need it; I know it's important; and I don't want to deal with it.
With this, you get peace of mind, fully secure, 24/7/365 expert support around the clock. Very cool. They have a discount for you. If you want to try out Cloudways, the discount code is SHOPTALK30 and you get 30% off your Cloudways plan. Generous of them. Check them out, Cloudways.com.
[Banjo music stops]
Dave: I think this is cool. There is so much power in AWS. I think we were talking before the show here. One of the hardest things I did last year was set up an AWS with load balancing, automatic scaling, and things like that. I think the second hardest thing I did was set up an Azure to do stuff like that too, last year.
Chris: Setting up all these, these are very powerful tools. But, too, I think, Brian you had said it's like different teams manage different products.
Brian: Oh, yeah. Yeah, yeah.
Dave: Route 53, the domain routing tool, the S3 storage bucket tool, the DynamoDB tool, and what was the one, the Lambda function tool, those are all different tools. The log tool is a different tool.
Dave: Different team inside of Amazon, right?
Brian: Yeah, they've got the two-pizza team thing. I always thought that that was just a nice thing the managers say. It turns out it's true.
Brian: This company with literally something like, I think, tens of thousands of employees are all in these two-pizza teams. As soon as you realize that it's true, the console makes sense. All of a sudden, all these completely disparate products that have totally different UIs--
Brian: --with completely different conventions make sense because these are small teams moving really fast. Yeah, this isn't lost on Azure. Microsoft knows this stuff and so they're kind of doing the same playbook but it's also got some predictable results. We're getting a lot of sprawl, a lot of difficulty in the implementation, a lot of inconsistency in the implementation side and the integration side. Yeah, it's kind of just how it goes.
Dave: Yeah. You log in. [Laughter] Azure, God bless them, no, but you log in and it's nice. But then, man, I'm just lost.
Brian: Oh, I'm the same way.
Dave: Then I try to find my logs and I'm just lost. I think it's a hard problem to solve. You have so many things and you're trying to put them all together. Managers of different departments are fighting for visibility.
Dave: It seems like what you're doing with Begin, and you have this other kind of open-source CLI thing called Architect, which I'd love to talk about.
Brian: Hmm. Yeah.
Dave: But what you're doing with this is trying to make this whole DevOps portion of it go away, kind of just sort of diminish. But Begin is kind of more angled towards serverless, I would guess, right?
Brian: Yeah. No, that's right.
Chris: It's kind of only serverless, right?
Dave: Only serverless.
Brian: We're trying to push the complexity down into the invisible layer of the value chain and imbue anybody with these superpowers that the platform offers. This barrier to entry is largely usability at the moment. It's not really technical. It's more about patience.
I used to joke about this a long time ago that there are two kinds of hard problems. There's the hard problem that just takes brute force to solve, like I'm going to move a beach one grain of sand at a time. Not very intellectually taxing. Then there are hard problems that are intellectually taxing, like I'm going to understand the fundamental laws behind gravity. That's going to take some math.
Dave: Are we interviewing for Google right now?
Brian: No, no, no!
Brian: The previous one is my kind of problem because the way you solve that beach problem is much easier than the other one. [Laughter] I can't make myself a theoretical physicist overnight but I can move a grain of sand every day.
Chris: Yeah, you can chip away at it, right?
Brian: Yeah, exactly. Yeah. I think this is one of those problems. I also think that the cloud providers know this. They're not unaware that they have this issue. They're just trying to race to get as much market dominance on the capability side right now.
Chris: Right. They've already had so much success that it's like, well, they're doing something right, so it's hard to be that critical.
Brian: Yeah. No, for sure.
Dave: Well, and even Heroku. It's kind of just a fancy AWS GUI. [Laughter]
Chris: Is Heroku a second-tier cloud provider too? I hadn't heard that term before, but that's what you think of yourself as, huh?
Brian: Well, we consider ourselves -- we're definitely layer-two, but we're CICD, so we deploy to AWS and, in our paid tier, we deploy to your AWS. We don't care about that part. We care about removing the friction part.
Brian: Layer two, like a Heroku or anybody who is doing hosting themselves, basically. If my code runs on Heroku it, in theory, will run on AWS, but that'll take a minute for me to port it over. With Begin, our goal is that you can leave any time and you're one command away from running on your own AWS.
Chris: If somebody comes to -- it might feel like you should just build your whole app here, but I don't know. You don't necessarily think that, right? You don't have to. You can do whatever you want here. This can be a little part of it. You could just run a couple of services over here. This is just to help you get some stuff.
Brian: Yeah, it's just to help you get on there and I think you will find it to be a useful place to build. Begin itself is built on Begin, with Architect. You can build a pretty -- well, a very sophisticated app on this platform that deploys really fast.
We want to make sure, though, that we are being honest with ourselves here on what we're building. We're not building a hosting platform because I think trying to compete with Amazon at their own game would be a bad idea for a small startup and, also, not something anyone wants.
Brian: I loved this product and I'm not going to name it out because I loved it so much and I was hosting with them for quite a while. I was at my previous incarnation of this startup. We got a letter or an email in May that we had one month to migrate because they were shutting down.
Brian: That was actually the predicate for me moving us, a lot of our stuff, over to AWS directly because it was unplanned work. It was a friend's wedding that I had to miss to get through that unplanned work because, if I didn't, then we would have had downtime and I just don't want to ever go through that process again.
It's pretty clear to me that there's a de facto winner in this world and it is AWS. I'm sure that Azure is going to catch up and I'm looking forward to that day. Where I am now, this is just the pragmatic thing to do.
Chris: You can build whole apps here because a Lambda can return HTML too. Sometimes I think of these Lambdas as, like, not Web servers.
Chris: Because they have other jobs, but they can do that too.
Brian: Well, this is the weird thing, one of the weird things, one of the many weird things we're doing, I should say. [Laughter] We realized this. We were using AWS API Gateway and Lambda for a JSON API, like most people.
Brian: It occurred to us that JSON is text just as much as HTML is.
Chris: Indeed it is.
Brian: Yeah, so we started to do our server render and everything through it and I'm never going back. It's great for that. There's no reason not to, in my eyes.
Chris: Particularly with CloudFront in front of it, right?
Brian: Yeah, and this doesn't opt you out of the JAMstack pattern. You can prerender and Lambda was literally designed to work with S3, so you get crazy throughput, even if you prerender and you push that HTML content through a Lambda through CloudFront. You can do literally gigs a second if you want. We're server rendering a Preact app and that's in a Lambda function. We actually do it right now on every request, believe it or not, and it's plenty fast for us.
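Since "JSON is text just as much as HTML is," the same handler shape can server-render a page. A minimal sketch (the markup and function names are made up; a real app would render a framework component here, the way Begin server-renders Preact):

```javascript
// Same Lambda signature as a JSON endpoint, but the body is
// server-rendered HTML. The only real difference is the content-type.
const renderPage = (title) =>
  `<!doctype html><html><body><h1>${title}</h1></body></html>`;

const handler = async () => ({
  statusCode: 200,
  headers: { 'content-type': 'text/html; charset=utf8' },
  body: renderPage('Hello from a Lambda'),
});

exports.handler = handler;
```

With CloudFront in front of it, most requests can be served from cache and never invoke the function at all.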
Chris: What do you think, like right now, usage of Begin users? Is most people just doing the whole kit and caboodle over there or is there a lot of just like, "Oh, I'm just using this as the home for my serverless stuff"?
Brian: We have seen all kinds of weird stuff. As usage has picked up, it's getting quite diverse, so we've seen government. We've seen a lot of personal website kind of stuff, people kicking the tires building their first GraphQL endpoint, that kind of thing.
Brian: Then there are unexpected ones that are just way out in left field and, not surprisingly, we're seeing a lot of people building stuff related to COVID-19 this week. Yeah, it's all over the map. You've got an infinitely scaling little function thing and you can deploy it in ten seconds, so people are building things and then they realize how quick it is, so they iterate quite a bit. They're experimenting, doing all kinds of weird stuff.
Wes Bos, last week, built a hit counter, a Lambda function that returns an image. The image auto-increments itself.
Brian: Which is like a total GeoCities throwback thing. Of course, he tweeted it and then, of course, a whole bunch of kids were like, "Oh, I'm going to DDoS Begin." They tried and they failed.
Brian: They hit us with, like--I don't know--the thing had like half a million hits by the end of the day. It didn't even touch our….
Chris: Half a million. Come on, people.
Brian: It was cute. It was cute and it definitely--
Brian: They brought it down because we lock free-tier accounts to only a smaller amount of concurrency that I'm not going to share on the podcast. [Laughter]
Brian: It's definitely discoverable. [Laughter]
Chris: Okay. It didn't go down because the tech went down. It went down because you took it down because that's the nature of--
Brian: No, no, no. Well, we just have it -- there's a natural concurrency limit on Lambdas of about a thousand and then you can get that up, I think, probably, infinitely. Well, not infinitely. You can up it as high as your credit card limit allows.
Brian: But these things are cheap too, remember. If you rate limit it with a WAF, it becomes unaffordable for the person trying to attack you. We can definitely scale transparently to whatever kind of capacity demands that you want to meet.
Chris: So, we're talking concurrency now, like a thousand, it being hit a thousand times at the same time, which is an awful lot for even a pretty large app.
Brian: Exactly. Yeah. Yeah, so these things scale up and down and the database does too. Yeah, it's a new model and it's kind of how it should be. It's how I wanted it to always work. I don't want to care about load balancing and IP tables.
Brian: I just want to write….
Chris: Amazon has EC2, too, right? That's probably maybe their biggest product. Maybe? I don't know. It seems probably a pretty big one. You're spinning up more traditional servers. So, if you want to run your Rails app somewhere, Rails has to run on EC2. You can't run Rails on a Lambda, I don't think. Is that weird? I don't think you could.
Brian: No, people are. You can.
Brian: Yeah. [Laughter] Yeah, I don't think it's advisable but you can.
Brian: The one thing that we learned is, over five megabytes payload size, you will have greater than one second cold starts. If you're under five megabytes, you will have sub-second cold starts.
Chris: Five megabytes of what, of just stuff?
Brian: Just stuff. Yeah, that Lambda function.
Brian: Actually, I tested this with a giant GIF and it still was a slow cold start.
Chris: Oh, just really literally stuff, just bytes.
Brian: Yeah, just bytes.
Chris: You can kind of -- like for one thing, you can get a headless Chrome in there, but just barely, right?
Brian: Yeah, people are doing that.
Chris: That's a pretty common one I find because it's so appealing to be like, hit this endpoint and have it make me a PDF or whatever. You kind of need a headless Chrome thing to do that but that's fairly, fairly big, but it's still cool that it's possible.
Brian: Maybe you don't care. If you're building a PDF, if it takes a second, whatever. As long as the homepage loaded fast. That's kind of the thing.
Brian: This is another thought experiment I like to share with people. When Lambda launched, you had a three-second window. Then it shut down. You had 128 megabytes of memory.
Brian: I can't remember what the temp directory was.
Chris: Let me guess. It's gone way up.
Brian: Yeah, so now you get 15 minutes of execution time.
Chris: Fifteen minutes!
Dave: Whoa! I'm going to compile video.
Dave: That's great.
Brian: Actually, I'm not kidding you.
Brian: People have figured out how to parallelize transcoding video.
Brian: This number keeps going up and the other number, the cold start number, keeps going down. There is no reason to believe that those don't hit zero with enough time.
Chris: Yeah, right. Well, imagine if you can run headless Chrome. Then you're like, "Well, why don't I run Puppeteer Jest?" Right?
Chris: To control it but run my -- but instead of running the whole test suite, run one test per Lambda. You know?
Brian: Right, in parallel. The whole "put Rails in a Lambda function" thing seems inadvisable and bad and you should never do that but, on a longer timeframe, that actually makes a lot of sense to me. I can see that things will go that way. I'm not sure when, but I'm hoping it will go that way, anyways. [Laughter]
Chris: Sure. Yeah. Of all those services in that huge dropdown you open on the AWS console, the big future one is that Lambda tab.
Brian: Yeah, it's an interesting one and this is also not lost on other industry players. Compute is a pretty great place to be if you're a cloud provider. Microsoft has a functions product. Google has one. Alibaba has one.
Chris: Yeah, let's talk about that. Alibaba does, really?
Brian: Yeah. No, yeah. Yeah.
Brian: Tencent has one, so China is going to be a big cloud player.

00:37:57
[Banjo music starts]
Chris: This episode of ShopTalk Show is brought to you in part by AWS Amplify. You know AWS, right? Amazon Web Services, it powers most of the Internet, it feels like. There are a ton of things that go in the AWS bucket like EC2 allows you to spin up servers of your choice and it has all kinds of configuration, S3 is for file storage, and Lambda is for running cloud functions - all kinds of stuff that, individually, you can set up, use, and are great. There is so much more than that. There are a ton of different things AWS does.
AWS Amplify is kind of a package of tools to help you build full-stack apps for the Web. It's like--I don't know--just give me the stuff that I need that usually, you need to build an app. Amplify is hosting. You need Web hosting. It's got that. It's got authentication for logins for your users. It's got GraphQL as a first-class citizen of it. It's got serverless functions like I need the Lambda thing. I want to run some code in the cloud to hit APIs and do whatever else I need to. And it's got file storage if you need it. It's got some machine learning stuff in there if you need it.
Amplify is this easy to use, full-stack framework for getting started quick with building Web apps. It's really cool. The auth stuff alone is cool. It's just a few lines of code in there.
GraphQL has taken over the world of how to get things from a database, put things back in a database, really front-end development-friendly way to do database stuff. Love GraphQL. It's just built-in as a first-class citizen. It's this scalable API. You don't have to provision your own servers. It just does it up for you. Pretty cool.
AWS Amplify is really cool. Definitely worth checking out, especially as a front-end developer. Check all that out.
[Banjo music stops]

00:39:53
Chris: If I write some code, I might write module.exports, blah-blah-blah, return Hello, World or whatever. I write my little business logic and I put it in. That's mostly what I do with these functions.
Chris: Or more likely, it has a couple of require statements at the top because I'm pulling some crap off NPM that does something cool for me. You know? I don't know what it is. In the case of CodePen, I'm sure you can imagine. It's stuff like code formatters, code linters, and code processors that do all this stuff. It's very, very, very useful for us. It's fast, secure, and all that stuff. But you know -- all kinds of stuff. Who knows? Little renderers too and whatever. Lambda is sweet.
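The shape Chris is describing might look like this minimal sketch; the npm formatter dependency he mentions is stood in for by Node's built-in JSON pretty-printing so the example stays self-contained:

```javascript
// Sketch of the handler shape Chris describes: a little business logic,
// usually with a few require() calls at the top pulling helpers off npm.
// Here a built-in JSON pretty-printer stands in for a real code formatter.
const handler = async (event) => {
  const input = JSON.parse(event.body || '{}');
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(input, null, 2) // "process some stuff", return it
  };
};
exports.handler = handler;
```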
Brian: Yeah, the use cases are bizarre.
Chris: I don't care where you run it. Okay, it runs on Lambda. That's good to know. But really, I just wrote a Node function. When I run it locally and run all my tests and stuff, I'm just running Node. I don't care where you run it.
I would think, if I was you, I was running on Lambda and then--I don't know--Google came knocking on my door and said, "We'll run all that. We'll run those for you but we'll do it for a quarter of the price," I'd be like, mmm-okay. You know? I don't know. Do you watch that stuff?
Brian: No, of course, I do. Yeah. Yeah, I'm anticipating that we're going to see some parity happen, but it hasn't yet.
The other cloud players are still a little bit behind on this one. They've done a container-based solution at both Microsoft and Google. They bundle all your functions in one container.
The programming model feels the same. You're still writing a function but the deployment model and the semantics of the deployment model are totally different. It's back to Rails days where we have load-balanced containers. You don't get the delta updates, you don't get the isolation of the security, and you don't get the blazing fast cold starts.
Chris: Wow! It's a lot worse! It's not a little worse.
Brian: Yeah. Well, from my perspective, and I'm buying all into this approach of these tiny, little, isolated, zero cold starting, stateless functions idea. The reality of the industry is that we're still back in stateful, long-lived workloads.
Over a longer period of time, I totally anticipate both Google and Microsoft are going to have this epiphany and realize that they need to build either a Firecracker competitor or adopt it themselves and sell a reseller version of it. Either one of those outcomes is highly likely to me, so I'm kind of waiting for them to do that before I make any calls on what cloud portability looks like because, given the historical precedent, we think we know who the big players are but I don't think we do.
Back in 2008, I used to think that Blackberry was going to be a big deal in mobile today. I was way wrong on that one. But if you're a Canadian in 2008, that was a very reasonable assumption to make and a lot of people are making the very reasonable assumption that Google and Microsoft matter in this battle. Yeah. [Laughter] I think we've got to wait and see a minute what happens.
Chris: Hmm. Yeah.
Dave: Yeah, it could be like Alibaba serverless or something shows up.
Dave: That's just what we use now, or something.
Chris: But that's one layer. Isn't the other layer Node? To some degree, if you're just writing Node code that some other player can come along, but it's not that bad for us because we still wrote our stuff in Node. If the Node layer changed, that would be even weirder.
Brian: Yeah. No, this is real and possibly also the way to get portability. The final boss for this will still be your database anyway. You could have the blazing fast, ubiquitous compute layer but you're still going to want to have that as close as possible to your database to have low latency.

00:43:40
Chris: We should talk databases in a minute, but it's interesting to hang out on this cross-company thing for a minute.
Chris: I think we've all seen this just in past conversation, but there's this famous gist, or at least it's famous to me. It doesn't even have a thousand stars, but I love this. A lot of this stuff I know thanks to my coworker Alex, too, by the way, who is just very deep in this world and also extremely bullish on the, like, Amazon and AWS Lambda, like, this is where things are going and it unlocks a lot of awesome capability.
Anyway, a while back, he pointed me to this Steve Yegge or something--
Chris: He has this Google platform rant.
Chris: It's nine years ago, he wrote this thing. It feels like he could have written it any day because it feels very prescient.
Brian: Yeah, he fully called it.
Chris: Really called it.
Chris: It's a beautiful essay, I think, and it's really … because he worked for both companies, right? This is what it says in a nutshell, at least to me. I'm sure other people could take different parts about it.
He says, "Google does everything right and Amazon does everything wrong, but Amazon does one thing right that matters so much that nothing else matters and it's that they dogfood everything. Every part of their system communicates with APIs to each other. Everything has to work because, if it doesn't, it falls apart." You know what I mean?
Chris: Everything is dependent on these APIs and that's so fundamental that they use their own stuff for everything that they build that it all works because it has to.
Chris: Google doesn't do that and that, thus, they're behind. Yeah. Sorry. Go ahead.
Brian: Oh, I was going to say their first customer is Amazon.com. All of this stuff is running Amazon.com, so if you're wondering, "Will DynamoDB scale for me?" the answer is yes. [Laughter] It's going to have the same capabilities as Amazon's shopping cart and they're doing pretty good. It's a wonderful insight that they had early on that they could build in these tiny teams that only communicate through published APIs with strong login and strong security.
They also got lucky in that no one competed with them for about seven years. Amazon Web Services launched and it kind of hit the floor like a wet towel. Everybody thought it was like, "Oh, I'm not going to host with a retailer. Google is going to be the one that does this because they've got the best infrastructure." Everyone just ignored them for many years and they got such a big lead on the technology and the infrastructure rollout. But the sleeping giants are awake now and the investment is happening in a big way.
Chris: Right, but even if you're Google and you say, "Okay, we're going to do this. We're going to compete and we're going to win this race," it's one thing to say, "Okay, well, we know what the customers want now. This industry has shaken out a little bit. Let's build it," that still doesn't work unless you dogfood it.
Chris: Unless you really put it through the wringer yourself, then you may not have a chance of catching up. But we'll see. They still could. It's just, at the moment it doesn't feel like that.
Brian: No. I mean I think they're going to be fine. I think Azure is definitely the other one to watch if you're paying attention to two clouds. Gartner and Forrester have just released studies on their respective kind of views on the cloud and Forrester's was called something about the analysis of…. I'll try and send you the link. It's a great read and it breaks it all down. The sleeper there for me was, Amazon is number one. Number two is Azure. This was expected. Number three is Tencent for cloud functions. I was like, oh, my god. Okay. I've got to get on this Chinese cloud thing because I'm totally asleep at the wheel. I would have thought that that would have been Google for sure. But the analysts are now calling Chinese hosting to be ahead of even Google, so it's early days.
It was the same thing with mobile. I remember Blackberry and Nokia and Microsoft making fun of Apple and Google's entry into mobile. Here we are ten years later and nobody talks about Nokia or Blackberry anymore.

00:48:01
Chris: You weren't just a casual observer of that market either, right? You were involved with PhoneGap.
Brian: Yes. Yep. That definitely deeply informs a lot of my thinking on this stuff. I weirdly ended up working on mobile for about ten years and it was awesome. [Laughter] I loved it. It was a complete accident, but it was a good accident of history. I was born in the right place and time for both the Blackberry to get popular in Canada and then for the iPhone to dethrone it and to be a Web developer starting in the '90s to today.
Yeah, we accidentally fell into that PhoneGap thing. Interestingly, that led me to serverless, too.
Chris: Did it?
Chris: Load-balancers, are they totally irrelevant in Lambda land? They kind of are, right?
Brian: In Lambda land, yeah. Although, you can use an Elastic Load Balancer or Application Load Balancer as a way to call Lambdas. There are situations where you might want to do that.
One situation might be that you're moving an app over that's already behind ALB. You're just taking an endpoint at a time or something like that. Yeah, there are still use cases for it, but this is definitely something I would classify as undifferentiated heavy lifting.
Amazon does orchestrate a load-balanced operating system as a matter of course for their Amazon.com property, so there's no sense in me doing that if they rent out that capability. I'd rather outsource that to them. They're a bigger provider. They can do a better job of it than I will and I can focus on my app logic and my runtime code.
Chris: Yeah, the more of that last thing the better, right?
Chris: I don't want to think about anything. Almost nothing do I want to think about.
Brian: Yeah. That's the goal state, right? [Laughter] I want to go home at 5:00. I want to know that, this weekend, my servers aren't going to blow up for some unforeseen reason.
Chris: Yeah, they just can't.
Brian: Yeah. Yeah.

00:50:38
Chris: We alluded to the data battle a little bit. Begin does not have any opinion, I don't think, about where you keep your data, right? Nor does it necessarily help you with that. Is that right? Not that that's a weakness, but nobody does.
Brian: No. We do. We're just terrible at marketing it and haven't made this obvious.
Brian: You have, for your app, a DynamoDB table sitting there.
Chris: Oh! You just automatically get a DynamoDB table? Oh, wow!
Brian: You do, but you have to use a client that we provide called Begin Data and it only has six methods. We modeled it after Redis a little bit. You can get, you can set, you can destroy, and you can increment or decrement a property on a JSON object with an atomic counter.
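As a rough sketch of that API surface -- not the real Begin Data client, just an in-memory stand-in to show the Redis-flavored shape Brian describes. The sixth method shown, `count`, is an assumption, since only five are named here; the real client backs all of this with DynamoDB:

```javascript
// In-memory stand-in for a six-method, Redis-flavored key-value client
// like the Begin Data one described above. The real client talks to
// DynamoDB; this sketch only illustrates the API surface.
const store = new Map();
const data = {
  async get(key)        { return store.get(key); },
  async set(key, value) { store.set(key, value); return value; },
  async destroy(key)    { return store.delete(key); },
  async incr(key, prop) {
    const obj = store.get(key) || {};
    obj[prop] = (obj[prop] || 0) + 1; // atomic counter in the real client
    store.set(key, obj);
    return obj[prop];
  },
  async decr(key, prop) {
    const obj = store.get(key) || {};
    obj[prop] = (obj[prop] || 0) - 1;
    store.set(key, obj);
    return obj[prop];
  },
  async count()         { return store.size; } // assumed sixth method
};
```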
Chris: It's CRUD with increment and decrement crudded on?
Brian: Yeah, so we wanted to give people a key-value store and we initially gave folks access to Dynamo. They were like, "Whoa! This is really hard to use and way too complicated for my use case. I just want to use X."
Brian: X being whatever, Postgres or FaunaDB, perhaps, or whatever.
Brian: You can do that. We're not stopping you from doing that. But we do think, for Lambda anyway, you want to use DynamoDB. It's got the best performance out of all the databases and it's designed for this use case. Yeah, we just give you access to that. You can read and write data to and from it.
Chris: It goes on Dynamo but you're limited in its powers or something?
Brian: Yeah, you have to use this client. The reason we do that is because this is an interesting capability of DynamoDB. You can lock down a Dynamo table to a row and an attribute if you want. In our free tier, we have a Dynamo table that everyone is on but every app is only scoped to read and write certain rows from that table.
Brian: I know.
Chris: That's wild.
Brian: Isn't that crazy? You could never do this before but you can do this with Dynamo.
Chris: It almost skeeves me out a little bit, not that it should. Don't listen to me. But the fact that my data is just sitting in the next row down as some other app.
Brian: Yeah. It is.
Chris: Who cares.
Brian: It would be in any kind of shared host anyway.
Chris: Right. Right.
Brian: The solution to this is to use your Amazon account to upgrade and we'll deploy to your AWS and you can use as many Dynamo tables as you want without fear. But we wanted to give people this capability because, eventually, we're going to be opening up other things that you can do with AWS Lambda that require state. Yeah, this is the way to go.
The other nice thing about Dynamo, you get guaranteed single-digit millisecond latency. No matter how many rows you have in that database, it will always return within nine milliseconds. That's a big deal. Any other database, that's a variable number that usually is measured in seconds. If you want to build a real-time app with, let's say, Web sockets, you're going to need a tool like Dynamo. This has been our motivation for having that at hand.
Chris: The fact that you're still modeling your data and doing whatever the heck you want in there. You could put GraphQL in front of it if you want.
Chris: All that stuff is available to you.
Brian: Yeah. If you go to Learn.Begin.com, we've got a big tutorial on how to do all the cruddy, crudal, Rest-y, GraphQL-y things that you want to do. Yeah.
Chris: Yeah, and people do want to do that, right? I don't know. The processing is one part of the equation. How am I going to run this app and deal with that? Then state is the other part and most of that is backed by something. It's not just client.
Brian: Yeah, it's a pretty big part, too. I think a lot of the JAMstack world is just sort of learning this. I'm really happy, though, that it's moving quickly to a higher level of abstraction. Rather than dealing with the database, people are suggesting, "No. Bring in a headless CMS." I think that's great because, for 99% of the use cases, just use the CMS. Don't try and build your own. There are a whole bunch of use cases for fun things where you do want a short-lived key-value store for doing real-time stuff or doing caching.
We're using Dynamo, weirdly perhaps, to store a cache map of our modules that have been bundled. We bundle all of our modules on the fly.
Brian: We fingerprint them, but I don't want to have to re-render all my markup every time. I'd rather just render that as it goes out. I don't know what the fingerprint name is, so we put those in Dynamo and we pull them out on the request. It's great for that.
Chris: Yeah, that's cool.
Brian: It's kind of like using…. Yeah.
Chris: If I'm like, "I'm just going to use the headless CMS," are you even saying, "And also let that headless CMS keep its data wherever it keeps it," or not necessarily?
Brian: Yeah, I think that's an awesome shift. I guess WordPress was kind of the original headless CMS. It just wasn't really designed for that use case. Yeah, you've probably seen this a billion times, but you're building an app and, eventually, it just hits a point where you need to give nontechnical people access. Markdown files in a repo is not going to fly. Now, are you really going to rebuild WordPress? That's nuts. Don't do that. [Laughter]
Dave: Don't tell me it's nuts. I'm doing it.
Chris: No, there are starting to be CMSs, though, that are like, "Let's put a CMS on top of that." Don't give up on the Markdown files. We'll build a CMS that edits those Markdown files, which is very clever, too. It's funny how that stuff is all shaking out.
Yeah, Contentful is a big player in headless CMS. If you use that, your data is on Contentful. It doesn't sync it back over to your DynamoDB through Begin.
Brian: True. Yes.
Chris: The data is just somewhere else. Then even if you use Sanity, Sanity is a big, cool player in all this where you kind of model out your own data and it has a bunch of cool tools but your data is on their servers.
Brian: Yes. Yes, that's true.
Brian: Yeah, that's the tradeoff people have to make. When you're inviting these third-party services, you've got to be aware of that risk. But I think it's pretty tenable and easy to move, generally, these days.
Chris: It's their primary goal, like that's what they do in the world--
Chris: --is make sure your data is all -- yeah. A lot of the serverless stuff is about that kind of trust, that shifting trust.
Brian: Totally. Yeah, and I think that's kind of the crux of the issue. It's like, who do I trust with this stuff? A lot of people don't trust Amazon and I don't blame them. That's fine. Azure is probably a good option.
Venture-backed startups, yeah, there's some risk there. [Laughter] No doubt. That's why we're totally open core and want people to be able to deploy to their own AWS. We don't want that exception.
Chris: This is all fascinating stuff. We've also mentioned before how different some of this future architecture stuff can feel and yet how connected it truly is. Hosting a little function that gets invoked to return stuff feels like the opposite of JAMstack. JAMstack is like, "Oh, no. You don't have to do that. It's already pre-rendered. It's just a file." You request it from the server and it's just a file.
The file exists because some build process ran and made that file, some build process that asked for some data, found that data, and prebuilt it into some stuff. Similarly, that cloud function is going to go ask for some data and return some stuff.
Chris: As opposed as they feel, kind of, they're really actually pretty similar. It's just like when that build process happens.
Brian: Totally. Netlify gets that. They've got a functions product, too, based on Lambda, of course.
Chris: They do.

00:58:56
Chris: So, import-import-import.
Chris: If there's an error, it tells you what file the error happened in and that stuff?
Brian: Yeah, and so like local work, you feel that waterfall and it's like, "Oh, this sucks." But when we deploy it, the endpoint that you get in staging or production will look to see if we have that module bundled. That endpoint is just a Lambda function with rollup in it. If we don't have that thing bundled, we'll bundle it and then we'll do a redirect to it. Then we'll serve that thing forever.
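The pattern Brian describes can be sketched roughly like this. The `bundle` function and the `/_static/` path are hypothetical stand-ins; the real endpoint runs rollup and, as he mentions later, keeps its cache map in Dynamo:

```javascript
// Sketch of the bundle-on-demand pattern: an endpoint checks a cache for
// a bundled module, bundles on a miss (rollup in the real thing -- faked
// here), then 302-redirects to the bundled asset, which is served forever.
const bundleCache = new Map(); // the real version keeps this map in Dynamo

async function bundle(name) {
  return `/* bundled ${name} */`; // stand-in for actually running rollup
}

async function moduleEndpoint(name) {
  if (!bundleCache.has(name)) {
    bundleCache.set(name, await bundle(name)); // bundle once, on first request
  }
  return {
    statusCode: 302, // the HTTP redirect that does the whole job
    headers: { location: `/_static/${name}.js` } // hypothetical asset path
  };
}
```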
Chris: That's wild.
Brian: It works great.
Chris: You don't pre-bundle it. There's no process that's like, "Then run the bundling for the entire app and deploy that."
Chris: It bundles on the fly as it needs it. That's wild.
Brian: Yeah. You know one of our devs was like, "Why don't we let the computer do this?" and it's a good idea. I think they're right. These computers are for us. We can abdicate that build step to AWS and let them run it. It doesn't have to be the individual developer.
Chris: Does that save you some deployment time because you don't even build at all?
Brian: It totally does. Our lead time to production is under a minute and we've got 400 Lambda functions, so it's an impressive iteration speed. I'm not saying that you need to be to production in under a minute and I'm not saying you're bad if it takes longer for you, but do consider that a faster lead time means more iterations, more iterations mean more chances to get it right, and more chances to get it right mean a higher likelihood you're going home at 5:00. [Laughter]
Brian: Or not having an issue over the weekend or being able to patch something right away. It's just more agile and feels a lot closer to what the ideal state of this would be.
Chris: Yeah, there's always--I don't know--competing feelings there. That's not particularly JAMstack-y. There is no built file that's just sitting there. It requires some computer to calculate it and serve it. But if that's super-fast and that has other interesting advantages too and it's super-cached anyway because it's sitting in front of these fancy CDNs that we have these days, what's the difference? The difference isn't that different.
Brian: Yeah, it's the same difference. I imagine, after we get this blog post out, it'll probably help explain the pattern a little more because it's not a technology. It's just a pattern and it's just using an HTTP 302 redirect to do the job. I imagine people will build implementations on the other….
Chris: It feels like a paradigm shift, though. I get you didn't invent anything to do it but people aren't really doing this, as far as I know.
Brian: Yeah. No, a lot of people are living in their Webpack world. There's nothing wrong with that. But wouldn't it be nice if you could do all of that without any of that and just enjoy the platform for what it is?
Chris: That's really getting you thinking about Lambdas then. If you're using Lambdas for that and you're using it to serve some HTML for your app, you're using it to process some stuff, you're using it for your crudded-- [Laughter]
Chris: --functions, and you're like, "Oh, my god. I built this whole dang thing with little tiny function guys."
Brian: Yeah, well, you know, there are drawbacks, certainly. Now, there are lots of functions. If there are lots of functions, that means there are probably lots of dependencies. Each one of these Lambda functions has its own dependency tree. You've got to manage that yourself. Someone just sent us an example of an app using Lerna and we were like, "Oh, yeah. That'd be really nice to be able to NPM install to all my functions at the same time." Currently, the way we've been doing it with Begin is we'd go into each function and we would manage that ourselves. With 400 functions, that can be tedious.
Brian: On the flip side of that, though, we have a really strong CICD, so we know when things are going out of date. We're staying on top of it. We're really controlling the payload per function, so if I have a dependency in one function, I don't necessarily have it in another function, which means I have less surface area for exposure to security risks and less surface for problems with cold starts. It's a tradeoff. It's a management tradeoff that you have to be aware of.
Chris: Mm-hmm. Deployment wise, your own app has got 400 of them. Chances are, on a commit to master or whatever, you changed three lines of code in one function or whatever. You don't have to think about it, right? You just commit the thing and it just figures it out and be like, "Oh, just this one changed. I'll just deploy just that one."
Brian: Exactly. Exactly, and we also allow for what we call dirty deployment where you can override a single function from your local machine. Generally, we don't use this very much but, occasionally, you'll have a situation where you want to patch something really fast and not wait for the deployment cycle to go through its thing. That whole ten seconds might be too long. Maybe you just are iterating on copy on the homepage and you're looking at staging it. You're just being lazy. You can just knock that one function straight up to Amazon and it'll do the thing. Single function deploys usually complete in sub-second or just like [finger snap] that and it's online, which is unbelievable.
Chris: That's pretty rad.
Brian: It is.

01:04:47
Chris: I feel like serverless is a little bit of a competitor of yours or whatever but, as far as I know, they don't have a hosted thing quite like this. On your local machine, you still write a little script to use their technology to send the thing live, right?
Brian: Yeah, so they went wide early. I feel like this was actually a mistake we made with PhoneGap. I don't think it's a bad thing to do. They obviously have got way more adoption than we do and it's kind of growing into the most popular choice because it's so wide. If you need to get on Azure, you need to get on Alibaba, or you need to do a K8s thing or a Knative thing, serverless has you covered.
Chris: Oh, I see. They went wide with different things they support.
Brian: Yeah, so they support all the cloud players where, Architect, we have focused almost exclusively on being Web developer experience on Amazon.
Brian: We're just trying to make that great because right now that's a big enough problem to solve.
Chris: Well, then you can solve more narrow things. I just mean if you want CICD with serverless, you've got to write it. You've got to go find that elsewhere and then make serverless part of that CICD, right? You don't just get it.
Chris: Like with Begin, you just get it.
Brian: I think they have a product now for doing it.
Chris: Oh, yeah. Okay.
Brian: Yeah, and I probably -- whatever.
Chris: I didn't know. Maybe they do.
Brian: It's totally cool. I'm not -- I think people should evaluate it. Go check out the other ways of doing this stuff. The good news is the fastest thing you can do is check out Begin. [Laughter]
Brian: Then you can go see how long it takes to do the other ones. [Laughter]
Chris: Yeah. There you go. Try Begin first.
Dave: I guess we maybe need to start wrapping up.
Dave: We're hitting time here. Serverless, let's go. Let's do it. I'm in. What's my next step? What do I do for the app I'm working on? What kind of roadblocks am I going to hit next?
Chris: Yeah. Nobody is greenfielding, right? What do you do with your existing app?
Brian: Yeah. My recommendation to people is to start real small and just start eating that elephant one bite at a time. It's a very big world to move to. It's a complete paradigm shift, for sure.
The common one that I hear stories about all the time is somebody brings in a cron Lambda or a scheduled function and they use it for database backups or something like that. They just take down a server that was previously doing some kind of long-running scheduled task and replace it with a function. Then, if that's looking good, find some other small project to peel a little piece off of. Maybe you can throw up a GraphQL endpoint in front of your current REST API or something like that. Just small, side-project-type things, and then see where it grows from there.
If you try and do a big "we're going to rewrite in serverless," that's probably not going to work out because there are just so many differences between the current way that we think of our architecture and how this stuff works.
Dave: No, that's very cool. Yeah, I'm excited and I just set up a GraphQL thing. I should have probably used Begin.
Dave: Oh, well. We'll figure that out after the show here. [Laughter] Brian, thank you so much for coming on the show.
Dave: Really, though, your experience with PhoneGap and stuff, I feel like you anticipated the mobile explosion there.
Dave: It's cool to pick your brain and see that you're excited about serverless. It makes me think I should get on this because Brian knows how to pick them.
Brian: [Laughter] Thanks, Dave. I think I just got lucky the last time.
Dave: For people who aren't following you and giving you money, how can they do that?
Brian: Oh, they should go to Begin.com. They should sign up and build an app. They can tweet at me. I'm @BrianLeroux on Twitter. Let me know how it goes. Let me know what was bad, please. Good is also nice, but I like to know how we can improve.
If signing up for a thing on the Internet seems like a big ask, go check out arc.codes. That's the open core. You can npm init a cloud functions app in about ten seconds locally. That's the development experience working locally, and then you can commit code and it'll deploy to Amazon for you.
If you want to go to the final boss and get that stuff deployed to AWS, it's not too bad either. The community there is big and thriving, and more than happy to help you out if you run into problems getting on Amazon, or even if you just have questions.
Dave: No, that's cool. Yeah, Architect. Yeah. [Laughter] It's like, "Oh, I get this! Now I understand how this all goes together." Very cool. All right, well, thank you again, Brian, for hopping on the show. Thank you, dear listener, for downloading this in your podcatcher of choice. Be sure to star, heart, favorite up. That's how people find out about the show. Follow us on Twitter, @ShopTalkShow, for tons of tweets a month. If you hate your job, head over to ShopTalkShow.com/jobs and get a brand new one because people want to hire people like you. Chris, do you have anything else you'd like to say?