References / TL;DR
There’s no need to read a whole article if you don’t want to.
First, I purchased my domain and got basic DNS things set up. If Google is your jam, they made it stupid easy.
Then I found a nice article on setting up a static site that simply operates over HTTP and serves files out of a GCS bucket. However, the TLD I chose, .dev, is HTTPS-only, and serving HTTPS in GCP (apparently) requires a load balancer.
Later I found they had this other very nice article on setting up a static site behind HTTPS.
Then I found gsutil, which I decided I would use for deploys without having to mess with GCS’s bucket UI (because ew). So I looked at the gsutil rsync documentation to deploy static output from my various projects. HOWEVER:
The one note I have on gsutil is that API requests for bucket information cost money (charged per thousand requests). rsync doesn’t make any attempt to minimize the number of operations, so even when 90% of your files are unchanged, it still checks every one of them, and those requests can add up to real money.
That’s it. That’s the whole thing. Hope you learned something.
Continue for a more adventure-oriented retelling:
Hey, blakerobbins.dev is available. Should I start a site?
Like most personal projects, this whole thing started on a whim. “Oh, look, Google has a simple domain registrar that sells .dev domains. I wonder if my name is available? Oh, it is, neat!”
And now, here I am, hours and hours deep into crap that I previously had said I’d likely never do outside of work: writing code, futzing with configs of various kinds, finding an IDE that provides support for a variety of projects, getting frustrated when something that’s just supposed to work isn’t working. And writing articles for a blog. Ugh. What a stupid idea.
Anyway.
The last time I endeavored to build and run a site I was using GoDaddy as my registrar and I hated it. The whole process felt very predatory. Every step of the way they were trying to sell me something I didn’t care about and they made it hard for me to just host static content, so when my registration lapsed I was over it.
Fast forward to when I looked up blakerobbins.dev with Google. I guessed it’d be similar to GoDaddy, and I was just goofing off. But it was super simple. And cheap. And they weren’t trying to sell me anything, they were happy to take my money and say “good luck.” I’m sure there were registrars like that when I used GoDaddy but being in the right place this time around certainly spurred me on.
I started looking into how to host a static site with GCP and found a very thorough article to get me started. I was impressed!
Google makes it pretty damn easy if you just want something simple.
Google warned me when I bought my domain that .dev is an HTTPS-only TLD, because Google’s newer TLDs are on the HSTS preload list. I looked up how to set up HTTPS for my domain, it seemed straightforward, so I forged ahead. Then I followed the Google article’s instructions and discovered that you can’t actually serve static content over HTTPS straight out of a bucket; you need additional infrastructure.
Damn.
Oh, well, learning new things. Turns out they have another article, also thorough, that walks through setting up a static site behind HTTPS. You have to reserve a static IP and set up a load balancer just to provision your cert and host your site. So I’m taking on some extra cost here, but nothing too crazy, so moving on.
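For reference, the moving parts look roughly like this. The resource names, bucket, and domain below are all placeholders, and the gcloud flags are a sketch from memory, not a verified recipe — follow the article for the real steps:

```shell
# Placeholders throughout: my-site-*, my-bucket, example.dev
gcloud compute addresses create my-site-ip --global
gcloud compute backend-buckets create my-site-backend --gcs-bucket-name=my-bucket
gcloud compute url-maps create my-site-map --default-backend-bucket=my-site-backend
gcloud compute ssl-certificates create my-site-cert --domains=example.dev --global
gcloud compute target-https-proxies create my-site-proxy \
  --url-map=my-site-map --ssl-certificates=my-site-cert
gcloud compute forwarding-rules create my-site-https \
  --global --address=my-site-ip --target-https-proxy=my-site-proxy --ports=443
```

The static IP is what your DNS A record points at; everything else is plumbing between that IP and the bucket.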
So now to update files, I can work on the various projects and just upload them to my site’s bucket. I know that rsync is the usual approach to deployments, but honestly I’ve always just blindly used it. I knew it synchronized directory states but never thought too much about how and why. But then I’m trying to “overwrite” my site by just uploading the build output…oh. There are extra files left over from the previous hashed build. I’m not going to turn off hashing…so…hm.
One of the articles I’d stumbled across while getting this whole thing going had mentioned using gsutil rsync. I initially dismissed it (well, filed it away) because of my as-little-effort-and-as-few-tools-as-possible approach. But it started looking attractive. So off we go.
Installation was not my favorite (AWS CLI isn’t any better), but in execution gsutil is very cool.
I can just do a deploy from the command line after my build. That’s ridiculously convenient. And I can put the command in each project’s README for easy copying. When I want to do a deploy, it can be done almost effortlessly. Switch to terminal, run:
gsutil -m rsync -d -r ./<output-dir> gs://<bucket>
It makes some Google Cloud Storage API requests, diffs local against remote, syncs the directories, and I’m done. That’s awesome!
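For what it’s worth, here’s that command with the flags spelled out (output dir and bucket are placeholders, same as above):

```shell
gsutil -m rsync -d -r ./<output-dir> gs://<bucket>
#      -m  run operations in parallel
#      -d  delete remote files that no longer exist locally
#          (this is what cleans up the stale hashed build output)
#      -r  recurse into directories
```

The `-d` flag is the important one for hashed builds — without it, every deploy leaves the previous build’s files sitting in the bucket.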
But GCS API requests cost money. Because of course they do.
Fortunately, only by the thousands of requests. And even then, for standard storage you’re still only looking at (at the time of this writing) $0.01/1000 operations. Angular projects output a really small number of static files, so that’s been pretty straightforward for those projects. But this blog? A non-negligible number of files are updated and added whenever there’s a new post. Eventually it might be enough files for me to think about optimizing that, but not for now.
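A back-of-the-envelope check, using the $0.01 per 1,000 operations figure above — the file and deploy counts are made-up example numbers, not my actual traffic:

```shell
# Made-up numbers: 200 files checked per deploy, 30 deploys a month,
# at the $0.01 per 1,000 operations rate mentioned above.
awk 'BEGIN {
  ops  = 200 * 30               # requests per month
  cost = ops / 1000 * 0.01      # dollars per month
  printf "%d ops -> $%.2f/month\n", ops, cost
}'
# prints: 6000 ops -> $0.06/month
```

So pennies a month at this scale, which is why it stays on the “eventually” list.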
Hexo also supports one-command deploys, so I could write a little package to configure and do the deploy. That’d be cool. Plus, I was looking around for solutions and (naturally) found a package that already attempts to solve this: backup-folder. Perhaps I’ll just fork that, or use it outright. We’ll see.
