The subject line for this post is in jest. Truthfully, I haven’t used it long enough to know whether it’s a good or bad solution. I first learned about Hugo when I saw that 1Password had converted their blog to Hugo and Lambda. I had been experimenting with AWS Lambda at the same time, so I thought it would make a good experiment. A blog post explaining how to use Lambda and Hugo was readily available, but it used a very old Hugo version and I couldn’t get it working. It could have been my fault, no doubt. But in the engineering spirit of ignoring yet being inspired by previous work, I set off to write my own Lambda function. In the end, it worked out to be similar to the code from ryansb on Github.
High-Level Workflow
My function includes not only all the code to operate, but also AWS’s CLI utility and the Hugo executable itself. I wish AWS packaged the CLI utility with Lambda by default, but the additional size may be a problem. Eventually I will split the utility, and probably Hugo, into its own Lambda layer so I can more easily update my code.
I have two S3 buckets for my Lambda function: an input bucket and an output bucket. The input bucket is where I put all the source files Hugo interprets. When a file is created, it triggers the function. If the file is in the static directory, Lambda uses the aws utility to simply copy the file to the production (output) bucket. If the file is outside the static directory, the function downloads the entire input bucket with the aws utility, hugo is executed, and the production web page is created in a new directory. All content is then uploaded to the production (output) S3 bucket and should be immediately available for viewing.
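The trigger logic above can be sketched in a few lines of Python. This is my own rough reconstruction, not the actual function: the bucket names, paths, and helper names are all hypothetical, and the real code also has to package and locate the aws and hugo binaries inside the Lambda environment.

```python
# Sketch of the copy-vs-rebuild decision described above.
# All names (buckets, paths, handler) are hypothetical placeholders.
import subprocess

INPUT_BUCKET = "blog-input"    # hypothetical input bucket
OUTPUT_BUCKET = "blog-output"  # hypothetical output (production) bucket

def action_for_key(key):
    """Decide what to do for an uploaded object key."""
    if key.startswith("static/"):
        return "copy"     # single static file: copy straight to output
    return "rebuild"      # anything else: full Hugo rebuild

def handle_event(event):
    """Entry point for the S3 trigger (sketch, not production code)."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        if action_for_key(key) == "copy":
            # Static file: copy it directly between buckets.
            subprocess.run(["aws", "s3", "cp",
                            f"s3://{INPUT_BUCKET}/{key}",
                            f"s3://{OUTPUT_BUCKET}/{key}"], check=True)
        else:
            # Anything else: download all sources, run hugo, upload the site.
            subprocess.run(["aws", "s3", "sync",
                            f"s3://{INPUT_BUCKET}", "/tmp/src"], check=True)
            subprocess.run(["hugo", "--source", "/tmp/src",
                            "--destination", "/tmp/public"], check=True)
            subprocess.run(["aws", "s3", "sync",
                            "/tmp/public", f"s3://{OUTPUT_BUCKET}"], check=True)
```

The key design point is that a static file never needs a rebuild, so it takes the cheap `aws s3 cp` path; everything else forces the full download-build-upload cycle.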
Lambda does provide a nice platform for this functionality. However, it has a few issues that I had to work around.
S3 triggers the Lambda function every single time a file is uploaded. Normally this wouldn’t be a problem, except during the first upload. If 200 files are uploaded, the function is triggered 200 times, and every invocation except the one for the last uploaded file forces Hugo to process incomplete data. AWS’s free tier allows for 1,000,000 Lambda calls a month, so I’m less concerned about cost than about proper architecture of the workflow. Uploading individual files, which is what will happen almost every time, won’t cause this problem, so I don’t expect I’ll investigate fixing it.
Hugo doesn’t delete posts on its own. The developers don’t want to be responsible for accidentally deleting data, so they force the user to delete content prior to executing hugo. I’ve had problems, including at the time of this writing, where the source files are deleted but are still shown on certain pages. Lambda is stateless and there’s no way to log in to do testing, so tracking these bugs down is more challenging than if I had a shell and could run live commands. When I do try to fix this problem, I expect Lambda’s built-in test methods will prove adequate.
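One way I could attack the stale-post problem is to diff the freshly built site against the output bucket after each rebuild and delete anything the new build no longer produces. This is only a sketch of that idea, not code I’m running; the function and file names are made up. (The aws CLI’s `aws s3 sync --delete` flag does essentially the same thing in one command.)

```python
# Sketch: find output-bucket keys that a fresh Hugo build no longer produces,
# so they can be deleted. Names and data here are hypothetical.

def stale_keys(built_files, bucket_keys):
    """Return bucket keys that are absent from the new build output."""
    return sorted(set(bucket_keys) - set(built_files))

# Example: a post was deleted from the source, so the new build
# no longer emits its page, but the old page is still in the bucket.
built = ["index.html", "posts/a/index.html"]
in_bucket = ["index.html", "posts/a/index.html", "posts/deleted/index.html"]
to_delete = stale_keys(built, in_bucket)  # -> ["posts/deleted/index.html"]
```

Each key returned would then be removed from the output bucket, keeping the production site in lockstep with what Hugo actually generated.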
I will eventually release the code for this Lambda function. I want to make it more resilient and allow for development and production environments. Even if Hugo on Lambda doesn’t work out, it was a great experiment and an interesting way to learn Lambda.