squeeze

A static site generator that can put the toothpaste back in the tube.
git clone https://git.stjo.hn/squeeze

commit 4b5050282f7ba54cf76fc53a77242bb7a30c4516
parent 4e0b847501f943769d593be6d5f6665a1494b8b8
Author: St John Karp <contact@stjo.hn>
Date:   Wed, 22 Apr 2020 07:59:01 -0500

Replace Pandoc with Markdown

Pandoc was too fiddly for me, so I've replaced it with Markdown
for more straightforward Markdown-to-HTML conversion.
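For context, the one-line swap this commit makes to the conversion step can be sketched as follows. This is an illustrative fragment, not part of the commit: the sample input string is made up, and each tool is guarded with `command -v` so the sketch runs even on a machine where one of them is not installed.

```shell
# Before/after sketch of the Markdown-to-HTML step changed by this commit.
# The sample input is hypothetical; each tool is only run if installed.
input='Hello, *world*.'

# Before: Pandoc, with smart punctuation handled in the same pass.
if command -v pandoc >/dev/null 2>&1; then
    printf '%s\n' "$input" | pandoc --ascii --from markdown+smart --to html
fi

# After: the plain Markdown tool; punctuation is left to SmartyPants later.
if command -v markdown >/dev/null 2>&1; then
    printf '%s\n' "$input" | markdown
fi
```

Both commands read Markdown on stdin and write HTML on stdout, so the rest of the pipeline in squeeze.sh is unchanged.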

Diffstat:
M readme.md  |  6 +++---
M squeeze.sh | 11 +++++------
2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/readme.md b/readme.md
@@ -6,7 +6,7 @@ A static site generator that can put the toothpaste back in the tube.
 A few months ago I lost the source files I used to generate my static website. Fortunately there was no irreparable data loss because I still had the generated site up on my server. The problem was now I needed to write a script that would extract all the articles into source files again, and then I'd have to reconfigure the site generator. Then I went, "Oh. This is a Prolog problem." (But then I love Prolog so every problem is a Prolog problem but I don't care. Fight me.)
 
 A Prolog problem is basically a set of rules and the logic can be run in either direction. I figured if I could write a Prolog program that described my HTML template then I could use the same code both to un-generate and re-generate the website.
 
-So the skinny is I wound up writing my own static website generator in Prolog. Well, the main components are in Prolog. I also wrote a bash script to make use of a bunch of common \*nix utilities (find, sed, grep, etc.) and to pipe output to some third-party programs where I needed them (Smartypants, and it's still TBD but possibly Pandoc in the future). Weirdest bit was that I just couldn't find anything decent to generate RSS feeds. I considered dropping the RSS all together, but I've spent enough time haranguing people for not supporting interoperable standards that I didn't want to be a hypocrite. I wound up writing my own RSS generator too, also in Prolog.
+So the skinny is I wound up writing my own static website generator in Prolog. Well, the main components are in Prolog. I also wrote a bash script to make use of a bunch of common \*nix utilities (find, sed, grep, etc.) and to pipe output to some third-party programs where I needed them (Markdown and SmartyPants). Weirdest bit was that I just couldn't find anything decent to generate RSS feeds. I considered dropping the RSS all together, but I've spent enough time haranguing people for not supporting interoperable standards that I didn't want to be a hypocrite. I wound up writing my own RSS generator too, also in Prolog.
 
 It's pretty closely tailored to my specific needs, but it works, and IMHO it works better than my old site generator which injected a bunch of nonsense into my HTML. To make this work for your site, all you need to do is define the template of your website in "html.pl".
@@ -14,8 +14,8 @@ It's pretty closely tailored to my specific needs, but it works, and IMHO it wor
 * Bash. Used to run the script that automates everything else.
 * A Prolog interpreter. Tested with [SWI-Prolog](https://www.swi-prolog.org/), but the syntax aims to be vanilla ISO Prolog and should work with any implementation.
-* [Pandoc](http://pandoc.org/). Used to convert Markdown to HTML.
-* [Smartypants](https://github.com/leohemsted/smartypants.py). Used to smarten the punctuation in the HTML output.
+* [Markdown](https://daringfireball.net/projects/markdown/). Used to convert Markdown to HTML.
+* [SmartyPants](https://daringfireball.net/projects/smartypants/). Used to smarten the punctuation in the HTML output.
 
 ## Assumptions
diff --git a/squeeze.sh b/squeeze.sh
@@ -23,16 +23,16 @@ find "$SITE_PATH/$OUTPUT_DIR" -type f -name "*.html" -print0 |
 # Parse and create all the HTML files.
 find "$SITE_PATH/$SOURCE_DIR" -type f -name "*.md" -print0 |
     while IFS= read -r -d '' file; do
-        echo $file
         NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" | sed 's|.md$|.html|'`
 
         # Only process files whose destination doesn't exist, or which has been recently changed.
         if [ ! -f $NEW_PATH ] || [[ $(find $file -mtime -7) ]]; then
-            # Get everything after the metadata and feed it through Pandoc.
+            echo $file
+            # Get everything after the metadata.
             sed "1,/^$/d" "$file" |
-                # Convert Markdown to HTML and smarten punctuation.
-                pandoc --ascii --from markdown+smart --to html |
+                # Convert Markdown to HTML.
+                markdown |
                 # Recombine with the metadata and hand it to Prolog.
                 (sed "/^$/q" "$file" && cat) |
                 swipl --traditional -q -l parse_entry.pl -g "consult('$SITE_PATH/site.pl'), generate_entry." |
@@ -60,4 +60,4 @@ ARTICLES=`grep -R --include=\*.md "^Date: " "$SITE_PATH/$SOURCE_DIR" |
 BUILD_DATE=`date +"%Y-%m-%d %T"`
 
 # Parse the articles and generate the RSS.
 swipl --traditional -q -l generate_rss.pl -g "consult('$SITE_PATH/site.pl'), generate_rss(\"$BUILD_DATE\", ['$ARTICLES'])." \
-    > "$SITE_PATH/$OUTPUT_DIR/feeds/rss.xml"
-\ No newline at end of file
+    > "$SITE_PATH/$OUTPUT_DIR/feeds/rss.xml"
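The sed calls in squeeze.sh do the metadata bookkeeping around the Markdown conversion: strip the header off before conversion, then staple it back on afterwards. A minimal, self-contained sketch of that split-and-recombine trick is below. The entry file contents are hypothetical, and `cat` stands in for `markdown` so the sketch runs without it installed.

```shell
# Sketch of the metadata split/recombine from squeeze.sh.
# The entry file is a hypothetical example; `cat` stands in for `markdown`.
entry=$(mktemp)
cat > "$entry" <<'EOF'
Title: Example
Date: 2020-04-22

Hello, *world*.
EOF

# Body: delete every line up to and including the first blank separator line.
body=$(sed "1,/^$/d" "$entry")

# Recombine: the subshell prints the header (up to the first blank line)
# straight from the file, then cats the converted body from stdin --
# here passed through `cat` unchanged instead of `markdown`.
recombined=$(sed "1,/^$/d" "$entry" | (sed "/^$/q" "$entry" && cat))

echo "$body"
rm -f "$entry"
```

Because `sed "/^$/q"` reads the file directly while `cat` reads the pipe, the subshell can reassemble header and converted body in one pass, which is what lets squeeze.sh hand a complete entry to the Prolog parser.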