squeeze

A static site generator that can put the toothpaste back in the tube.
git clone https://git.stjo.hn/squeeze

commit a402198e6853d06a6f03b81139df418bf5086b3b
parent 54b6aa5aa07cbdc3c7f5b229f5f978b67449ff96
Author: St John Karp <contact@stjo.hn>
Date:   Fri, 14 Feb 2020 14:06:47 -0600

Rebrand Tastic to Squeeze

Diffstat:
M readme.md    | 12 ++++++------
A squeeze.sh   | 38 ++++++++++++++++++++++++++++++++++++++
D tastic.sh    | 73 -------------------------------------------------------------------------
A unsqueeze.sh | 36 ++++++++++++++++++++++++++++++++++++
4 files changed, 80 insertions(+), 79 deletions(-)

diff --git a/readme.md b/readme.md
@@ -1,10 +1,10 @@
-# Tastic
+# Squeeze
 
-A static site generator in Prolog (mostly).
+A static site generator that can put the toothpaste back in the tube.
 
 ## What is this?
 
-A few months ago I lost the source files I used to generate my static website. Fortunately there was no irreparable data loss because I still had the generated site up on my server. The problem was now I needed to write a script that would extract all the articles into source files again, and then reconfigure the site generator. Then I went, "Oh. This is a Prolog problem." I figured if I could write a Prolog program that described my HTML template then I could use the same code both to un-generate and re-generate the website, because a Prolog program is basically a set of rules and the logic can be run in either direction. (But then I love Prolog so every problem is a Prolog problem but I don't care. Fight me.)
+A few months ago I lost the source files I used to generate my static website. Fortunately there was no irreparable data loss because I still had the generated site up on my server. The problem was now I needed to write a script that would extract all the articles into source files again, and then I'd have to reconfigure the site generator. Then I went, "Oh. This is a Prolog problem." (But then I love Prolog so every problem is a Prolog problem but I don't care. Fight me.) A Prolog program is basically a set of rules and the logic can be run in either direction. I figured if I could write a Prolog program that described my HTML template then I could use the same code both to un-generate and re-generate the website.
 
 So the skinny is I wound up writing my own static website generator in Prolog. Well, the main components are in Prolog. I also wrote a bash script to make use of a bunch of common \*nix utilities (find, sed, grep, etc.) and to pipe output to some third-party programs where I needed them (Smartypants, and it's still TBD but possibly Pandoc in the future). Weirdest bit was that I just couldn't find anything decent to generate RSS feeds. I considered dropping the RSS altogether, but I've spent enough time haranguing people for not supporting interoperable standards that I didn't want to be a hypocrite. I wound up writing my own RSS generator too, also in Prolog.
@@ -42,12 +42,12 @@ site.pl contains DCG definitions of this site's specifics, such as title, author
 
 Generate a static website from Markdown sources:
 
-    ./tastic.sh generate /home/user/website
+    ./squeeze.sh /home/user/website
 
 Generate source files from a static website:
 
-    ./tastic.sh ungenerate /home/user/website
+    ./unsqueeze.sh /home/user/website
 
 ## Still to do
 
-The source Markdown files are currently assumed to be plain HTML with a Markdown header containing metadata. I'm going to need something to convert proper Markdown to HTML, so I'll probably add Pandoc as a dependency to tastic.sh. I expect this will also replace Smartypants for doing smart punctuation.
+The source Markdown files are currently assumed to be plain HTML with a Markdown header containing metadata. I'm going to need something to convert proper Markdown to HTML, so I'll probably add Pandoc as a dependency to squeeze.sh. I expect this will also replace Smartypants for doing smart punctuation.
diff --git a/squeeze.sh b/squeeze.sh
@@ -0,0 +1,37 @@
+#!/bin/bash
+
+OUTPUT_DIR=output
+SOURCE_DIR=source
+
+SITE_PATH=$1
+
+# Create the directory structure.
+rm -rf "$SITE_PATH"/"$OUTPUT_DIR"/*
+find "$SITE_PATH"/"$SOURCE_DIR" -type d |
+    sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" |
+    xargs -0 -d '\n' mkdir -p --
+
+# Parse and create all the HTML files.
+find "$SITE_PATH"/"$SOURCE_DIR" -type f -name "*.md" -print0 |
+    while IFS= read -r -d '' file; do
+        echo $file
+        NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" | sed 's|.md$|.html|'`
+        cat "$file" |
+            swipl --traditional -q -l parse_entry.pl -g "consult('$SITE_PATH/site.pl'), generate_entry." |
+            smartypants \
+            > "$NEW_PATH"
+    done
+
+# Copy anything else directly.
+find "$SITE_PATH"/"$SOURCE_DIR" -type f -not -name "*.md" -print0 |
+    while IFS= read -r -d '' file; do
+        NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|"`
+        cp "$file" "$NEW_PATH"
+    done
+
+# Generate the RSS feed.
+mkdir -p "$SITE_PATH"/"$OUTPUT_DIR"/feeds
+ARTICLES=`grep -Rl --include=\*.md "^Date: " "$SITE_PATH"/"$SOURCE_DIR" | paste -sd ',' - | sed "s|,|','|g"`
+BUILD_DATE=`date +"%Y-%m-%d %T"`
+swipl --traditional -q -l generate_rss.pl -g "consult('$SITE_PATH/site.pl'), generate_rss(\"$BUILD_DATE\", ['$ARTICLES'])." \
+    > "$SITE_PATH"/"$OUTPUT_DIR"/feeds/rss.xml
\ No newline at end of file
diff --git a/tastic.sh b/tastic.sh
@@ -1,73 +0,0 @@
-#!/bin/bash
-
-OUTPUT_DIR=output
-SOURCE_DIR=source
-
-SITE_PATH=$2
-
-if [ "$1" == "ungenerate" ]
-then
-    # Create the directory structure.
-    rm -rf "$SITE_PATH"/"$SOURCE_DIR"/*
-    find "$SITE_PATH"/"$OUTPUT_DIR" -type d |
-        sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|" |
-        xargs -0 -d '\n' mkdir -p --
-
-    # Parse and create all the markdown files.
-    find "$SITE_PATH"/"$OUTPUT_DIR" -type f -name "*.html" -print0 |
-        while IFS= read -r -d '' file; do
-            NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|" | sed 's|.html$|.md|'`
-            cat "$file" |
-                swipl --traditional -q -l parse_entry.pl -g "consult('$SITE_PATH/site.pl'), parse_entry." |
-                # Unsmarten the punctuation.
-                sed "s|&nbsp;| |g" |
-                sed "s|&#8216;|'|g" |
-                sed "s|&#8217;|'|g" |
-                sed "s|&#8220;|\"|g" |
-                sed "s|&#8221;|\"|g" \
-                > "$NEW_PATH"
-        done
-
-    # Copy anything else directly.
-    # Excludes the RSS folder, which we create ourselves upon generation.
-    find "$SITE_PATH"/"$OUTPUT_DIR" -path "$SITE_PATH"/"$OUTPUT_DIR"/feeds -prune -o -type f -not -name "*.html" -print0 |
-        while IFS= read -r -d '' file; do
-            NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|"`
-            cp "$file" "$NEW_PATH"
-        done
-elif [ "$1" == "generate" ]
-then
-    # Create the directory structure.
-    rm -rf "$SITE_PATH"/"$OUTPUT_DIR"/*
-    find "$SITE_PATH"/"$SOURCE_DIR" -type d |
-        sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" |
-        xargs -0 -d '\n' mkdir -p --
-
-    # Parse and create all the HTML files.
-    find "$SITE_PATH"/"$SOURCE_DIR" -type f -name "*.md" -print0 |
-        while IFS= read -r -d '' file; do
-            echo $file
-            NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|" | sed 's|.md$|.html|'`
-            cat "$file" |
-                swipl --traditional -q -l parse_entry.pl -g "consult('$SITE_PATH/site.pl'), generate_entry." |
-                smartypants \
-                > "$NEW_PATH"
-        done
-
-    # Copy anything else directly.
-    find "$SITE_PATH"/"$SOURCE_DIR" -type f -not -name "*.md" -print0 |
-        while IFS= read -r -d '' file; do
-            NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$SOURCE_DIR|$SITE_PATH/$OUTPUT_DIR|"`
-            cp "$file" "$NEW_PATH"
-        done
-
-    # Generate the RSS feed.
-    mkdir -p "$SITE_PATH"/"$OUTPUT_DIR"/feeds
-    ARTICLES=`grep -Rl --include=\*.md "^Date: " "$SITE_PATH"/"$SOURCE_DIR" | paste -sd ',' - | sed "s|,|','|g"`
-    BUILD_DATE=`date +"%Y-%m-%d %T"`
-    swipl --traditional -q -l generate_rss.pl -g "consult('$SITE_PATH/site.pl'), generate_rss(\"$BUILD_DATE\", ['$ARTICLES'])." \
-        > "$SITE_PATH"/"$OUTPUT_DIR"/feeds/rss.xml
-else
-    echo "Invalid argument."
-    exit 1
-fi
diff --git a/unsqueeze.sh b/unsqueeze.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+
+OUTPUT_DIR=output
+SOURCE_DIR=source
+
+SITE_PATH=$1
+
+# Create the directory structure.
+rm -rf "$SITE_PATH"/"$SOURCE_DIR"/*
+find "$SITE_PATH"/"$OUTPUT_DIR" -type d |
+    sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|" |
+    xargs -0 -d '\n' mkdir -p --
+
+# Parse and create all the markdown files.
+find "$SITE_PATH"/"$OUTPUT_DIR" -type f -name "*.html" -print0 |
+    while IFS= read -r -d '' file; do
+        NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|" | sed 's|.html$|.md|'`
+        cat "$file" |
+            swipl --traditional -q -l parse_entry.pl -g "consult('$SITE_PATH/site.pl'), parse_entry." |
+            # Unsmarten the punctuation.
+            sed "s|&nbsp;| |g" |
+            sed "s|&#8216;|'|g" |
+            sed "s|&#8217;|'|g" |
+            sed "s|&#8220;|\"|g" |
+            sed "s|&#8221;|\"|g" \
+            > "$NEW_PATH"
+    done
+
+# Copy anything else directly.
+# Excludes the RSS folder, which we create ourselves upon generation.
+find "$SITE_PATH"/"$OUTPUT_DIR" -path "$SITE_PATH"/"$OUTPUT_DIR"/feeds -prune -o -type f -not -name "*.html" -print0 |
+    while IFS= read -r -d '' file; do
+        NEW_PATH=`echo "$file" | sed "s|^$SITE_PATH/$OUTPUT_DIR|$SITE_PATH/$SOURCE_DIR|"`
+        cp "$file" "$NEW_PATH"
+    done
\ No newline at end of file
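
The readme's central claim, that one set of rules can run in either direction, is what lets squeeze.sh and unsqueeze.sh load the same parse_entry.pl and simply run different goals against it (generate_entry vs. parse_entry). A minimal sketch of the idea, not the actual site.pl or parse_entry.pl code (title_html//1 and the sample title are invented for illustration):

    :- use_module(library(dcg/basics)).

    % title_html(Title) relates an entry title (a code list under
    % swipl --traditional) to its HTML rendering. Nothing in the
    % rule commits to one direction.
    title_html(Title) -->
        "<h1>", string(Title), "</h1>", eos.

    % Squeeze direction: title in, HTML out.
    % ?- phrase(title_html("My Post"), Html), format("~s~n", [Html]).
    % <h1>My Post</h1>

    % Unsqueeze direction: HTML in, title out.
    % ?- phrase(title_html(Title), "<h1>My Post</h1>"), format("~s~n", [Title]).
    % My Post

Both queries run under swipl --traditional, matching the flag the scripts pass; the real grammars presumably do the same thing at the scale of a full page template.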
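
The ARTICLES line in squeeze.sh is the densest piece of shell-to-Prolog glue here: grep -Rl lists every source file carrying a "Date: " header, paste -sd ',' joins those paths into one comma-separated line, and the final sed turns each bare comma into ',' (quote, comma, quote), so that once the shell splices the result between the surrounding ['...'], the -g goal contains a well-formed Prolog list of quoted atoms. A hypothetical example of the goal swipl would receive, assuming two posts under /home/user/website/source (file names and date invented for illustration):

    % Goal text after shell expansion of $BUILD_DATE and $ARTICLES:
    consult('/home/user/website/site.pl'),
    generate_rss("2020-02-14 14:06:47",
                 ['/home/user/website/source/post-one.md',
                  '/home/user/website/source/post-two.md']).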