We keep wiki server backups on an older but stable and secure Linux machine, as rolling tgz archives of the .wiki directory from our DigitalOcean server. We've retrieved all available copies of a lost page from these backups as a directory full of export files.
Extract the page as json from each available archive.
cd store
for i in *.tgz; do
  echo $i
  tar xOzf $i .wiki/makecommoningwork.fed.wiki/pages/welcome-visitors >../getum.d/$i.json
done
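Not every rolling archive will necessarily contain the page. A quick check, sketched here under the assumption of the same store/ layout and page path, lists which archives hold a copy before extracting:

cd store
for i in *.tgz; do
  # list each archive's members and report the archives that contain the page
  tar tzf $i | grep -q 'pages/welcome-visitors' && echo $i
done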
Pull these down to the laptop.
scp -r c2.com:wiki/asia/getum.d .
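Archives made before the page existed will have produced empty extracts. A quick sanity check, a sketch assuming jq is available on the laptop, flags any file that isn't valid json before converting:

for i in getum.d/*.json; do
  # jq -e exits non-zero when the file is empty or not valid json
  jq -e . $i >/dev/null || echo "bad: $i"
done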
Convert these to export files.
cd getum.d
for i in *.tgz.json; do
  jq '{key:"welcome-visitors",value:.}' $i | \
    jq -s 'from_entries' >../gotem.d/$i
done
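Each resulting file should now be a one-page export: a json object keyed by the slug, with the page as its value. One way to confirm the snapshots, a sketch assuming the pages carry the usual title, story, and journal fields:

for i in ../gotem.d/*.json; do
  # print each export file alongside the page title it recovered
  echo $i $(jq -r '."welcome-visitors".title' $i)
done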
See Backup for constructing an export of a whole site.