The third iteration of Doug Bolden's various thoughts and musings.

Category: Blog Stuff

The Blogger Canonical (?m=1) Issue Revisited

If you want to just see an explanation of the issue, you can skip to THE TECHNICAL ISSUE, below. First, I get to rant a bit and give some context.

When I first returned to blogging after eight years, it was not with a traditional blog: it was with The Doug Alone PROLOGUE. It was a place for me to post notes and recaps about the solo rpg stuff I was doing.1 Only there was a problem. I actually mentioned it on my final post on that blog. Google more or less refused to index it.

It looks like it did at least briefly index a single page and then wiped it later.

Even though the blog was primarily meant as a play journal, there were elements that I wanted people to find. Only there was a primary error that kept showing up by way of explanation:

I had a vague notion of what that meant but the more I looked into it, the more I found posts by people insisting it was not an error. It was intended. It’s not up to Google to SEO for you. Maybe your blog isn’t worthy. Here’s a reddit thread with most of those things said from just a few months ago.

However, after Noism Games posted a post noting their Blogger/Blogspot traffic had just plummeted, I felt curious and looked again.

Doug Is Right: The Blogger Canonical Edition

Here’s the tl;dr: I am right. The SEO experts are wrong on this one. Neener neener.

I knew I was roughly correct. I’ve worked with a lot of different web platforms over the years and am well aware that Google is a fickle beast when it comes to promoting something (say, a one-off post about carpet beetles) over things that are more core to your blog identity (such as old posts about a variety of horror movies). However, months of Google flat out ignoring a blog with unique content was not consistent with mere fickleness. At least a few pages would have passed The Algorithm.

Those more in the know about the technical issues can probably guess the cause. I had an idea of what it was, just not why Blogger/Blogspot was being hit by it. Had I cared more, I would probably have put it together earlier. Would I have still moved blogs? Oh yes. I like having my own space to play.

The Technical Issue

What’s the issue?

Webpages can have canonical tags. They are not required. They just help you tell Google (and other search engine type things) which page is the one you want indexed. If you are on a platform where your content might bounce from page to page, you can use the tag to say that this is the correct page.

EXAMPLE: You have a cooking blog. You have a set of pages with different recipes and other pages that include snippets of those recipes and you don’t want Google to send folks to the pages with only the snippets (such as a category page or a front page that shows the most recent). You prefer your recipes to be front and center. You put the canonical tag on those pages.

In the specific case of Blogger/Blogspot, there’s a bit of template code that generates the head content, including a canonical tag, for each new page:

<b:include data='blog' name='all-head-content'/>

One aspect of this is to drop a simple line that gives the URL and says “this one, Google” in the <HEAD>:

<link href='https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html' rel='canonical'/>

And that should be all well and good except for a technical glitch on Google’s side. Google does not scan the blog the way a person on a home computer will. It scans largely as a mobile device. And Blogger/Blogspot, a GOOGLE PRODUCT, tries to be helpful by serving up a ?m=1 version of the page. Old themes did not have a native mobile version. Newer ones do, but the artifact from Ye Olde Times is still there.

Which means that Google gets a link like this for the page linked above:

https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html?m=1

You can likely see where this is going. If you click on it, it is identical to the previous page, except the rel='canonical' is not pointing to that link: it points to the .html version, not the .html?m=1 version.

This means that for every Blogger/Blogspot page scanned, Google sees a site constantly serving up alternate pages, and because the ?m=1 keeps persisting, it constantly fails to match them to their canonical pages.
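To make the mismatch concrete, here is a small Python sketch (mine, not anything Google actually runs; treating m as an ignorable mobile flag is my assumption) showing that the crawled URL and the canonical only agree once ?m=1 is normalized away:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# "m" is Blogger's mobile flag; treating it as ignorable is my
# assumption for this sketch, not anything Google documents.
MOBILE_PARAMS = {"m"}

def strip_mobile_params(url: str) -> str:
    """Drop query parameters that only toggle mobile rendering, leaving
    the URL that should count as the same document."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in MOBILE_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

crawled = "https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html?m=1"
canonical = "https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html"

print(crawled == canonical)                       # False: the raw URLs differ
print(strip_mobile_params(crawled) == canonical)  # True: normalized, they agree
```

A naive string comparison says the two pages disagree; a comparison that knows ?m=1 is cosmetic says they are the same document.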

What’s the Fix?

Unfortunately, the two primary fixes are both on Google engineers and since this has been brewing for a few years, I have no idea if they will fix it. Hopefully so, because Blogger/Blogspot is a nice all-in-one blog for people who don’t want to fiddle too hard and just want to get their content out there.

FIX #1 would be for Google to not treat ?x=y variants as wholly different pages, at least in the case of mobile pages where the canonical link has identical content. I appreciate there are lots of cases where it is different content, but there should be a way to detect when it is not.

FIX #2 would be for Blogger/Blogspot to stop appending the ?m=1 to mobile pages. There are better ways to handle that. It feels like an artifact of the 2010-era internet, back when you had completely separate mobile sites. Ah, I remember those days unfondly.

What can we do as users of the product? I’m not sure. If you look, there are suggestions for Javascript workarounds. I am attempting to use the script at this page. Go gently into that night and double check before you use it, yourself.

I also did try updating my robots.txt file to tell Google to ignore ?m=1 pages. Will it work? I don’t know. I’m not precisely holding my breath. If I remember to check in a couple of months and it has worked, I’ll let you know.

User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Disallow: /share-widget
Disallow: /*?m=1
Allow: /
Sitemap: https://dougalone.blogspot.com/sitemap.xml

Obviously, if you want to use that you want to change the final line to be whatever your blog’s address is. I’ve seen variations of that across multiple posts so I don’t know where it originated. Apparently older Blogger blogs had a baked in robots.txt but mine didn’t. I had to add it whole cloth.
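One caveat if you want to test that file yourself: Google’s robots.txt matching supports * and $ wildcards, which Python’s standard urllib.robotparser does not, so a quick hand-rolled check is safer. This is my own reading of the wildcard rules, so treat it as a sketch and double check against real crawler behavior:

```python
import re

def rule_to_regex(rule: str) -> "re.Pattern":
    """Translate a Google-style robots.txt path rule into an anchored
    regex: '*' matches any run of characters, a trailing '$' pins the
    rule to the end of the URL, and everything else is literal."""
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    pattern = ".*".join(re.escape(part) for part in body.split("*"))
    return re.compile("^" + pattern + ("$" if anchored else ""))

disallow = rule_to_regex("/*?m=1")
print(bool(disallow.match("/2025/09/beginning-to-migrate-some-content-to.html?m=1")))  # True
print(bool(disallow.match("/2025/09/beginning-to-migrate-some-content-to.html")))      # False
```

So the Disallow: /*?m=1 line should, at least on paper, catch every mobile variant while leaving the canonical .html pages alone.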

Let’s see what the outcome of this double approach might be.

NOTE: It is possible that Google will eventually scan it via a non-mobile-first scanner and make all this a non-issue. It’s just that 16 months seems like a fair amount of time to have run that test.

  1. There is a paradox of solo play where a lot of folks, myself included, have a strong urge to share it with someone. The initial idea was not a blog. I thought about streaming some stuff on Youtube. Since I ended up figuring out a lot of mistakes, tweaking a lot of notions, and so forth: I am glad I went for a format that did not involve me just sitting there confused and sweaty on camera. ↩︎

Blogging Down Memory Lane, Part 1: Doug’s RPG Page >>> “wYrmhole Games”

When I was typing up the post about “reclaiming” Dickens of a Blog, I mentioned making a “Part 2” that would be a trip down memory lane. As I started gathering screenshots I realized that such a trip would involve three decades of websites and some of the early days would be awfully hard to document. I was still interested.

Then I started doing just that and realized that the post that resulted would be very long if I included any real detail.

I have now decided to keep up with the Blogging Down Memory Lane idea, but I’ll break it up into three or four posts, potentially more if there is good reason.

Let’s start:

Doug’s RPG Page

That is a rough mock-up since as far as I know, no actual record of the site remains.

My second ever website. Maybe my third. The early ones barely counted since they were largely just me typing some words and being amazed that hypertext worked.

Doug’s RPG Page represented the first time I was actively interested in having a complete website. It would have started sometime in 1997, likely near the summer of that year.

To put this in perspective as to how long ago this was: if you do a search along the lines of “Top 10 Websites,” the only two that I can think of that would have existed at this time were Yahoo and Amazon. Google came out a year later. Most other big websites that are the cornerstones of the net nowadays came a fair bit later, still. Heck, it predated Goatse. We are talking about the days when HTML 3 was replacing HTML 2.

I used frames for goodness sake.

That was one of the lesser sins. There was a MIDI player that played a selection of music for visitors. There was a blinking marquee that scrolled and I cannot recall why I put it there (but I’m pretty sure it was red). Possibly irony. Probably something like “big updates” or some such.

If HTML tags like <frameset>, <marquee>, and <blink> mean nothing to you: you are lucky. It was a terrible time. Everything smelled of cigarette smoke and we had terrible websites.

Why I Made Doug’s RPG Page

Quite frankly, with apologies to Mallory, because it was there. The “it” in that quote would mean “the internet” as well as “the ability to make your own websites”.

In 1997, my friend Jason and I offered to do something somewhat silly in retrospect: make a website for Jefferson Davis Community College. Not only was it frankly ridiculous that a couple of eighteen-/nineteen-year-olds would take it upon themselves to build a website for an entire college, but neither Jason nor I had any real web development experience. We knew the basics. That was it. No servers. No development team. Just an idea.

It ended up being a beautiful failure but by the end, we had worked out enough concepts that the college gave us thanks and then tasked someone much more professional to actually do it.

While doing that, the interest in making my own website grew and grew. Then I found out about Geocities. I’ll let you read the Wikipedia article about it but in the mid-90s, Geocities was a massive website hosting complex that allowed users to make a free website inside of “neighborhoods.” It was one of the highlights of the earlier web.

I made an account and plunked myself down in geocities > area51 > dimension > 9180 (it is strange how quickly I was able to recall that). Then I started posting stuff.

Remember that back in those days websites tended to have a purpose, so if you had things you wanted to say to the world, it was pretty common to find some excuse to express them. Jim’s Truck Page. Donna’s Potato Recipes Page. Doug’s RPG Page. There were plenty that would show up as Justin’s Random Stuff or whatnot, but it felt odd to a lot of us back then to just post a blog-type page without something happening to make it feel justified.

And yes, blogging was already taking shape by this time.

What I Recall About It

Besides some of the stuff I posted above, I do not recall a lot. I do not even remember if it was organized around specific RPGs or by genre. There are some memories that stand out, though:

  • The biggest element that came out of that time period was Ghostlight. A quick, odd RPG I made about ghosts who live in an echo of the real world where their interaction with it is based on their emotions. Over time, emotions increase and wane.
  • I made a post showing a potential way to play games without a GM. The gist was to generate content on index cards as possible encounters and then to pull a few at a time and mash them up to tell short scenes and stories. While it was designed for GM-less play, it ended up matching some elements of Solo Play.
  • I had some non-RPG elements that included discussions of music (I do recall writing a rant about liking techno music and being irritated by people who kept asking me why I liked music with no lyrics) but do not recall how many sections that stuff had.

Over time, the non-RPG elements and elements about my daily life and stuff started taking over. Which is what effectively brought the project to a close. I was basically getting into blogging while still being in the mindset that websites needed purpose.

Other scant details drift to mind. I know I would toss in random coarse language and people would call me out for it. I think there was an old school chat room built into one of the pages?

Besides Ghostlight, there was at least one other RPG I posted to it, A.S.I.A. RPG. I have no archives of this but I’m guessing/think it stood for something like A Simple Interactive Adventure Roleplaying Game. Around this time and for a few years after, I was working on concepts like using short phrases and a system of wide | normal | narrow rankings so it might have used that. I think I wrote some stuff for FUDGE. I feel like I had some Call of Cthulhu / Beyond the Supernatural elements.

That is mostly guesswork.

I made friends from it. I talked to other game designers. I met some people into the techno/edm scene who liked my discussions. It was mind-blowing for a person from the backwoods of Lower Alabama to suddenly be talking to people not only around the country but from other countries. It was nice.

My online username at the time was “dreamwyrm” and because of that I ended up getting a cameo of sorts in the webcomic Gaming Guardians and was friends with Graveyard Greg who wrote it. The Web Troll (artist) even made art that had my “Dream Wyrm” persona turned into a Buttonman.

For a country bumpkin to create a somewhat cringey 90s email address and a cringey 90s website, that was really cool. Heck, I still find that to be pretty damned cool.

The Death of Doug’s RPG Page and the start of wYrmhole Games

That above image is around 90% of what I know about the next part of this chapter. I grew unnecessarily frustrated with how much of the page had become a proto-blog and so took most of it offline (or at least delinked most of it). I wrote a short, rambling paragraph about how I was going to embark on a new design.

Just look at the stolen Michael Whelan artwork. Lord.

Also, while I had already started using “wyrmis” as an online name, it seems like “The Wyrm” and “WYRM” were frequent. Somewhere in here is where I came up with Wyrmis X. Simryw as an online name and started sometimes capitalizing only the Y: wYrmis. There was some joke about my name having a capital WHY. I’m a damned fool.

From that above screenshot, I am reminded of two elements that I had completely forgotten. First, I had had some discussion of videogames on the original site and wanted to expand that. We were entering the time period of the Playstation RPG explosion but were still close enough to the Super Nintendo days to be reaping those benefits. For some dumb reason, I wanted to call this “Electronic Portals.” I don’t know if I ever did.

Second, I forgot that I had a period where I was pretty vocal about Christian roleplaying. A response of sorts to the Satanic Panic and its continued presence in the mid- to late-90s in Lower Alabama. I don’t think anything really came of it. It has been a long time since I have identified as Christian.

The “Silver” there was another friend from my early college days: Lance.

After posting my “Back soon, I promise, guys!”, I don’t think I ever returned. My protest about how my personal page had become too personal essentially just killed the whole project.

The other 10% I know, by the way, is that I had kept working on whatever the hell A.S.I.A. RPG was. I only know this because later on I had a post in a later blog about moving version 2 to the new blog and making it version 3.

The mind truly boggles.

A Rough Timeline

A rough timeline of this era seems to be…

  • 1997: Doug’s RPG Page is started
    • 1998 (Summer): Ghostlight is added
  • 1998 – 1999: Doug’s RPG Page is changed to “Wyrm’s Play”
    • Most original pages were hidden but maybe not deleted
    • Split into three sub-pages, all RPG focused
      • wYrmhole Games: Essentially the OG page
      • Electronic Portals: Console and Computer Videogames
      • Circle of Paladins: Roleplaying as a Christian
  • 1999?: Rebranded back to just wYrmhole Games
    • Other elements dropped? No clue.

Next Chapter

The next part, assuming I write it, will focus on a massive reversal of the mindset that destroyed Doug’s RPG Page: I rebranded the project to “Doug’s Webpage of Doug.” I am only half joking.

Version 2 of My Simple Sub Cipher

Ok, less “version 2” and more like “version 0.7” but still, I can engage in a bit of version-inflation if I want.

With Edits #1 and #2, below, I am considering this done. Which means version 0.7, aka Version 2, has become Version 3, aka Version 1. It’s complicated.

In last night’s post about a simple inline substitution cipher to help obscure text, so that I can avoid spoilers or keep text otherwise hidden until a reader acts to confirm their intent, I had only the most basic pieces worked out. It was past my bedtime and I was sort of speed typing both the code and the post.

This morning I worked out a few more basic features:

  • I have built a very basic “Simple Inline Substitution Cipher” page to handle the creation of these materials. It is 100% free for you to use; honestly, consider it all CC0. It’s mediocre code for an extremely niche topic.
  • The cipher now should be able to pass through double- and single-quotes without breaking the HTML or Javascript.
  • Rather than paragraph tags, I am using span tags. This should help with adding stuff like single- to few-word elements inline with the rest of the text and no longer requires every instance to be a full paragraph of text.
  • Spans are given a “click me” type title to help generate tooltips where supported.
  • Spans are given a class of “gentleSubCipher” to allow for CSS to better improve their usability.
  • Spans are now given a random five-character ID to immensely reduce the issue of multiple IDs matching and causing potential breakage.

It looks something like this:

CsrS rS E wESrI VHEveAV SsPbr4T Pjj QsV "D8PQVS" E4z 'Sr4TAV D8PQVS' E4z QsV PQsVf 4Vb VAVvV4QS QsEQ sEyV wVV4 EzzVz.

There are still quite a few limitations:

  • Just to clarify, it is not and will not be secure.
  • It still does not ignore HTML elements within that span. See EDIT #2, below.
  • It does not work with feed readers and I need to test how to make it work better with screen readers.
  • Accented characters are passed through unchanged, which is not quite a problem yet.
  • Multiple posts with it might result in a problem where version shifts break previous posts when seen on the front page, category page, etc.

For the latter, the idea might be to create unique scripts per post. I’ll have to give it some thought and testing. See EDIT #1 below; this is now fixed.

The next version will build in some logic for disabling (rather than ignoring) HTML elements (Edit #2). That should be fairly trivial for the types of things I need to handle, but we’ll have to see.

EDIT 1: I went ahead and added a “slug” function to the document so that each post will have a likely unique bit of script so that later updates should not break previous ones. That’s now built into the page. If nothing is added to the “slug” field it just outputs to the default name which can be fine for pages that will not have other versions of the script shown. It also creates a slug="SLUGNAME" as part of the span tag just in case I ever need to go back and redo something so I have all the pieces in place.

EDIT 2: After some thought, I realized that any kind of code that tries, even in principle, to load/render rewritten HTML is a bad thing. Rather than ignoring those elements that I might type, the script essentially just breaks them into unrendered HTML so folks can get the gist without my substitution cipher being able to inject anything, even accidentally. Tests showed that a whoopsy could lead to weird stuff happening on the page, so this helps to protect it in general.

For example:

gG g bUQM KoiMbumZN Ymbu J 2mb oG <Mi>MiQuJKmK</Mi> mb mK 2MbbMS GoS iM bo dKM *JKbMSmKvK* 2MXJdKM mb XJdKMK qMKK XoZGdKmoZ. g SMJqqU Lod2b g YodqL iJvM JZUbumZN J <J uSMG="ubbQK://YYY.YUSimK.Xoi">qmZv bo iU oqL uoiMQJNM</J> 2db mG g Lo GoS KoiM SMJKoZ GoSNMb JZL bUQM bumK, mb uJK J YJU oG GJmqmZN odb oG buM mKKdM.
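For the curious, that “breaks them into unrendered HTML” behavior likely boils down to escaping the angle brackets before the cipher ever runs. This is my reconstruction of the idea (the actual script may differ), using Python’s html.escape for the escaping:

```python
import html

def defang(text: str) -> str:
    """Escape ampersands and angle brackets so inline tags become
    plain, unrendered text; even a scrambling whoopsy can then never
    round-trip back into live HTML."""
    return html.escape(text, quote=False)

print(defang("it is better to use <em>asterisks</em> for emphasis"))
# it is better to use &lt;em&gt;asterisks&lt;/em&gt; for emphasis
```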

Inline Substitution Ciphers to Play with Semi-Hidden Text

jLh 903mO moh0 m3 S6 SnE 0so Vhshn0Sh 3SnmsV3 6Y ShFS SL0S 0nh 903mO0MME ‘Lmoohs ms QM0ms 3mSh’ (C2r!) 9tS 0M36 Vhshn0MME nhO6Vsmd09Mh 03 LtU0s-BnmSShs ShFS 9E nhS0msmsV UtOL 6Y SLh QtsOSt0Sm6s, BLmSh3Q0Oh, 0so 6SLhn hMhUhsS3. jLm3 B6tMo hs09Mh Uh, Y6n ms3S0sOh, S6 BnmSh ShFS SL0S O6sS0msho 3Q6mMhn3 6n L0o 6SLhn 03QhOS3 s6S msShsoho S6 9h nh0o 9E jLh 7MV6nmSLU BLmMh 3tnn6tsoho 9E ShFS SL0S m3 QhnYhOSME LtU0s- 0so U0OLmsh-nh0o09Mh. a O6tMo 30E ‘J6t = Q66 Q66 Lh0o’ BmSL6tS SL0S 9hmsV msohFho. uE NhhQmsV mS 0 36UhBL0S 3mUQMh 3t93SmStSm6s OEQLhn, SLm3 Uh0s3 SL0S mS h03E Y6n Qh6QMh S6 Sn0s3M0Sh h1hs BmSL6tS 0sE 3OnmQS 0so 0MM6B3 mS S6 9h nhM0Sm1hME tso6sh 0S 0 M0Shn o0Sh.


If you click the text above, it should “solve out” to a line of text that reads:

The basic idea is to try and generate strings of text that are basically ‘hidden in plain site’ (PUN!) but also generally recognizable as human-written text by retaining much of the punctuation, whitespace, and other elements. This would enable me, for instance, to write text that contained spoilers or had other aspects not intended to be read by The Algorithm while surrounded by text that is perfectly human- and machine-readable. I could say ‘You = poo poo head’ without that being indexed. By keeping it a somewhat simple substitution cypher, this means that it easy for people to translate even without any script and allows it to be relatively undone at a later date.

And then if you click it again (without refreshing the page), it should do essentially nothing. This is my basic first pass on coming up with an idea I have had for Dickens of a Blog since way back. I am unsure when I first posited it but likely around 2006 or 2007.

The idea was simple: set aside some portion of the text in an otherwise open-to-read blog post {e.g., spoilers, info semi-hidden from scrapers, bits that otherwise might be triggers} through a simple enough cipher or baseline encryption that solving it would not become hostile to Doug’s happiness if keys/etc were lost.

The Code Behind It

Version 1 is above. Behind it, I have a fairly simple Python script:

from random import sample

def scramble_AlphaNum(oldAlphaNum):
    # Shuffle the alphabet into a one-off substitution key.
    return ''.join(sample(oldAlphaNum, len(oldAlphaNum)))

alphaNum = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"
newAlphaNum = scramble_AlphaNum(alphaNum)

text = "The basic idea is to try and generate strings of text that are basically 'hidden in plain site' (PUN!) but also generally recognizable as human-written text by retaining much of the punctuation, whitespace, and other elements. This would enable me, for instance, to write text that contained spoilers or had other aspects not intended to be read by The Algorithm while surrounded by text that is perfectly human- and machine-readable. I could say 'You = poo poo head' without that being indexed. By keeping it a somewhat simple substitution cypher, this means that it easy for people to translate even without any script and allows it to be relatively undone at a later date."
txet = ""
paraName = "demo01"

# Substitute each in-alphabet character; everything else passes through.
for t in text:
    if t in alphaNum:
        txet = txet + newAlphaNum[alphaNum.index(t)]
    else:
        txet = txet + t

# Emit the clickable paragraph with the key and target ID baked in.
output = (f"<p id=\"{paraName}\" onclick=\"gentleScramble('{newAlphaNum}', "
          f"'{paraName}'); this.onclick=null;\">{txet}</p>")

print(output)

Right now, I have to manually edit the file to have the paragraph, div, or span ID and then the contents. It would be fairly trivial to generalize this further. Running that, it spits out a paragraph tag that looks like:

<p id="demo01" onclick="gentleScramble('70u9wOboihzYgVXLamGHINxMAUrsk6CQfcpnv3jS2tR1DB4FJEZdeyWqP8Tl5K', 'demo01'); this.onclick=null;">jLh 903mO moh0 m3 S6 SnE 0so Vhshn0Sh 3SnmsV3 6Y ShFS SL0S 0nh 903mO0MME 'Lmoohs ms QM0ms 3mSh' (C2r!) 9tS 0M36 Vhshn0MME nhO6Vsmd09Mh 03 LtU0s-BnmSShs ShFS 9E nhS0msmsV UtOL 6Y SLh QtsOSt0Sm6s, BLmSh3Q0Oh, 0so 6SLhn hMhUhsS3. jLm3 B6tMo hs09Mh Uh, Y6n ms3S0sOh, S6 BnmSh ShFS SL0S O6sS0msho 3Q6mMhn3 6n L0o 6SLhn 03QhOS3 s6S msShsoho S6 9h nh0o 9E jLh 7MV6nmSLU BLmMh 3tnn6tsoho 9E ShFS SL0S m3 QhnYhOSME LtU0s- 0so U0OLmsh-nh0o09Mh. a O6tMo 30E 'J6t = Q66 Q66 Lh0o' BmSL6tS SL0S 9hmsV msohFho. uE NhhQmsV mS 0 36UhBL0S 3mUQMh 3t93SmStSm6s OEQLhn, SLm3 Uh0s3 SL0S mS h03E Y6n Qh6QMh S6 Sn0s3M0Sh h1hs BmSL6tS 0sE 3OnmQS 0so 0MM6B3 mS S6 9h nhM0Sm1hME tso6sh 0S 0 M0Shn o0Sh.</p>

I add that to my document via Custom HTML. The first string is the randomized a-z/A-Z/0-9 run of alphanumeric characters from the common American English alphabet (etc.). It is re-randomized on each run of the script.

Then at the bottom of the page, I insert another Custom HTML section with this Javascript:

<script>
function gentleScramble(newAlpha, para) {
	// The canonical ordering; must match the Python side exactly.
	const AlphaNum = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789";
	const newAlphaNum = newAlpha;

	let victim = document.getElementById(para).textContent;
	let solution = "";

	// Map each scrambled character back to its original; pass through
	// anything (spaces, punctuation) that is not in the alphabet.
	for (let v = 0; v < victim.length; v++) {
		const foundIt = newAlphaNum.indexOf(victim[v]);
		if (foundIt != -1) {
			solution = solution + AlphaNum[foundIt];
		} else {
			solution = solution + victim[v];
		}
	}

	document.getElementById(para).textContent = solution;
}
</script>

That takes the paragraph and the substitution key, runs the decode on the first click, and then the “this.onclick=null” fires to stop it from glitching out if a reader spam clicks it.

As it runs through, it checks for characters in the defined “alphaNum” and ignores any that are not included. Those that are included just get re-subbed back to their originals.

Voila.

Before you say that this is fairly insecure: that is kind of the point. It is not trying to deeply encode the text; it is more just playing at gently hiding the text in a somewhat breakable pattern.

Current Issues

The first issue is that generating the content is pretty hands-on, which is not 100% a problem for me, but if I have several of these elements it will start to wear.

The solution I’m going with is to build a quick tool that allows for different element types {div, p, span} and a bit more of a GUI, probably just a quick HTML page with text areas and buttons.

The second issue is that it only accepts characters in the a-z/A-Z/0-9 ranges. If I am typing in French or other languages, characters with diacritical marks will be ignored. This means that “ä” will show up as “ä”, unchanged, in the enciphered text. It’s not a deal breaker, since the bulk of the text will be gently scrambled, but it can lead to potential weirdness.

The solution to this could be either to scan the contents and generate a shortened “alphaNum” that only includes the characters actually present (while ignoring all the punctuation) OR to create a new diaAlphaNum that includes a separate list of diacritically marked characters.

I’m not sure which I prefer. I think I prefer to not worry about that so much.
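If I ever do go the diaAlphaNum route, it might look something like this sketch, where the accented set is just my guess at a starter list and everything rides on one shared shuffle:

```python
from random import sample

# The base alphabet from the original script, plus a hypothetical
# starter set of accented characters; anything in this string gets
# scrambled, everything else passes through untouched.
ALPHA = ("AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"
         "àâäçéèêëîïôöùûü")

def make_cipher(alphabet=ALPHA):
    """Return (encode, decode) closures sharing one random shuffle."""
    shuffled = "".join(sample(alphabet, len(alphabet)))
    def encode(text):
        return "".join(shuffled[alphabet.index(c)] if c in alphabet else c
                       for c in text)
    def decode(text):
        return "".join(alphabet[shuffled.index(c)] if c in shuffled else c
                       for c in text)
    return encode, decode

encode, decode = make_cipher()
assert decode(encode("déjà vu, naïveté")) == "déjà vu, naïveté"
```

The round trip at the bottom is the important property: anything added to the alphabet scrambles and unscrambles cleanly, and anything left out passes through both ways.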

The final issue, at a glance, is that any HTML elements inside that element {em, a, strong} would likewise be translated, which at best would simply glitch them and at worst could in theory create broken HTML if it happens to stumble upon a different element than intended.

My solution to this problem is just to not do any of that.

There is a slight non-issue in that feed readers and such will likely break trying to help, but that’s mostly ok for the moment. Not because of driving clicks or any such thing, just that earlier attempts to build CSS/Javascript spoiler-type solutions sometimes resulted in said spoilers being clearly visible to feed readers. It does possibly interfere with screen readers, and that is a much bigger problem, but I’ll have to test it.

Possibilities for Expansion

My possible end goal for this, as a checklist:

  • Perhaps using a Vigenère cipher instead of a simple substitution one [because I prefer those],
  • Making it at least “smart” enough to ignore interior HTML elements, and
  • Generating a bit of styling that makes it more obvious what the reader is supposed to do, possibly including a failsafe type option if the reader has all javascript blocked, etc.
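For that first bullet, a Vigenère over the same alphanumeric range would shift each character by a repeating key rather than applying one fixed substitution. A rough sketch, with the key being a throwaway example and nothing here wired into the actual page yet:

```python
ALPHA = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"

def vigenere(text: str, key: str, decode: bool = False) -> str:
    """Shift each in-alphabet character by the position of the next key
    character; pass everything else through, as in the substitution
    version. The key only advances on enciphered characters."""
    out, k = [], 0
    for c in text:
        if c in ALPHA:
            shift = ALPHA.index(key[k % len(key)])
            if decode:
                shift = -shift
            out.append(ALPHA[(ALPHA.index(c) + shift) % len(ALPHA)])
            k += 1
        else:
            out.append(c)
    return "".join(out)

scrambled = vigenere("You = poo poo head", "WyrmKey42")
assert vigenere(scrambled, "WyrmKey42", decode=True) == "You = poo poo head"
```

The appeal over the single-shuffle version is that repeated letters stop mapping to the same ciphertext character, which makes casual frequency-reading a little harder while staying just as undoable.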

The Reclamation of Dickens of a Blog

In the first post on this blog, I talked about the updates to the “old” wyrmis.com and how I consider this to be a continuation — eight years later — of the initial project which that blog & website represented. Which was, in principle, a continuation of various blogs and websites that I had been working on for years.

BY THE WAY: I plan on having a part two to this post: more or less a trip down memory lane type thing. I have screenshots and everything. I’ll come back and link it when I post it. If I post it, I suppose, but I think I will. I’m old enough now that reminiscing is nice.

Here is what the old site looked like around the time it was first launched (I think this screenshot would be more in the 2007-era after it had already gone through some evolution):

Then, around the time it was abandoned (2016), it looked more like:

In that decade in between, while the general color schemes and rough layout had remained the same, the back-end had grown a lot more complicated — involving multiple custom scripts in Python and PHP and a more complicated file structure — while also growing more out of date with modern web practices.

To put it in perspective, while that version was well after the general “Blogging” trend had started, it was an outgrowth of a website that had actually started back in 1997. One of its core issues was that it was dragging along a lot of content and structure as it became less and less a 90’s style website and more a 2000’s era blog.

If you note, the first image shows the “journal” section was off-site — first on Livejournal and later on Blogger — because it wasn’t until later that an adequate but generally poor blogging “software” was integrated by myself into the existing page. By “integrated,” I mean that I coded it and then spent entirely too long making it act like the rest of the website.

If there is one lesson you take from this: Make your tools work for you, do not work for your tools. I violated that principle. It shows.

The Problem(s) As It Stands

There are a few problems with what to do with the old site. The main ones being:

  • The HTML, CSS, PHP, Python, Images and essentially all the rest are a hodge-podge of 1997-2017. Twenty years of various web eras.
    • Even though the bulk of the site was at least partially updated and badly polished throughout the 2010s, enough issues remained that it was nearly impossible to edit as-is into anything truly fitting a post-2010 website.
    • In fact, some of the multitudinous layers of bandages actually hurt the repair because different eras of pages have different enough code that anything but a hand-coded fix is likely to break other portions.
  • A lot of the content would only be saved for purely archival purposes (we’ll dub this The Librarian Principle). Links are likely broken, and fixing them would take longer than any value it would add for anyone. Timely content is no longer anywhere near timely. Trends and discussions are rooted in their era, which is not that long ago but is still over a decade back.
  • The counterpart to The Librarian Principle is The Embarrassment Principle. Past-Doug was a weird boy. Some of the things I typed up because I thought they were funny at the time are decidedly not. A few of the points for which I argue vehemently are no longer anything like a stance I would take. By the time we get to the wyrmis.com-era, that is less true, but…man. I think I might make a third part for this. One where I lecture myself. It’s not quite suited to going into any more for this post.

Is It Worth Solving?

In a word…

I don’t really know. Like I said, I just volunteer here. At least some of it seems worthy. A few bits. Possibly even the majority, really.

I need to write a script that takes all the pages on the site and then just randomly picks five of them to read. See if the Librarian beats out the Embarrassed via dice roll.
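That script could be very small. A minimal sketch, assuming the old site lives in a local directory of HTML files (the path here is hypothetical):

```python
import random
from pathlib import Path

# Hypothetical local copy of the old site
SITE_ROOT = Path("~/sites/wyrmis.com").expanduser()

def sample_pages(root: Path, count: int = 5) -> list[Path]:
    """Collect every HTML page under the site root and pick `count` at random."""
    pages = sorted(root.rglob("*.html")) + sorted(root.rglob("*.htm"))
    return random.sample(pages, min(count, len(pages)))

if __name__ == "__main__":
    if SITE_ROOT.exists():
        for page in sample_pages(SITE_ROOT):
            print(page)  # read these five; tally Librarian vs. Embarrassed
```

Run it once per sitting and keep a running score of which principle wins.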

The truest answer I can give at this time is that the best path forward is to “fix” the big pieces and then figure out which of the smaller individual pieces to retain.

It feels dishonest to delete all the portions with which I now disagree or dislike, so I’ll work on something like a balance. A triage. Some will get instantly deleted if they simply do not fit (“fit” is doing some heavy lifting, take it as you will), are too time-locked to be worth saving, or any other heavy complaint I might have. Some will get instantly saved and enshrined into place as a part of my decades online. Some will get updated and possibly ported over here.

I think the lines I’ll draw in the sand are that stuff that is good enough to stay as-is will stay where it is (Type A), stuff that could be better might get brought over to this blog and updated (Type B), and stuff that I don’t feel like saving will either join Type A with minimal fixing or simply disappear. This means the old site/blog will have a mix of highs and lows, with the middle joining my new writings.

A hint towards verisimilitude actually masking a large scale “reclamation” project.

The Mechanics of It All

Going back up to The Problem(s) As It Stands, the first bullet point and its sub-bullets are the meat of the mechanical issues. There are two broad solutions:

  1. Develop a new schema and then port the old bits into the new bits.
  2. Strip the old schema off and just retain the core bits.

I am currently opting for a third option: a bit of both. The new schema is largely just a minimal CSS-and-jQuery working frame that delivers the text in a readable (both human-readable and machine-readable, albeit fairly bland) manner but otherwise ignores most of the intricacies of what came before. Headers {e.g., H1, H2} and body content {e.g., P, LI} items will be mostly HTML-standard with a few variations.

In practical terms, this means I am:

  1. Taking the old page (currently one at a time).
  2. Replacing the HEAD content with a newer, improved version.
  3. Deleting all the old menu, footer, counter, and similar code not in the main body.
  4. Replacing the title/banner portion with a simplified version.
  5. Adding in new DIVs that act as placeholders for repeating content {e.g., menus, site-wide idents} and then using jQuery to fill them.
  6. Generally going through the body and making sure things mostly work. Deleting a few portions that no longer fit the criteria above.
  7. Uploading that to the site.
  8. In some cases, adding redirects to “close” a portion of the site or to make up for things that will now be missing.
  9. Eventually, going through and deleting the pages that fit into neither Type A nor Type B.
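Step 8’s redirects do not need anything fancy. A sketch of one approach, generating plain meta-refresh stub pages in Python (the paths and targets here are hypothetical, not the real site’s):

```python
from pathlib import Path

# Hypothetical mapping of retired pages to their new homes
REDIRECTS = {
    "journal/old-index.html": "https://example.com/blog/",
    "reviews/horror.html": "https://example.com/blog/tag/horror/",
}

STUB = """<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="refresh" content="0; url={target}">
  <link rel="canonical" href="{target}">
  <title>Moved</title>
</head>
<body><p>This page has moved to <a href="{target}">{target}</a>.</p></body>
</html>
"""

def write_stubs(site_root: Path, redirects: dict[str, str]) -> None:
    """Overwrite each retired page with a small meta-refresh stub."""
    for rel_path, target in redirects.items():
        stub_path = site_root / rel_path
        stub_path.parent.mkdir(parents=True, exist_ok=True)
        stub_path.write_text(STUB.format(target=target), encoding="utf-8")
```

The rel=canonical line is a nod to the whole Blogger canonical saga: it tells crawlers which copy is the real one. Server-side 301s would be cleaner if the host allows them.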

I have already worked out how to get the new-schema pages, and the site as a whole, onto HTTPS and to be more responsive. In theory, I can write a script that will do #2, #3, and #5 for me, though I’ll likely have to do the rest by hand.
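Such a script could start as something like the sketch below. Everything in it is hypothetical (the file names, the comment markers, the placeholder IDs), and in practice each era of pages would need its own patterns rather than one regex:

```python
import re

# Hypothetical shared HEAD for the new schema
NEW_HEAD = """<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/css/new-schema.css">
<script src="/js/jquery.min.js"></script>
<script src="/js/site.js"></script>
</head>"""

# Placeholder DIVs that a jQuery script would later fill with repeating content
PLACEHOLDERS = '<div id="site-menu"></div>\n<div id="site-ident"></div>\n'

# Hypothetical comment markers around old menu/footer/counter blocks
OLD_BLOCKS = re.compile(
    r"<!-- (?:menu|footer|counter) -->.*?<!-- /(?:menu|footer|counter) -->",
    re.DOTALL)

def convert(html: str) -> str:
    """Steps 2, 3, and 5: swap the HEAD, strip old chrome, add placeholders."""
    # Step 2: replace the old HEAD wholesale
    html = re.sub(r"<head>.*?</head>", NEW_HEAD, html,
                  count=1, flags=re.DOTALL | re.IGNORECASE)
    # Step 3: delete marked menu/footer/counter blocks
    html = OLD_BLOCKS.sub("", html)
    # Step 5: drop the placeholder DIVs in right after <body>
    return re.sub(r"(<body[^>]*>)", r"\1\n" + PLACEHOLDERS, html,
                  count=1, flags=re.IGNORECASE)
```

A real HTML parser would be sturdier than regexes, but for twenty years of hand-coded markup, era-specific patterns plus a manual pass is probably the honest answer anyway.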

I also will be adding at least a temporary link to this post near the bottom of pages to explain to folks why things are happening. Ironically, it will only show up on the pages I have partially fixed, but so it goes.

Why “Reclamation”?

Just to wrap this up: why am I calling it a reclamation?

It just feels right as a term. There were years of myself in that website and blog. Lots of memories. Lots of creative output. In theory it could stay as is (online or just on my personal storage devices), but I like the idea of retaining some of it. More than that. Making it usable again. Giving credit to past-Doug where credit is due. Also holding my past self to a higher standard.

I am me because of his idiocy. I just wish someone had fussed at him like I am about to fuss at myself.

There is also a complicated side-aspect: some of those posts have been taken a bit out of context, or copied elsewhere, or subjected to all the other sorts of things that can happen to websites across decades. Cleaning it up and improving its general SEO-ness helps establish the site as a part of its own record.

What Kind of Time-line Are We Looking At?

As for the question of how long this will take? I have only one answer…

Hello, is this thing on?

It is nice to talk to you again, Space Pilgrims.

The very last post I made to the old version of Dickens of a Blog was “I, This Thinking Thing”. That was August 2016. That means it has been over nine years since I’ve made a real post under that branding.

Today, I went through and created a new [possibly temporary] front page to the wyrmis.com site that looks a bit like this:

It mostly directs people here, to The Doug Alone, and to the [still very much being finalized] Doug Talks Weird. Those two and this site are the new “Dougiverse” [pronounced “Dougie Verse”].

While Doug Alone has been brewing for over a year now, and Doug Talks Weird dates back to something like 2014 YouTube videos, I have spent a good amount of the past two weeks sorting and trying to rebuild my online identity so that I can start posting and sharing things without relying on “more traditional” social media. A strange sentence to type.

So Many Words to Say

I reached a point those nine years ago where I wanted to shut up for a minute. Then, around two-to-three years later, I kind of wanted to take it back. However, the time it would take to rescue the old blog (from younger-Doug’s rambles as much as from younger-Doug’s hand-coded functions, left behind by something like ten years of a changing web) always made me shy away. I would post online here or there, share pictures here or there, but mostly I just withdrew.

However, I am at a time again where I would like to just have a spot to ramble. So this blog is here, now. It is not a replacement for the old one. It is more a continuation, in a way that is a bit more responsive, a bit less intensive (I would sometimes have to go into the Python back-end of the old one and custom-tweak things to keep posts working, and had to remember dozens of custom commands, tools, and pieces), and hopefully a bit more reader-friendly without so many baked-in Dougisms.

It Will Take Time

That being said, it will probably be a week or two at least before the page even looks the way it is going to look. I’m going to try and not sweat it too much.

As for today, I have just spent five hours getting everything set up to the point where I can post this. I am an hour behind on eating lunch and still need to do my daily workout and shower first. Well, maybe not first. I’ll figure it out.

Hopefully, I’ll see you soon.

–Doug Bolden
