The third iteration of Doug Bolden's various thoughts and musings.

Tag: ranting about the algorithm

The Blogger Canonical (?m=1) Issue Revisited

If you want to just see an explanation of the issue, you can skip to THE TECHNICAL ISSUE, below. First, I get to rant a bit and give some context.

When I first returned to blogging after eight years, it was not with a traditional blog: it was with The Doug Alone PROLOGUE. It was a place for me to post notes and recaps about the solo rpg stuff I was doing.1 Only there was a problem. I actually mentioned it on my final post on that blog. Google more or less refused to index it.

It looks like it did at least briefly index a single page and then wiped it later.

Even though the blog was primarily meant as a play journal, there were elements that I wanted people to find. Only there was a primary error that kept showing up by way of explanation:

I had a vague notion of what that meant but the more I looked into it, the more I found posts by people insisting it was not an error. It was intended. It’s not up to Google to SEO for you. Maybe your blog isn’t worthy. Here’s a reddit thread with most of those things said from just a few months ago.

However, after Noism Games posted a post noting their Blogger/Blogspot traffic had just plummeted, I felt curious and looked again.

Doug Is Right: The Blogger Canonical Edition

Here’s the tl;dr: I am right. The SEO experts are wrong on this one. Neener neener.

I knew I was roughly correct. I’ve worked with a lot of different web platforms over the years and am well aware that Google is a fickle beast when it comes to promoting something (say, a one-off post about carpet beetles) over things that are more core to your blog identity (such as old posts about a variety of horror movies). However, months of Google flat out ignoring a blog with unique content was not consistent. At least a few pages would have passed The Algorithm.

Those more in the know about the technical issues probably already see where this is going. I had an idea of what the error meant, just not why Blogger/Blogspot was being hit by it. Had I cared more, I would probably have put it together earlier. Would I have still moved blogs? Oh yes. I like having my own space to play.

The Technical Issue

What’s the issue?

Webpages can have canonical tags. It’s not required. It just tells Google (and other search engine type things) which version of a page is the one you want indexed. If you are on a platform where your content might bounce from page to page, you can use it to say that this is the correct page.

EXAMPLE: You have a cooking blog. You have a set of pages with different recipes and other pages that include snippets of those recipes and you don’t want Google to send folks to the pages with only the snippets (such as a category page or a front page that shows the most recent). You prefer your recipes to be front and center. You put the canonical tag on those pages.

In the specific case of Blogger/Blogspot, there’s a bit of code that basically tells each new page to have a tag on the post itself:

<b:include data='blog' name='all-head-content'/>

One aspect of this is to drop a simple line that gives the URL and says “this one, Google” in the <HEAD>:

<link href='https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html' rel='canonical'/>

And that should be all well and good except for a technical glitch on Google’s side. Google does not scan the blog like a person on a home computer would. It scans largely as a mobile device. And Blogger/Blogspot, a GOOGLE PRODUCT, tries to be helpful by serving up a ?m=1 version of the page. Old themes did not have a native mobile version. Newer ones do, but the artifact from Ye Olde Times is still there.

Which means that Google gets a link like this for the page linked above:

https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html?m=1

You can likely see where this is going. If you click on it, it is identical to the previous page, except the rel='canonical' is not pointing to that link; it points to the .html version, not the .html?m=1 version.

This means that for every Blogger/Blogspot page scanned, Google sees a site constantly serving up alternate pages, and because the ?m=1 keeps persisting, it constantly fails to settle on the canonical pages.
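To make the mismatch concrete, here is a rough sketch of the normalization that would resolve it: treat the ?m=1 variant as the same page by dropping the m parameter. This is nothing Google actually runs, just an illustration, and the function name is mine:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_mobile_param(url):
    """Drop Blogger's m=1 query parameter so the URL matches the
    rel='canonical' target. Illustrative only; not Google's code."""
    parts = urlsplit(url)
    # Keep every query parameter except the mobile marker
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "m"]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_mobile_param(
    "https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html?m=1"))
# -> https://dougalone.blogspot.com/2025/09/beginning-to-migrate-some-content-to.html
```

If Google normalized like that before comparing against the canonical tag, the two URLs would collapse into one page.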

What’s the Fix?

Unfortunately, the two primary fixes are both on Google engineers and since this has been brewing for a few years, I have no idea if they will fix it. Hopefully so, because Blogger/Blogspot is a nice all-in-one blog for people who don’t want to fiddle too hard and just want to get their content out there.

FIX #1 would be for Google to not treat ?x=y as wholly different pages, at least in the case of mobile pages where the canonical link has identical content. I appreciate there are lots of cases where the query string does mean different content, but there should be a way to prevent this one.

FIX #2 would be for Blogger/Blogspot to stop appending the ?m=1 to mobile pages. There are better ways to handle that. That feels like an artifact from 2010 era internet. Back when you had completely separate mobile sites. Ah, I remember those days unfondly.

What can we do as users of the product? I’m not sure. If you look, there are suggestions for Javascript workarounds. I am attempting to use the script at this page. Go gently into that night and double check before you use it, yourself.

I also did try updating my robots.txt file to tell Google to ignore ?m=1 pages. Will it work? I don’t know. I’m not precisely holding my breath. If I remember to check in a couple of months and it has worked, I’ll let you know.

User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Disallow: /share-widget
Disallow: /*?m=1
Allow: /
Sitemap: https://dougalone.blogspot.com/sitemap.xml

Obviously, if you want to use that you want to change the final line to be whatever your blog’s address is. I’ve seen variations of that across multiple posts so I don’t know where it originated. Apparently older Blogger blogs had a baked in robots.txt but mine didn’t. I had to add it whole cloth.
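For the curious: Google’s documented robots.txt matching treats * as “any run of characters” and $ as an end anchor, which is why /*?m=1 should catch the mobile variants. Here is my rough approximation of that matching logic, not a full robots.txt parser:

```python
import re

def robots_pattern_matches(pattern, path):
    """Approximate Google-style robots.txt path matching: '*' matches
    any run of characters, '$' anchors the end, everything else is
    literal. A sketch, not a complete parser."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    # robots.txt rules match against the path plus query string
    return re.match(regex, path) is not None

print(robots_pattern_matches("/*?m=1", "/2025/09/some-post.html?m=1"))  # True
print(robots_pattern_matches("/*?m=1", "/2025/09/some-post.html"))      # False
```

So the ?m=1 variants get disallowed while the plain .html pages stay crawlable, which is exactly the split we want.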

Let’s see what the outcome of this double approach might be.

NOTE: It is possible that Google will eventually scan it via a non-mobile-first scanner and make all this a non-issue. It’s just that 16 months seems like a fair time to run a test.

  1. There is a paradox of solo play where a lot of folks, myself included, have a strong urge to share it with someone. The initial idea was not a blog. I thought about streaming some stuff on Youtube. Since I ended up figuring out a lot of mistakes, tweaking a lot of notions, and so forth: I am glad I went for a format that did not involve me just sitting there confused and sweaty on camera. ↩︎

Just deleted most of my Patreon follows, including the free ones

This morning I got a message from a Patreon Creator that was a simple “Hey” but based on previous interactions, I know any response to it would result in this person asking for more money. Let’s call them Person K.

I one time, now years ago, actually did send them some money outside of Patreon. I eventually said I wasn’t going to send them any more and, after a few exchanges, stopped replying. Over the next few years, they sent me a lot:

I obfuscated it because I’m not interested in real naming-and-shaming but that should give you an idea. Each of those blue-boxes is a message, more or less. Most are friendly. Some are a bit insistent. All are basically asking for money. I skipped a few because I think I had enough to establish the point.

Not a single Doug-reply is in there. I left their Patreon a bit back, including the “free tier” [they sent me a “bye Doug :'(” response]. Then they kept messaging me. Not so frequent that I cared all that much.

Today was just a different sort of day, though. The kind of day where I was ready to delete most of the flack off my digital landscape.

I figured out how to block Person K — which gave me a message that Patreon had removed me from their page, so I guess somehow I was a zombie there — and gave a pretty big think about how I wanted to use & engage with Patreon if I kept using it. The final answer, after around thirty total seconds: not much. Very nearly none.

My problems with…well, sort of with Patreon but buckle up because Doug’s about to go off

I have always been a moderate- to lower-tier user, even at peak. There are lots of reasons why I have never gotten deeper into it. Let’s come up with something like a starter three really quick (# given is roughly the order I’d put the problem):

(5) The interface is fairly poor for a website that is one of the major backbones of the indie creator scene.

(3) It quickly gets costly. While backing a couple of Creators is not a whole lot of cash, it is easy to end up backing 10+ and seeing a monthly bill rivaling old school cable bills. Especially with how many Patreons have that stupid “$20+ a month to get your name at the end of my videos” thing.

“I’d like to thank DickMaster2000, the Might Gooble, Tom the Tominator…”

I personally don’t tend to engage with content on a subscription basis, ever. (4) I do things in little bursts.

This means I would back a podcast, listen to some of their backlog, wait a few months (paying the whole time to not use them), and then do another backlog. At this point I might leave or I might wait another few months to pick up another backlog.

If I ever left, if I ever downgraded, I might be losing out on months of content that I could have accessed as long as I kept paying.

Which brings me to a fourth reason which is #2 in the final list, though this is less Patreon-specific and more the whole damned thing that is happening right now:

(2) Business models that promote FOMO [fear-of-missing-out] are inherently problematic: freemium memberships, gachas, crowdfunding with backer-exclusives. Even when they enable some creators to make special content for their best supporters, there are very few safeguards on the backer side, and they push creators to organize their work around this “value added” model.

FOMO is a billion-dollar industry driver across its many facets and a major slice of a lot of the modern hobby landscape. Apps that allow extra features, sites with minor upgrades, games with a few bonus aesthetics, gacha pulls, overspending on crowd-sourcing for extra features, member videos, etc.

I am not necessarily blaming all content-creators. Some do try to take care of their content-consumers. Some are in a place where this is the best way for them to publish their content. Some work very, very hard to make it worthwhile.

And, to clarify: exclusive content is not necessarily evil, no more than having a unique painting for sale at an art festival is evil, but when combined with the structure of the modern content marketplace, it has to be careful.

These massive third parties that run the websites and portals make it a constant focus for content-creators [from big media empires to smaller creators] to drive content-consumers to enter into a buy-in relationship. Break the old game with new characters. Make your character look more unique. Get a campaign exclusive t-shirt that you might never wear. A bonus chapter for your favorite book series. An exclusive series of videos shot in the director’s bedroom!

Come inside, friends, here is exclusive!

Which is where we get to #1 in the ever growing list:

(1) Business models that thrive on parasocial relationships, pseudo-communities, and consumer addiction are inherently evil.

In many cases, they force consumers to spend a lot of time and effort to keep engaging with these communities and hobbies. Not just with the central creators but also the other members of the community, including trophies for heavy interaction and fake incentives to share memberships and similar addiction-driving behaviors.

We all lose (maybe not the platform owners)

These last two feed on each other. Creators are driven into increasingly less-profitable time-sinks to push a business model that has the real capability of driving consumers into feeling actively responsible for the well-being of their favorite creators.

That latter point cannot be stated loud enough. Whether it is time [like, comment, subscribe, share] or actual money and effort, our relationship to content creators is in a terribly strange place now. With many smaller creators having no other real options but to encourage the same predatory behavior that enables other entities [larger content creators and platforms] to also feed upon those same consumers.

Platforms like Youtube and Twitch have created a new type of rock star for us all to want to be. One with the doors kicked wide open. Only, the rules keep changing. The revenue keeps dropping. The user experience gets worse. Creators start tacking on Patreons, memberships, donation drives, subscription drives, and all sorts of behaviors that take away from the core experience that justifies the content creator even being on the damned platform to begin with.

Too often, your success is not about whether you are good or talented or just in it for fun and having a good time. Over and over the message is driven home: success is doing exactly the sort of thing that increases profits for the platform owners, the revenue handlers, and all the people who use them for advertisement. Keep your fans engaging so their data can be more widely shared with entities that are barely required to even admit they are in the food chain.

At best, it is a terrible stop-gap for a broken creator-consumer relationship where a few entities own so much of what we can consume while more indie folk are constantly trying to stay afloat [and here comes GenAI to tighten the screws further while eating the indie creations to learn how to emulate them].

At worst, this is an active abuse of psychological principles that have plagued humanity all the way back to our hunter-gatherer tribal roots. The need for community. The need for recognition. The need to provide. The fear of scarcity.

[Recap] The list in order of importance and slightly expanded

  1. Business models that thrive on parasocial relationships, pseudo-communities, and consumer addiction are inherently evil.
  2. Business models that promote FOMO [fear-of-missing-out] are inherently problematic.
  3. Patreon quickly gets costly if you support more than a small number of Creators or feel the need [see #1 and #2] to engage at a higher tier.
  4. I do things in little bursts, which systems like Patreon take advantage of: you either engage constantly or you generate a backlog where you keep paying to avoid missing the content you already “own.”
  5. The interface is fairly poor for a website all about connecting Creators to their consumers, which again means you have to engage frequently or spend time navigating past other temptations.

Um…Doug? We were talking about Patreon…

Right. RIGHT. Sorry, I get a bit ranty when I have a headache.

Also, like…when I don’t have a headache. Just, you know, in general.

The above thoughts had been on my mind for a while. The three “about Patreon” parts (#3, #4, and #5) are really why I just never could enjoy Patreon, personally: the UI, the cost, and the way I actually like enjoying the things I enjoy.

I didn’t like going to the website very much. I refused to get the app. I would get notifications and sometimes actually follow the link to get whatever file or post it was about. I would sometimes skip a month or two and just miss stuff. Every time I had a backlog I would just get frustrated trying to figure out which of the stuff I had “paid” for was actually available [note: about that paying…it’s complicated for such a model].

I still kept it up for a small handful of creators, some just a month or two, because I liked to support them. What’s that, did I feel responsible for them? Yeah, kind of. That is part of the problem, see? You start to feel like you, the viewer, are somehow beholden to pre-pay for content you may or may not enjoy because a lot of those content-creators are nice people with a dream.

However, when Person K from the first section of this post contacted me, it was a breaking point.

I went through a list.

Every Patreon I followed, paid or not, that I primarily enjoyed off-Patreon, I instantly unfollowed.

If I liked their content on Patreon but was only there for short glimpses into the background “behind the scenes” type commentary [i.e., one that played, inadvertently or not, against my sense of FOMO], I unfollowed.

If I was just there to support them for a bit and had already accomplished that, I unfollowed.

If I was only keeping one around to eventually get around to getting the content to which I had already subscribed but hadn’t actually consumed, I unfollowed. Yes, I lost all that content.

And on a personally selfish level: was I getting my money and time back or more? If not, I unfollowed.

Finally, was it sparking the hell out of some joy…

…if not? Yep.

There were times these points clashed. There were some that actually sparked joy but had exclusive tiers I didn’t want to bother with. Some were worth it but I would rather engage with them elsewhere.

Absolutely none of the people I unfollowed today were honestly bad actors (Person K is the closest to an exception but I can understand wanting money). They were all lovely creators. Just Patreon and all those points above showed up on a day when I had a headache.

The two which remain + some bonus shout outs

To end this on a kind of positive note, here are the two that survived all the cuts:

  • Witch House Media: I have been following them since their HPPodcraft days and they put out regular, good content about a niche genre that I adore.
  • Tana Pigeon | Mythic: I use Mythic a lot and I love reading the magazine. While I do get to take part in some polls and such and ask questions and whatnot, the Patreon is well worth the fee since I would have spent that money on the books and zines anyhow. It also lets me get some news about something that is a major hobby of mine. Excellently run.

Two that I did not keep for various reasons but did deeply appreciate are Dean Spencer Art and Brandon Scott. Dean Spencer puts out some of the best stock art for content creators and has regular posts and engagement. I just would rather go back to buying the pieces I will use, when I use them. Brandon Scott makes wonderfully creative stuff. He is the most likely candidate for someone I will go back and refollow once my headache clears.

Bonus shout out: Cracking the Cryptic. Lots of interactions, lots of content. I just reached a point I’d rather watch them on Youtube and buy their games/books.

Credits

The “Empty Tunnel”: Photo from Getty Images via Unsplash+ License.

500 Day Reading Streak: I would like to thank the constant gamification of everyday pleasures! Also, my mom…

Last night I hit a 500-day reading streak on my Kindle.

Which is to say on my Kindle App because I don’t think my Kindle, not even my newer Colorsoft one, has any sort of streak/days-of-reading/Kindle Challenge type screen. Maybe it does. I’m not going to look for it.

That’s neat though, hitting that. Only, as you can likely tell from the fact that I have read just 31 books on Kindle this year [roughly 3 a month], the numbers don’t quite line up. With the move and all, it’s been a rough year for reading a lot.

I have maintained the act of looking at pages on a screen in a prescribed manner. I am the best.

Four quick thoughts and then to my morning workout with me! Why this streak is a lie…

The Streak Is a Lie Because: it’s actually longer…

The real total is something like 800 days. Twice over the past 2-3 years, the system has essentially not counted days when I have 100% read something. The last hiccup, apparently 500 days ago, was after I had spent a couple of hours finishing the back half of some book.

I remember being irate at the time because not only had I read for some time, but because I had the book in my library clearly marked as finished and had submitted a rating through the app. The “Finished Date” and presumably the “Rating Date” would have been for the day that the same app was claiming I had skipped reading.

Part of the reason I got to 500-days this time is because I was initially fussy about that and then it just became a habit.

I don’t recall the time before, but I remember being irate at that time, too.

The Streak Is a Lie Because: it only tracks the bare minimum…

I don’t know what all it actually tracks, not really. Is just opening the app enough? Just opening a book? The truth is that at least 100 of those 500 days were me opening the app or my Kindle (etc.) and just reading for maybe 3 to 5 minutes. I would guess my average duration per day would not be all that high.

It is nice to have a gentle prod to keep up some reading because reading is a habit you have to nurture. It just might be better if I could set a minimum threshold [e.g., 10 pages, 20 minutes] for a day to actually count.

The Streak Is a Lie Because: it only counts books-on-Kindle…

Probably half my reading, or more, in that whole time period was via physical books. Which means I either have to do the bare minimum app-opening from above, or I have to get a book on both Kindle and paper and then move the Kindle version forward as I read.

I have done a bit of both. Both feel silly.

The Streak Is a Lie Because: the constant gamification of everyday pleasures is a poison…

In this case, the streak is not so much a lie as a constant external stressor to stay addicted to an app for reasons only tangentially related to the purpose of the app. Reading some is not hard for many of us but reading regularly is hard. Much like diet apps and exercise apps and productivity apps and language apps and many others: having this gamification added to them can help you to hit goals. That is true.

However, the fact that so many apps have such streaks and such baked in is mentally draining. We can no longer just play our games. Now, we have to play our games daily for shiny lights and particle effects to keep blessing us. Skip a day and you might just receive a meaningless warning. Our gentle hobbies to survive the soul-crushing march of modern life have been turned into just another stress for us to endure.

The whole time our personal data and habits are being scraped and digested by The Algorithm. Using the app is giving them permissions to dig deeper into our lives.

And we don’t even get paid. Hell, we pay for it.

Anyhow, off to see if I can hit 1000-days.

Inline Substitution Ciphers to Play with Semi-Hidden Text

jLh 903mO moh0 m3 S6 SnE 0so Vhshn0Sh 3SnmsV3 6Y ShFS SL0S 0nh 903mO0MME ‘Lmoohs ms QM0ms 3mSh’ (C2r!) 9tS 0M36 Vhshn0MME nhO6Vsmd09Mh 03 LtU0s-BnmSShs ShFS 9E nhS0msmsV UtOL 6Y SLh QtsOSt0Sm6s, BLmSh3Q0Oh, 0so 6SLhn hMhUhsS3. jLm3 B6tMo hs09Mh Uh, Y6n ms3S0sOh, S6 BnmSh ShFS SL0S O6sS0msho 3Q6mMhn3 6n L0o 6SLhn 03QhOS3 s6S msShsoho S6 9h nh0o 9E jLh 7MV6nmSLU BLmMh 3tnn6tsoho 9E ShFS SL0S m3 QhnYhOSME LtU0s- 0so U0OLmsh-nh0o09Mh. a O6tMo 30E ‘J6t = Q66 Q66 Lh0o’ BmSL6tS SL0S 9hmsV msohFho. uE NhhQmsV mS 0 36UhBL0S 3mUQMh 3t93SmStSm6s OEQLhn, SLm3 Uh0s3 SL0S mS h03E Y6n Qh6QMh S6 Sn0s3M0Sh h1hs BmSL6tS 0sE 3OnmQS 0so 0MM6B3 mS S6 9h nhM0Sm1hME tso6sh 0S 0 M0Shn o0Sh.


If you click the text above, it should “solve out” to a line of text that reads:

The basic idea is to try and generate strings of text that are basically ‘hidden in plain site’ (PUN!) but also generally recognizable as human-written text by retaining much of the punctuation, whitespace, and other elements. This would enable me, for instance, to write text that contained spoilers or had other aspects not intended to be read by The Algorithm while surrounded by text that is perfectly human- and machine-readable. I could say ‘You = poo poo head’ without that being indexed. By keeping it a somewhat simple substitution cypher, this means that it easy for people to translate even without any script and allows it to be relatively undone at a later date.

And then if you click it again (without refreshing the page), it should do essentially nothing. This is my basic first pass on coming up with an idea I have had for Dickens of a Blog since way back. I am unsure when I first posited it but likely around 2006 or 2007.

The idea was simple: set aside some portion of the text in an otherwise open-to-read blog post {e.g., spoilers, info semi-hidden from scrapers, bits that otherwise might be triggers} through a simple enough cipher or baseline encryption that solving it would not become hostile to Doug’s happiness if keys/etc were lost.

The Code Behind It

Version 1 is above. Here’s what happens when I run a fairly simple Python script:

from random import sample

def scramble_AlphaNum(oldAlphaNum):
    # Shuffle the alphabet to get a fresh one-off substitution key
    return ''.join(sample(oldAlphaNum, len(oldAlphaNum)))

alphaNum = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"
newAlphaNum = scramble_AlphaNum(alphaNum)

text = "The basic idea is to try and generate strings of text that are basically 'hidden in plain site' (PUN!) but also generally recognizable as human-written text by retaining much of the punctuation, whitespace, and other elements. This would enable me, for instance, to write text that contained spoilers or had other aspects not intended to be read by The Algorithm while surrounded by text that is perfectly human- and machine-readable. I could say 'You = poo poo head' without that being indexed. By keeping it a somewhat simple substitution cypher, this means that it easy for people to translate even without any script and allows it to be relatively undone at a later date."
txet = ""
paraName = "demo01"

for t in text:
    try:
        # Swap characters that are in the alphabet for their scrambled twins
        txet = txet + newAlphaNum[alphaNum.index(t)]
    except ValueError:
        # Punctuation, whitespace, etc. pass through untouched
        txet = txet + t

output = """ <p id=\"""" + paraName + """\" onclick="gentleScramble('""" + newAlphaNum + """', '""" + paraName + """'); this.onclick=null;">""" + txet + """</p>"""

print(output)

Right now, I have to manually edit the file to have the paragraph, div, or span ID and then the contents. It’s fairly trivial to generalize this further. Running that, it spits out a paragraph tag that looks like:

<p id="demo01" onclick="gentleScramble('70u9wOboihzYgVXLamGHINxMAUrsk6CQfcpnv3jS2tR1DB4FJEZdeyWqP8Tl5K', 'demo01'); this.onclick=null;">jLh 903mO moh0 m3 S6 SnE 0so Vhshn0Sh 3SnmsV3 6Y ShFS SL0S 0nh 903mO0MME 'Lmoohs ms QM0ms 3mSh' (C2r!) 9tS 0M36 Vhshn0MME nhO6Vsmd09Mh 03 LtU0s-BnmSShs ShFS 9E nhS0msmsV UtOL 6Y SLh QtsOSt0Sm6s, BLmSh3Q0Oh, 0so 6SLhn hMhUhsS3. jLm3 B6tMo hs09Mh Uh, Y6n ms3S0sOh, S6 BnmSh ShFS SL0S O6sS0msho 3Q6mMhn3 6n L0o 6SLhn 03QhOS3 s6S msShsoho S6 9h nh0o 9E jLh 7MV6nmSLU BLmMh 3tnn6tsoho 9E ShFS SL0S m3 QhnYhOSME LtU0s- 0so U0OLmsh-nh0o09Mh. a O6tMo 30E 'J6t = Q66 Q66 Lh0o' BmSL6tS SL0S 9hmsV msohFho. uE NhhQmsV mS 0 36UhBL0S 3mUQMh 3t93SmStSm6s OEQLhn, SLm3 Uh0s3 SL0S mS h03E Y6n Qh6QMh S6 Sn0s3M0Sh h1hs BmSL6tS 0sE 3OnmQS 0so 0MM6B3 mS S6 9h nhM0Sm1hME tso6sh 0S 0 M0Shn o0Sh.</p>

I add that to my document via Custom HTML. The first string is the randomized a-z/A-Z/0-9 alphanumeric characters of the common American English alphabet (etc). It is randomized per running of the script.

Then at the bottom of the page, I insert another Custom HTML section with this Javascript:

<script>
function gentleScramble(newAlpha, para) {
	const AlphaNum = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789";
	const newAlphaNum = newAlpha;

	let victim = document.getElementById(para).textContent;
	let solution = "";

	for (let v = 0; v < victim.length; v++) {
		// Map scrambled characters back to the original alphabet;
		// anything not in the key passes through untouched.
		let foundIt = newAlphaNum.indexOf(victim[v]);
		if (foundIt != -1) {
			solution = solution + AlphaNum[foundIt];
		} else {
			solution = solution + victim[v];
		}
	}

	document.getElementById(para).textContent = solution;
}
</script>

That takes the paragraph’s text and the substitution key, runs the reversal on the first click, and then the “this.onclick=null” stops it from glitching out if a reader spam-clicks it.

As it runs through, it checks each character against the defined “alphaNum” and ignores any that are not included. Those that are included get substituted back to their originals.
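If you want to sanity-check the scheme outside the browser, the same logic can be mirrored in Python. This is just my sketch for testing that a scramble round-trips, not part of the published script:

```python
from random import sample

ALPHA_NUM = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"

def scramble(text, new_alpha):
    """Apply the substitution: alphabet characters swap to their
    counterpart in new_alpha; everything else passes through."""
    return "".join(new_alpha[ALPHA_NUM.index(c)] if c in ALPHA_NUM else c
                   for c in text)

def unscramble(text, new_alpha):
    """Reverse the substitution, mirroring the JavaScript decoder."""
    return "".join(ALPHA_NUM[new_alpha.index(c)] if c in new_alpha else c
                   for c in text)

key = "".join(sample(ALPHA_NUM, len(ALPHA_NUM)))
msg = "You = poo poo head"
assert unscramble(scramble(msg, key), key) == msg  # round-trips cleanly
```

Because the key is a permutation of the same alphabet, decoding is just looking the substitution up in the other direction.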

Voila.

Before you say that this is fairly insecure: that is kind of the point. It is not trying to deeply encode the text; it is more just trying to play at gently hiding the text in a somewhat breakable pattern.

Current Issues

The first issue is that it is pretty hands on to generate the content, which is not 100% a problem for me but if I have several of these elements it will start to wear.

The solution I’m going to go with is to build a quick tool that allows for different element types {div, p, span} and a bit more of a GUI, probably just a quick HTML page with text areas and buttons.

The second issue is that it only accepts characters in the a-z/A-Z/0-9 ranges. If I am typing in French and other languages, characters with diacritical marks will be ignored. This means that “ä” will show up as “ä” in the enciphered text. It’s not a deal breaker since the bulk of the text will be gently scrambled, but it can lead to potential weirdness.

The solution to this could be either to scan the contents and generate a shortened “alphaNum” that only includes the characters actually present [while ignoring all the punctuation] OR to create a new diaAlphaNum that includes a separate list of diacritically marked characters.

I’m not sure which I prefer. I think I prefer to not worry about that so much.
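For reference, the first option might look something like this: scan the text and build the cipher alphabet from whatever letters and digits actually appear, diacritics included. A hypothetical sketch, not something I have wired into the tool:

```python
def alphabet_from_text(text):
    """Build a cipher alphabet from the letters and digits that actually
    appear in the text, so characters like 'ä' get scrambled too.
    Punctuation and whitespace are deliberately left out of the alphabet."""
    return "".join(sorted({c for c in text if c.isalnum()}))

# Accented characters land in the alphabet alongside plain ones
print(alphabet_from_text("Élan, café, 42!"))
```

The downside is that every post would carry its own alphabet, so the key string in the onclick handler changes shape per post.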

The final issue, at a glance, is that any HTML elements inside that element {em, a, strong} would likewise be translated, which at best would simply glitch them and at worst could, in theory, create broken HTML if it happens to stumble upon a different element than intended.

My solution to this problem is just to not do any of that.

There is a slight non-issue that feed readers and such will likely break in trying to help, but that’s a bit ok for the moment. Not for driving clicks or any such thing, just in that earlier attempts to build CSS/Javascript spoiler type solutions sometimes resulted in said spoilers being clearly visible to feed readers. It does possibly interfere with screen readers and that is a much bigger problem, but I’ll have to test it.

Possibilities for Expansion

My possible end goals for this, as a checklist:

  • Perhaps using a Vigenère cipher instead of a simple substitution one [because I prefer those],
  • Making it at least “smart” enough to ignore interior HTML elements, and
  • Generating a bit of styling that makes it more obvious what the reader is supposed to do, possibly including a failsafe-type option if the reader has all JavaScript blocked, etc.
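On the first point of that checklist, the Vigenère version would be a small change: instead of one fixed mapping, the shift rotates through a keyword. A rough sketch over the same alphaNum (the keyword "DICKENS" is just an example, and the function names are mine):

```python
ALPHA_NUM = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789"

def vigenere(text, key, decode=False):
    """Shift each alphabet character by the position of the next key
    character; anything outside the alphabet passes through untouched.
    The key pointer only advances on characters that get shifted, so
    encode and decode stay in lockstep."""
    out, k, n = [], 0, len(ALPHA_NUM)
    for c in text:
        i = ALPHA_NUM.find(c)
        if i == -1:
            out.append(c)  # punctuation, whitespace, etc.
            continue
        shift = ALPHA_NUM.index(key[k % len(key)])
        if decode:
            shift = -shift
        out.append(ALPHA_NUM[(i + shift) % n])
        k += 1
    return "".join(out)

secret = vigenere("You = poo poo head", "DICKENS")
assert vigenere(secret, "DICKENS", decode=True) == "You = poo poo head"
```

It stays pencil-and-paper breakable, which keeps the “gently hidden, relatively undoable” spirit of the whole thing.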

Ghost and The King: Dreamin’

Here we go. Here’s a decent “first real post” besides the classic “hello, world” type posts with which blogs are stricken.

Every once in a while, I get into a mood for new music. It tends to be the kind of thing where I am beholden to The Algorithm™ to actually find anything. Bandcamp. Youtube. Maybe Google searches. A few other places.

One trick is to find a song/musician I liked. Search that/them. Look for other stuff recommended or discussed around them. Dip toes in. Keep going. Follow the trails. “Truffle hunting,” we sometimes called it while working at the library. Find a resource, see what it linked and cited, follow those and keep digging until you have a better scope.

Except, you know… The Algorithm™.

At any rate, one of those songs that I drummed up earlier this year while cruising through a mix of Durry’s build up to This Movie Sucked, trying to find new Japanese pop music, and trying to find new Belgian/French music (prior to the move) was Ghost and the King’s “Dreamin'” off their 2024 self-titled album.

I quite enjoyed it. Pleasant tune. Fair lyrics. I liked the set up of the video [including some very low budget Legend of Zelda cosplay]. I miss that personal vibe for Youtube. It feels like a passion project.

A handful of times since then I’d go back and listen to it. I even bought the album. I consider the album as a whole fair. “Dreamin'” is still my favorite song, but you can sample the tracks “Don’t Often Sing the Blues,” “Nightingale,” and “Give a Damn” if you want to get a vibe for the rest.

With my most recent rewatch, I saw that it only had 250 views (on Youtube; not sure about Spotify, since I don’t really hang out there anymore unless someone insists upon it). Which seems low. I wanted to go ahead and give a shout out. Left a comment. All in all, just trying to poke The Algorithm™.

It is currently the last video they posted and I don’t know much else about it or them. Their online presence seems to be mostly social media. I’ll leave that to others to share.
