2024-06-16

HTML iconography


➣  This post was meaningfully revised at 2024-06-17 @ 02:35 PM EDT. The previous revision is here, diff here. (See update history.)

My web skillz are very old-school.

I only recently learned we're not supposed to use <tt> anymore. (<code> is what the kids use.) We're not supposed to use <a name="whatev"> for our in-document link targets. We should just use <a id="whatev">.

(To be fair, it's pretty cool the targets don't have to be <a> tags any more.)
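
For anyone else catching up, the old and new idioms look something like this:

    <!-- old school -->
    <tt>some_code()</tt>
    <a name="whatev">Whatever</a>

    <!-- what the kids use -->
    <code>some_code()</code>
    <h2 id="whatev">Whatever</h2>

    <!-- either way, you link to the target like this -->
    <a href="#whatev">jump to whatever</a>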

Anyway, back in my day, to add little icons that might represent your website, we just added a 16x16 pixel /favicon.ico file in some weird, nonstandard Microsoft image format.

Thank you Internet Explorer, the very first evil internet silo that kids these days have never encountered!

My ancient "interfluidity main" site has one of those old-school /favicon.ico files, and I'm not messing with it. But I thought I'd add fresh icons for this site and interfluidity drafts. One 16x16 icon file isn't enough for the modern world. Your site might need an icon on a phone, a tablet, a watch, whatever. Android and Apple devices treat icons differently. Firefox, I discovered, chooses icons differently than other browsers.

The best resource I found to help make sense of the brave new world of website icons was an article by Mathias Bynens.

That article's last update was in 2013, so maybe it's not current? It's a decade newer than my old habits, so hey.

I used Affinity Photo to take the photo I use as an avatar on social media and label it "tech" for this website. For prettiness as icons on mobile devices, I also needed to give it rounded corners. I wanted to select a rectangle, round the corners, then invert the selection and delete to transparent, to make the rounded corners.

That's basically what I did, but there's nowhere to set a corner radius on a straight-up rectangular selection in Affinity Photo.

However, there is a rounded rectangle drawing tool, which draws on its own layer, and — very useful to know! — there is a Selection From Layer menu item, that converts a shape drawn in a layer to a selection. Once I had my selection, invert and delete was no problem and I got my rounded corners.

I gather you can skip rounding corners yourself, if you only care about Apple devices. Apple defines apple-touch-icon and apple-touch-icon-precomposed, and if you supply the not-precomposed version, devices should round corners and maybe add a drop shadow to "compose" your icon.
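
(If you do want to let Apple do the compositing, you just supply the plain flavor, something like this, with whatever path you actually use:)

    <link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon-180x180.png">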

Most resources I looked at suggested taking control, so you know what you will get and can use the same icons cross-platform. So that's what I did, and I rounded my own corners, yee-haw!

Then I exported my image as a PNG in all of the sizes recommended by the Bynens article, stole his recommended HTML snippet, and added it — with some modification, see below! — to the main layout of my unstatic-based static-site generators.

    <!-- icons / favicons -->

    <!-- we just want the squared-corner image with no overlays for traditional favicon uses at tiny sizes -->
    <!-- swaldman added, ick, firefox scales down the biggest size for its tab icon, so we use the graphic we want for small sizes as the largest... -->
    <link rel="icon" type="image/png" sizes="500x500" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-500x500.png"> 
    <link rel="icon" type="image/png" sizes="32x32" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-32x32.png">     <!-- swaldman added, for standard favicon size -->
    <link rel="icon" type="image/png" sizes="16x16" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-16x16.png">     <!-- swaldman added, for standard favicon size -->
    <link rel="icon" type="image/png" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-57x57.png">                   <!-- swaldman added, for small icons by default -->

    <!-- at bigger sizes, we overlay a bit of text -->
    <!-- icons as recommended by https://mathiasbynens.be/notes/touch-icons -->
    <!-- For Chrome for Android: -->
    <link rel="icon" sizes="192x192" href="<( iconLoc.relative )>/interfluidity-wave-tech-192x192.png">
    <!-- For iPhone 6 Plus with @3× display: -->
    <link rel="apple-touch-icon-precomposed" sizes="180x180" href="<( iconLoc.relative )>/interfluidity-wave-tech-180x180.png">
    <!-- For iPad with @2× display running iOS ≥ 7: -->
    <link rel="apple-touch-icon-precomposed" sizes="152x152" href="<( iconLoc.relative )>/interfluidity-wave-tech-152x152.png">
    <!-- For iPad with @2× display running iOS ≤ 6: -->
    <link rel="apple-touch-icon-precomposed" sizes="144x144" href="<( iconLoc.relative )>/interfluidity-wave-tech-144x144.png">
    <!-- For iPhone with @2× display running iOS ≥ 7: -->
    <link rel="apple-touch-icon-precomposed" sizes="120x120" href="<( iconLoc.relative )>/interfluidity-wave-tech-120x120.png">
    <!-- For iPhone with @2× display running iOS ≤ 6: -->
    <link rel="apple-touch-icon-precomposed" sizes="114x114" href="<( iconLoc.relative )>/interfluidity-wave-tech-114x114.png">
    <!-- For the iPad mini and the first- and second-generation iPad (@1× display) on iOS ≥ 7: -->
    <link rel="apple-touch-icon-precomposed" sizes="76x76" href="<( iconLoc.relative )>/interfluidity-wave-tech-76x76.png">
    <!-- For the iPad mini and the first- and second-generation iPad (@1× display) on iOS ≤ 6: -->
    <link rel="apple-touch-icon-precomposed" sizes="72x72" href="<( iconLoc.relative )>/interfluidity-wave-tech-72x72.png">
    <!-- For non-Retina iPhone, iPod Touch, and Android 2.1+ devices: -->
    <link rel="apple-touch-icon-precomposed" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-57x57.png">

    <!-- end icons / favicons -->

A complication emerged, in that my text-labeled icons looked busy and bad, and the text was illegible, when rendered at very small sizes. So you'll note that, for small sizes, I use interfluidity-wave-blank-square files rather than interfluidity-wave-tech. (I thought the very small icons looked better with square corners as well.)

But Firefox kept picking up the largest <link rel="icon" ... > and downsampling from that, rather than downloading the nearest or nearest-larger icon.

So I added the image I want used only for small icons also as a very large icon.

    <link rel="icon" type="image/png" sizes="500x500" href="<( iconLoc.relative )>/interfluidity-wave-blank-square-500x500.png"> 

Less quirky browsers hopefully never choose this one to render from, because there is always a better-sized icon available. But Firefox does choose this one and downsamples it to render its very small icons-in-a-tab, so the trick gets rid of the ugly, illegibly scaled text in tiny icons under Firefox.

(It does seem a bit wasteful to trick Firefox into downloading 500x500 images to render at 16x16 or 32x32, but if it smartens up, it can download icons prerendered in just those tiny sizes!)

Anyway, that was what I did to add icons to this site and to drafts.

Please let me know if there are much better ways!


Update (17-June-2024):

Carlana Johnson points me to a great article by Andrey Sitnik, How to Favicon in 2024: Six files that fit most needs.

For now, because I'm lazy, and because my icons are not SVG-friendly, I'm leaving things as they are.

But perhaps someday I'll make better, vector, logos and icons, rather than just repurpose my social media avatar. Then I will try out this carefully thought-out approach.

2024-06-08

Should blogs adopt the itunes:category RSS tag?


Apple organized a whole slew of standard categories, or genres, for podcasts when they defined the itunes RSS namespace. This helped the discoverability of podcasts, as podcast applications and indexers can let users search or browse by genre, or make suggestions based on the genres users seem to prefer.

Apple seems to have done a pretty good job at this. It's not obvious that "podcast genres" are meaningfully distinct from "blog genres". We could, of course, invent some analogous kind of categorization just for blogs, but why? As Dave Winer hath writ:

Fewer format features is better

If you want to add a feature to a format, first carefully study the existing format and namespaces to be sure what you're doing hasn't already been done. If it has, use the original version. This is how you maximize interop.

Podcasts got a huge lift from what was originally the blog-centric RSS format. Why haven't blogs adopted podcast-RSS best practices to get a lift right back?

There's a potential issue that some applications may use the presence of itunes RSS tags to infer that an RSS feed is for a podcast. But that's pretty dumb. If applications expecting podcasts import blogs without soundfiles because they use this heuristic, well, bad on them. They should fix that. When blogs do contain some posts with audio <enclosure/> elements, then arguably they are podcasts inter alia. Client applications should use intelligent criteria to decide what they want to consider a "podcast" or "podcast episode".

It strikes me as a good idea to make use of good ideas from the itunes (and podcast) namespace for blogs and other RSS applications.

Starting, perhaps, with itunes:category.

Apple defines itunes:category as a channel-level element that permits multiple entries (you don't have to be just one genre), and nested entries for subcategories. Seems pretty good!
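
For instance, a blog's channel might declare something like this (the categories here are just for illustration):

    <rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
      <channel>
        <!-- other channel elements -->
        <itunes:category text="Technology"/>
        <itunes:category text="Business">
          <itunes:category text="Investing"/>
        </itunes:category>
      </channel>
    </rss>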

What do you think?

2024-06-06

Neonix


➣  This post was meaningfully revised at 2024-06-06 @ 06:30 PM EDT. The previous revision is here, diff here. (See update history.)
➣  This post is expected to evolve over time. You can subscribe to ongoing updates here.

I've found ripgrep to be an invaluable tool. At some level it's just grep, but its speed and ergonomics make it something else. I find things much more quickly. In combination with projectile, it gives me a fast project-wide find, reducing one of the advantages of commercial IDEs over my humble emacs.

Today, Bill Mill points me to a command line tool called fzf which looks like kind of a command-line Swiss army knife. It certainly makes sorting through very long find . output a breeze.
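
A minimal taste, in case you haven't played with it (the find invocation is just an example; any long listing works):

    # interactively narrow down a very long file listing
    find . -type f | fzf

    # or use the selection directly, say to open the chosen file in an editor
    vim "$(find . -type f | fzf)"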

Some of Bill's scripts use a find replacement called fd, which I plan to take a look at.

I think this is an interesting trend: taking venerable UNIX command-line tools and rethinking and reimplementing them with modern languages and the decades of experience since that first, revolutionary burst of command-line creativity in the early UNIX days.

I'll let this post become a "sprout" from which I can track these kinds of tools as I encounter them.

  • fd
    A modern retake on find I haven't played with yet.

  • fzf
    A fuzzy-matching tool for interactively sorting through large command line and command completion outputs. See Bill Mill.

  • rg
    "ripgrep". A new take on grep: super fast, searches directories recursively, by default excluding .git and whatever is .gitignore-d.

I'll add more as I, um, fd them!


p.s. apparently there's a DJ called Neonix! Sorry! I'm using the, er, neologism to refer to neo-UNIX.


Update 2024-06-06: Kartik Agaram points me to Bill Mill's "modern unix tools" page. Which itself contains a link to a "Modern Unix" collection by Ibraheem Ahmed. So much to play with!

2024-06-05

Readying a blog for revision histories and sprouts under unstatic


➣  This post was meaningfully revised at 2024-06-06 @ 01:30 PM EDT. The previous revision is here, diff here. (See update history.)

I've been developing support for my take on Chris Krycho's "sprouts" against this blog. Much of that support is now built into unstatic, my library for building static-site generators. But it does also require some support from within applications of that library, from the Scala code and the untemplates of the individual site generators.

I'm going to upgrade my "drafts" blog to support revisions, diffs, and sprouts. I'll document what it takes to do that here.

Enable revision- and diff-generation in Scala code

In the object DraftsSite, the unstatic.ztapir.ZTSite that defines the site to be generated, inside the unstatic.ztapir.simple.SimpleBlog that defines the blog, add a RevisionBinder that can pull old revisions of pages and generate them into the website, and a DiffBinder that can generate diffs between successive revisions:

 override val revisionBinder : Option[RevisionBinder] = Some( RevisionBinder.GitByCommit(DraftsSite, JPath.of("."), siteRooted => Rel("public/").embedRoot(siteRooted)) )
 override val diffBinder     : Option[DiffBinder]     = Some( DiffBinder.JavaDiffUtils(DraftsSite) )

By default, SimpleBlog defines these values as None. We override them.

The RevisionBinder we are using is RevisionBinder.GitByCommit. Its constructor accepts

  1. our ZTSite;
  2. a file path (java.nio.file.Path) to the git repository in which revisions are stored, just '.' for us because the git repository is the static-site generator's working directory;
  3. a function that converts a site-rooted path (unstatic.UrlPath.Rooted) into the associated path within the repository relative to its root (as unstatic.UrlPath.Rel);
  4. A RevisionBinder.RevisionPathFinder, a function which takes a document's site-rooted path and a "revision spec" (which for this revision binder is a full-size hex git commit) and determines the path the revision should take within the site.

We omit the fourth argument because we use a default, which converts a path like /a/b/whatever.html to /a/b/whatever-oldcommit-c6e71f4d689f2b208c3eae19e647435322fa6d04.html

For a DiffBinder, we use DiffBinder.JavaDiffUtils, based on the java-diff-utils library. When we ask it to generate a diff for a path, we give it a reference to the RevisionBinder.RevisionPathFinder so it can know the filenames old versions get generated into. We also give it a DiffBinder.DiffPathFinder, which computes the pathnames of the generated diffs. Again, the DiffBinder.DiffPathFinder is omitted in our code. We rely on a default argument, which produces diff paths like /a/b/whatever-diff-72eaf9fdfebc9e627bff33bbe1102d4d250ad1d0-to-199e44561de3fd9e731a335d8b2a655f42d9bc04.html.

Now, if we ever provide update histories to any posts, copies of any old revisions referenced will be generated into the public directory of the site, as well as diffs between adjacent items in the update history.

Modify the site to generate update histories at the end of posts

It's a matter of taste, but we'll display update histories only on single-post permalink pages, not at the end of each post when concatenated together. And we won't include them as content in RSS. (Update histories do get included as additional metadata in RSS. That's built in.) SimpleBlog conveniently distinguishes between Single, Multiple, and Rss; we can just check our presentation and behave appropriately.

So... We'll

  1. Steal layout-update-history.html.untemplate from the tech blog, and bring it in as a layout of drafts. (I had to import com.interfluidity.drafts.DraftsSite.MainBlog, and modify the link in the note to point to the drafts git repository, rather than the tech repo.)
  2. Modify layout-entry.html.untemplate in drafts to bring in the new layout of update history. That turns out to be really easy, because we already have logic at the end of our entry layout to restrict addition of previous and next links to single page presentations. So all we have to do is add our update history layout just after the div for those links, but within the conditionally added region. It's literally just
    <( layout_update_history_html( input ) )>
    

    inserted just after that div, still within the conditional region.

Modify the main layout and CSS so that old revisions are visually distinct from, and link back to, current revisions

At the top of the body element of layout-main.html.untemplate, we add an empty div element called top-banner.

  </head>
  <body>
    <div id="top-banner"></div>

In current revisions, this will remain invisible and empty. But we'll add a bit of javascript to detect if we're in an old revision, and add some HTML with a link back to the current revision. If we are in an old revision, we'll also add a class called old-draft to the body element, so that we can do whatever we feel like in CSS to make the old revision visually distinct.

We use a javascript regular expression and our current location to decide if we are in an old revision.

    <script>
      document.addEventListener("DOMContentLoaded", function() {
          const regex = /(^.*)\-oldcommit\-[0-9A-Fa-f]+\.html/;
          const match = window.location.pathname.match(regex);
          if (match) {
              const b  = document.querySelector("body");
              const tb = document.getElementById("top-banner");
              b.classList.add("old-draft");
              tb.innerHTML = "You are looking at an old, superseded version of this page. For the current version, please <a href=\"" + match[1] + ".html\">click here</a>.";
          }
       });
    </script>

We adjust our main CSS to keep the top-banner div at the top of our document, when it's relevant:

body.old-draft #top-banner {
    position: fixed;
    top: 0;
    left: 0;
    width: 100vw;
    color: black;
    background-color: yellow;
    text-align: center;
    font-family: 'RobotoCondensed', 'Arial', 'Helvetica', sans-serif;
    font-variation-settings: "wght" 500;
    padding-top: 4px;
    padding-bottom: 4px;
    border-bottom: 2px solid black;
}

Also add CSS so that, when viewing old revisions, the documents look, well, old.

body.old-draft {
    padding-top: 1em;
    background-color: #F3F5DA;
    color: #6E7FD9;
    font-family: 'GabrieleD', 'Courier';
}

(These choices were inspired by the TT2020 image here, although ultimately I went for Gabriele, because the TT2020 file sizes were very large.)

Add a prologue to posts with revisions or that generate sprout RSS

When a post is a revision or a sprout, we want a prologue that indicates as much, with links to the prior revision, the update history, and the sprout RSS.

I'm too lazy to describe what it took to add that in detail, but here's a nice, concise commit. Check out the diff.

Miscellaneous tweaks

I don't want to have to import UpdateRecord whenever I want to add update histories to entries, so I added it as an extra import via my untemplate customizer in my mill build file, build.sc:

  override def untemplateSelectCustomizer: untemplate.Customizer.Selector = { key =>
    var out = untemplate.Customizer.empty

    if (key.inferredPackage.indexOf("mainblog")>=0 && key.inferredFunctionName.startsWith("entry_")) {
      out = out.copy(extraImports=Seq("unstatic.*","com.interfluidity.drafts.DraftsSite.MainBlog","unstatic.ztapir.simple.UpdateRecord"))
    }

    out // the selector ultimately returns the (possibly customized) Customizer
  }

The "update history note" should be small, so I add to css:

.update-history-note {
    font-size: smaller;
    line-height: 100%;
}

Republish the site

Even though nothing visible should change, let's go ahead and republish the site, so that our javascript and css scaffolding for old-looking updates become available.

Test and tweak

Even though I don't have any actual new revisions to create, I added a fake revision history to the most recent post, played around in CSS with the look of the old revision until I liked it, then commented away the fake update history.

2024-06-02

Green shoots of sprouts


Erlend Sogge Heggen pointed me to a post by Chris Krycho on "sprouts".

Krycho points out that most of our online infrastructure is organized around feeds of posts, which are "published" or "announced" as finished work. But creative work naturally develops in drafts and increments. It might be best to publish at first only the barest outline of a thing, and then collaborate in the open to flesh it out and bring it forward. What we want to announce, then, are not new posts, but a beginning and then new milestones. If we can, we'd want to retain the full history of the process.

I've gone a fair distance towards implementing one version of this vision recently. My blogging infrastructure is my own static-site generation library unstatic, which is built on top of "untemplates". Untemplates are just thin wrappers around Scala functions.

Like lots of static-site generators, I write in files that are mostly markdown, with some metadata in a special header. But in the untemplate header, I literally write Scala code.

Here's a very simple example, from a recent post, "c3p0 and loom":

> val UntemplateAttributes = immutable.Map[String,Any] (
>   "Title"     -> "c3p0 and loom",
>   "PubDate"   -> "2024-03-18T22:20:00-04:00",
>   "Anchor"    -> "c3p0-and-loom"
> )

> given PageBase = PageBase.fromPage(input.renderLocation)
>
> (input : MainBlog.EntryInput)[]~()>      ### modify Title/Author/Pubdate above, add markdown or html below!
>
> I write [a lot of open source software](https://github.com/swaldman), but I've only ever really had one "hit".
> That makes me pretty sad, actually. I think some of what I've written is pretty great, and it's lonesome to be the
> sole user...

For most posts, I just copy and modify this header, and then write markdown text below. But if I want to do anything more fancy, well, I have the full Scala programming language to work with.

In order to realize a vision of "sprouts", I first implemented update histories as a simple List of UpdateRecord objects.
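
Judging from how it gets used (you'll see an example just below), UpdateRecord is roughly a little record like this. This is only a sketch; the actual definition in unstatic may differ:

    // a sketch, inferred from usage; real field names in unstatic may differ
    case class UpdateRecord(
      timestamp    : String,         // ISO-8601 timestamp of the significant revision
      description  : Option[String], // optional summary, which may embed HTML
      revisionSpec : Option[String]  // optional revision spec, e.g. a full git commit hex
    )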

The post prior to this one has already become a particularly sprouty sprout. The post documents experiments with extensions to RSS. I try stuff out, then, as often as not, I untry it. So there's lots of revising. Here's the beginning of that post, for now:

> val updateHistory =
>    UpdateRecord("2024-06-02T00:25:00-04:00",Some("Drop <code>iffy:timestamp</code>. We can just reuse <code>atom:updated</code> for the same work."),Some("199e44561de3fd9e731a335d8b2a655f42d9bc04")) ::
>    UpdateRecord("2024-06-01T21:35:00-04:00",Some("Add initial take on tags related to updates and revisions."),Some("72eaf9fdfebc9e627bff33bbe1102d4d250ad1d0")) ::
>    UpdateRecord("2024-05-25T23:00:00-04:00",Some("Add JS/CSS so that prior revisions are visually distinct from current."),Some("13de0232319ceab2f830591c318089d18cbec78d")) ::
>    UpdateRecord("2024-05-24T00:25:00-04:00",Some("Drop tags <code>iffy:when-updated</code> and <code>iffy:original-guid</code>, bad appraoch to updates."),Some("394986cb8d9c57f567d324e691a44d50102101ce")) ::
>    Nil
>
> val UntemplateAttributes = immutable.Map[String,Any] (
>   "Title"         -> "The 'iffy' XML namespace",
>   "PubDate"       -> "2024-05-13T04:10:00-04:00",
>   "Permalink"     -> "/xml/iffy/index.html",
>   "UpdateHistory" -> updateHistory,
>   "Sprout"        -> true,
>   "Anchor"        -> "iffy-xml-namespace"
> )

> given PageBase = PageBase.fromPage(input.renderLocation)
>
> (input : MainBlog.EntryInput)[]~()>      ### modify Title/Author/Pubdate above, add markdown or html below!
>
> I want to do a lot of things with RSS that require
> extensions of RSS (as the RSS spec [foresees](https://www.rssboard.org/rss-specification#extendingRss))...

Each UpdateRecord marks a discretionary choice, to declare a "significant" or "material" update. Most updates are not! There are typically several minor revisions, typo fixes, and tweaks, between these noted updates.

When I decide a revision is serious, I provide a timestamp, and, optionally, a description and a "revision spec". The description is self-explanatory. The revision spec is arbitrary. When I define a site, I optionally provide a RevisionBinder that is able to convert a revision spec and a path into the contents of a resource within the referenced revision.

There might be many different implementations of RevisionBinder, each with its own kind of revision specification. The one that exists for now just pulls resources from git commits.

The revision specs shown are just git commits, referenced in full-length hex. With each "major" update, I provide the hex for the commit prior to my update, the one it is superseding. That usually will not be the same commit as the "major" update prior, because most "major" updates are followed by a series of minor tweaks.

So, as I work on an evolving document, I note significant updates by adding records to a list, and I include the list I build in UntemplateAttributes, the standard Map in which I also define Title, PubDate, Author, etc. (The example posts omit Author, because this site has a default author — me! — if that field is left unset.)

The update history converts pretty directly into a user readable history. Let's take a look!

(If you want to see the template that lays out the update history, you can find it here.)

To realize the "sprout" vision, we need more than this. We need some means by which people can follow the evolutions of the document. Ideally, when a "major update" is published, subscribers to the blog should see something about that in their feeds. The RSS feeds we generate include <atom:updated> tags, but as Krycho points out, very few feed readers do anything with that.

You'll note that in addition to the UpdateHistory, we've added to UntemplateAttributes a key called Sprout. When the site generator encounters a mapping of Sprout to true on an untemplate, it generates an additional RSS feed that will track the update history of just this post. For our example post, you can find that feed here.

The template that lays out blog entries looks for an update history, and if it is there, checks for prior revision references, diffs, and the sprouts flag. It prepends to the post a brief note with links to the prior revision (if it's available), to the update history, and to the post-specific RSS feed. Go ahead, check out the beginning of our example post.

Going forward, I think I will add the capability of generating "synthetic" posts when there are new major updates. They'd just be formulaic announcements of the updates. They might never appear on the blog front page. But they would be included in the RSS feed. I'd add metadata to the RSS items for these posts, indicating that they are synthetic and describing them, so that tools like feedletter can make intelligent choices about whether and how subscribers should receive notifications about these posts.

But that is all still to come!

For now, we have update histories with links to prior revisions and diffs, and dedicated RSS feeds by which dedicated collaborators can stay abreast of our burgeoning sprouts.

2024-05-13

The 'iffy' XML namespace


➣  This post was meaningfully revised at 2024-06-17 @ 11:50 PM EDT. The previous revision is here, diff here. (See update history.)
➣  This post is expected to evolve over time. You can subscribe to ongoing updates here.

I want to do a lot of things with RSS that require extensions of RSS (as the RSS spec foresees).

The URL http://tech.interfluidity.com/xml/iffy/ will mark an XML namespace in which some of these extensions will be defined.

The conventional prefix associated with this namespace will be iffy.

The current version of this namespace is v0.0.1-SNAPSHOT.

(-SNAPSHOT signifies that the version preceding that suffix has not yet been finalized. Much more to come!)



Element — iffy:completeness

Solely a channel level element

Contains one of the following four values:

  1. Ping
  2. Metadata
  3. Content
  4. Media

iffy:completeness describes the completeness that clients should expect of RSS item elements.

  • Ping makes the least commitment. Items need not include a guid element, or any elements at all beyond RSS' requirement that at least one of title or description be present. RSS documents have completeness Ping by default. Any or all items may meet the requirement for a higher completeness level, but no promises or commitment is made beyond the base specification.

  • Metadata commits that each item MUST include a guid element, as well as meeting the base requirements for an RSS item.

  • Content commits that each item includes its full content, either inside its description tag or via an extension such as content:encoded, suitable for independent rendering by any client capable also of resolving references to externally linked media. No limitation is placed on whether the full content is placed in a description element, in content:encoded, or in some other extension.

  • Media augments Content by embedding attachments to subsidiary media inside the RSS document. Subsidiary media does not include all potential links, just links which share a prefix with the current RSS document, which by default means all links subsidiary to the parent of the RSS document as specified in an atom:link.

    More information on this soon, when iffy:attachment is defined.

The four values represent nested, hierarchical levels of commitment. Ping commits to nothing more than the spec requires. Media makes every commitment promised by the prior three levels, and an additional one.

If not specified, no commitment is made, and the feed should be considered Ping.

Example:

<?xml version='1.0' encoding='UTF-8'?>

<rss version="2.0" xmlns:iffy="http://tech.interfluidity.com/xml/iffy/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>tech — interfluidity</title>
    <atom:link type="application/rss+xml" rel="self" href="https://tech.interfluidity.com/feed/index.rss"/>
    <iffy:completeness>Content</iffy:completeness>
    <!-- Other channel elements -->
    <item>
      <!-- Other item elements -->
    </item>
  </channel>
</rss>


Element — iffy:diff

When a subelement of iffy:update

MUST contain a URL, URI, or IRI of a human-reviewable diff between the current update and the final minor revision of the prior update (or the initially published post, if the current update is the first declared update).

Example:

<iffy:diff>https://tech.interfluidity.com/xml/iffy/index-diff-394986cb8d9c57f567d324e691a44d50102101ce-to-13de0232319ceab2f830591c318089d18cbec78d.html</iffy:diff>

See also iffy:update-history example.


Element — iffy:hint-announce

When a subelement of item

Represents a hint to RSS consumers that "push" — announce, rebroadcast, or notify — items, as to whether this item should be so pushed. Consumers are free to ignore this hint or make use of it as they wish.

MUST contain an iffy:policy element, whose value MUST BE one of

  • Always — the item should always be notified
  • Never — the item should never be notified
  • Piggyback — the item should be notified as part of digests or other announcements of multiple items, but should not constitute its own announcement.

MAY contain an iffy:restriction element, which represents an application-specific restriction over the consumers to which it is addressed. No restrictions are placed on the content of the iffy:restriction element. Applications can define restrictions as they see fit.

An iffy:hint-announce element with NO iffy:restriction or an empty iffy:restriction tag should be interpreted as the intended default for ALL applications not addressed by an iffy:hint-announce with a more specific restriction.

Multiple iffy:hint-announce elements may be placed within a single item, provided that only one has an omitted or empty iffy:restriction, and each iffy:hint-announce element containing an iffy:restriction contains a unique restriction. Each iffy:restriction SHOULD apply to nonoverlapping application-specific contexts. If that is not the case, how applications prioritize or respond to conflicting iffy:hint-announce elements whose restrictions both apply must be determined by the application.

Example:

<iffy:hint-announce>
  <iffy:policy>Piggyback</iffy:policy>
</iffy:hint-announce>
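
An item might also combine hints, for example opting out of one application-specific channel while piggybacking everywhere else. The restriction value below is invented; restrictions are whatever applications define them to be:

<iffy:hint-announce>
  <iffy:policy>Never</iffy:policy>
  <iffy:restriction>email-newsletter</iffy:restriction>
</iffy:hint-announce>
<iffy:hint-announce>
  <iffy:policy>Piggyback</iffy:policy>
</iffy:hint-announce>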

Element — iffy:initial

When a subelement of iffy:update-history

MAY contain a sequence of dc:creator elements, defining the initial authorship of an item, if authorship has changed. Since the containing item should always reflect current authorship (that of the most recent revision), but no iffy:update element is defined for the initially published version, this container is required for completeness.

Example:

<iffy:initial>
  <!-- Perhaps more recent updates, and the current item, include more authors -->
  <dc:creator>First Author, Esq.</dc:creator>
</iffy:initial>

Element — iffy:policy

In general, represents a statement of some kind of policy with respect to its containing element, suggested to feed consumers for handling a feed or item.

When a subelement of iffy:hint-announce

Please see iffy:hint-announce.


Element — iffy:provenance

When an item level element

If present in an item, the element contains a sequence of one or more atom:link elements, each of whose

  • rel attribute MUST BE via
  • href attribute MUST BE the URL of an RSS feed from which the base contents of this item were drawn
  • type attribute SHOULD BE application/rss+xml

If the item from which the current item was sourced does not contain an iffy:provenance, then the current item should include just one atom:link.

If the item from which the current item was sourced does contain an iffy:provenance, then the current feed SHOULD include all of that element's atom:link elements, with the URL of the feed from which the item was sourced PREPENDED.

This will ensure the most immediate source will be the first atom:link element. The origin — or at least the source for which no further provenance is known — will be the last atom:link element.

Processors may expect a channel level atom:link element with rel="self" and type="application/rss+xml" to use as the basis for provenance in source documents. See RSS Best Practices.

Example (from here):

<?xml version='1.0' encoding='UTF-8'?>

<rss version="2.0" xmlns:iffy="http://tech.interfluidity.com/xml/iffy/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>interfluidity, all blogs</title>
    <!-- Other channel elements -->
    <atom:link type="application/rss+xml" rel="self" href="https://www.interfluidity.com/unify-rss/all-blogs.rss"/>
    <item>
      <title>Industrial policy and ecosystems</title>
      <guid isPermaLink="true">https://drafts.interfluidity.com/2024/05/11/industrial-policy-and-ecosystems/index.html</guid>
      <author>nospam@dev.null (Steve Randy Waldman)</author>
      <link>https://drafts.interfluidity.com/2024/05/11/industrial-policy-and-ecosystems/index.html</link>
      <!-- Other item elements -->
      <iffy:provenance>
        <atom:link type="application/rss+xml" rel="via" href="https://drafts.interfluidity.com/feed/index.rss"/>
      </iffy:provenance>
    </item>
  </channel>
</rss>
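
If that unified feed were in turn re-syndicated by some further aggregator, the prepending rule would produce a chain like the following, most immediate source first and origin last (only the iffy:provenance element is shown):

<iffy:provenance>
  <atom:link type="application/rss+xml" rel="via" href="https://www.interfluidity.com/unify-rss/all-blogs.rss"/>
  <atom:link type="application/rss+xml" rel="via" href="https://drafts.interfluidity.com/feed/index.rss"/>
</iffy:provenance>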

Element — iffy:restriction

In general, represents an expression of some kind of restriction over the application of its containing element.

When a subelement of iffy:hint-announce

Please see iffy:hint-announce.


Element — iffy:revision

MUST contain a URL, URI, or IRI of either a fixed past revision or the current (potentially evolving) revision of an item.

Example:

<iffy:revision>https://tech.interfluidity.com/xml/iffy/index-oldcommit-13de0232319ceab2f830591c318089d18cbec78d.html</iffy:revision>

See also iffy:update-history example.


Element — iffy:synthetic

This element, usually empty, is intended to mark channels or items that are in some sense "synthetic", rather than, um, hand-made?

When a subelement of channel

If all the items in this feed are automatically rather than human generated, however you want to define that, iffy:synthetic can mark an entire channel as synthetic, bot-produced.

Applications that include iffy:synthetic as a subelement of channel SHOULD NOT also mark individual items as iffy:synthetic, unless there is some meaningful sense in which some items are more synthetic than others. It serves no purpose to mark every item of a feed iffy:synthetic when the channel is already so marked.

When a subelement of item

Marks items as "synthetic", that is, more synthetic than other, unmarked items in the feed.

Exactly what that means is not defined, but it should be relative to the other items in the feed. If every item in a feed is automatically generated — suppose a weather feed, announcing conditions on the hour — then those items should NOT be marked iffy:synthetic, because they are the usual for the feed. The channel as a whole might be marked iffy:synthetic.

When iffy:synthetic is a subelement of item, it is intended to distinguish more automatic from less automatically produced items. It serves no purpose if it is used to mark all items.

Example:

<iffy:synthetic/>

Element — iffy:update

When a subelement of iffy:update-history

MUST contain one atom:updated element.

MAY also contain one each of an atom:summary element (a description of the update, which may embed HTML in CDATA), an iffy:revision element, and an iffy:diff element.

MAY, but usually will not, contain any number of dc:creator elements, reflecting authorship specific to this revision. By default, an update's authors are the same as the authorship of the containing item, which always reflects the current revision's authors. If authorship is evolving over time, it SHOULD be specified for every update except the most recent one. Initial authorship may be specified in an iffy:initial element.

Typo fixes, small rephrasings, and other tweaks are not expected to be recorded as distinct updates. That is, within a "single update" there may be a sequence of smaller revisions that go unrecorded. Applications that want a more forensic history might consider including and exposing every published change in version control.

See iffy:update-history example.


Element — iffy:update-history

When an item level element

SHOULD contain a sequence of iffy:update elements, in reverse chronological order, describing the history of major revisions to an item.

MAY contain one iffy:initial element.

Items containing an iffy:update-history SHOULD also include an atom:updated tag corresponding to the most recent update.

Typo fixes, small rephrasings, and other minor tweaks are not expected to be recorded as distinct updates. That is, within a "single update" there may be a sequence of smaller revisions that go unrecorded. Applications that want a more forensic history might consider including and exposing every published change in version control.

Example:

<item>
  <!-- Other item elements -->
  <iffy:update-history>
    <iffy:update>
      <atom:updated>2024-06-02T04:20:00Z</atom:updated>
      <atom:summary>
        <![CDATA[Drop <code>iffy:timestamp</code>. We can just reuse <code>atom:updated</code> for the same work.]]>
      </atom:summary>
      <iffy:revision>
        https://tech.interfluidity.com/xml/iffy/index-oldcommit-199e44561de3fd9e731a335d8b2a655f42d9bc04.html
      </iffy:revision>
      <iffy:diff>
        https://tech.interfluidity.com/xml/iffy/index-diff-199e44561de3fd9e731a335d8b2a655f42d9bc04-to-current.html
      </iffy:diff>
    </iffy:update>
    <iffy:update>
      <atom:updated>2024-06-02T01:35:00Z</atom:updated>
      <atom:summary><![CDATA[Add initial take on tags related to updates and revisions.]]></atom:summary>
      <iffy:revision>
        https://tech.interfluidity.com/xml/iffy/index-oldcommit-72eaf9fdfebc9e627bff33bbe1102d4d250ad1d0.html
      </iffy:revision>
      <iffy:diff>
        https://tech.interfluidity.com/xml/iffy/index-diff-72eaf9fdfebc9e627bff33bbe1102d4d250ad1d0-to-199e44561de3fd9e731a335d8b2a655f42d9bc04.html
      </iffy:diff>
    </iffy:update>
    <iffy:update>
      <atom:updated>2024-05-26T03:00:00Z</atom:updated>
      <atom:summary><![CDATA[Add JS/CSS so that prior revisions are visually distinct from current.]]></atom:summary>
      <iffy:revision>
        https://tech.interfluidity.com/xml/iffy/index-oldcommit-13de0232319ceab2f830591c318089d18cbec78d.html
      </iffy:revision>
      <iffy:diff>
        https://tech.interfluidity.com/xml/iffy/index-diff-13de0232319ceab2f830591c318089d18cbec78d-to-72eaf9fdfebc9e627bff33bbe1102d4d250ad1d0.html
      </iffy:diff>
    </iffy:update>
    <iffy:update>
      <atom:updated>2024-05-24T04:25:00Z</atom:updated>
      <atom:summary>
        <![CDATA[Drop tags <code>iffy:when-updated</code> and <code>iffy:original-guid</code>, bad appraoch to updates.]]>
      </atom:summary>
      <iffy:revision>
        https://tech.interfluidity.com/xml/iffy/index-oldcommit-394986cb8d9c57f567d324e691a44d50102101ce.html
      </iffy:revision>
      <iffy:diff>
        https://tech.interfluidity.com/xml/iffy/index-diff-394986cb8d9c57f567d324e691a44d50102101ce-to-13de0232319ceab2f830591c318089d18cbec78d.html
      </iffy:diff>
    </iffy:update>
  </iffy:update-history>
</item>
2024-04-09

Names too on the nose


➣  This post was meaningfully revised at 2024-06-01 @ 11:40 PM EDT. The previous revision is here, diff here. (See update history.)
➣  This post is expected to evolve over time. You can subscribe to ongoing updates here.

This will be an odd post for a tech blog. But here is a list of names "too on-the-nose":

  • Kenneth Chesebro — Guy from Wisconsin who came up with the alternative electors idea to try to confuse the 2020 elections so the House could throw it to Trump. He seems like a pretty cheesy bro to me.
  • Bernie Madoff — He made off with the money.
  • Terra Rodgers — a "director for superhot rock energy", which is a form of geothermal energy, a kind of terrestrial energy.
  • Yahya Sinwar — Whatever you think of the broader Israel/Palestine conflict, the operation he ordered on October 7, 2023 was sin and guaranteed a brutal war. That Sinwar is preceded by Yahya renders the name a kind of Satanic cheerleader's chant.

More names coming soon!

I've kind of wanted to maintain a list like this for a long time. I sometimes think we're inhabiting a work of fiction, given how contrivedly apropos certain names often are. I encounter these names, and I want to make a note of them.

I'll do that here!

(I think I might once have encountered a Twitter thread in this vein. My apologies to whomever I am ripping off!)

Please get in touch with any suggestions!

Update: Joe Crawford writes to point out that the on-the-nosedness of names might have mechanisms other than the universe being a crude dissimulation!


Anyway, I'm finally putting together this list here, on my "tech blog", because it's a good way to experiment with an idea that Chris Krycho describes as "sprouts".

Often "posts" ought not be thought of as finished pieces, but as beginnings — seeds, even — of ongoing, evolving work. (Thanks to Erlend Sogge Heggen for pointing me to this piece!)

I've begun by adding support for the <atom:updated> tag in my site generator's RSS.

When I make meaningful changes, I can update this value, and my feed will re-sort the updated post to the top, and prefix "Updated:" to the title. I can optionally mark posts to create a new GUID for each update, which may cause tools (like my own feedletter) to treat them as new posts.

(For now, I am leaving that turned off for this post, and just re-sorting updates to the top of the feed. In the future, who knows?)

There's lots, lots more to explore in this vein. Do read Chris Krycho's post. But this, I hope, is a start.

2024-03-27

tar or tgz?


A thing I've done over the last while is automate a lot of my sysadmin, using systemd timers to hit scala-cli scripts.

I've built for myself a little framework that is incredibly in need of documentation, but that lets me define scripts very flexibly and can provide great step-by-step information about what happens and anything that goes wrong in brightly colored HTML e-mails. I love it.

Much of what I do is back things up to a cloud service using rclone.

Last night I wrote a script just to back up some directory. I ran into what I consider an age-old dilemma.

Sometime early in my geekish career, I picked up the nugget that it's best to keep important backups as straight tar files rather than tgz (or tbz or whatever), because if some bit gets corrupted, most of a straight tar will remain recoverable, while the compressed archive will just be toast.
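
Concretely, the dilemma is just the difference between invocations like these (the directory name is made up):

    # plain, uncompressed archive
    tar -cf backup.tar importantdir/

    # gzip-compressed archive
    tar -czf backup.tgz importantdir/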

Is that right? Is it a real concern? I don't think I've ever experienced a corrupted archive, tar or tgz, but of course backup is a form of insurance, the whole point is to be resilient to tail risks.

Still, searching the interwebs, I don't see a lot of people recommending uncompressed archives. Space is more of a bottleneck to me than CPU or time, so if the resilience advantage isn't significant, I'd compress.

What do you think?


Update: Feel free to comment here

2024-03-18

c3p0 and loom


I write a lot of open source software, but I've only ever really had one "hit". That makes me pretty sad, actually. I think some of what I've written is pretty great, and it's lonesome to be the sole user.

Nevertheless, my one "hit" was c3p0, a JDBC Connection pool that, in its day, was extremely popular in Java web application stacks.

Its day was a long time ago, though! c3p0 was first released on Sourceforge in 2001, and was very widely used from the mid aughts through the early 2010s.

c3p0 is "mature" software, and I have just let it alone for years at a time. But I do continue to use it in all of my own database projects. Periodically I still put it (and myself) through intense bouts of maintenance.

Actually, I have hated my years-long lapses (and myself) because github issues collect and I get snarky comments about abandonware and I feel like I am a Very Bad Maintainer. So the first order of business in my most recent "sprint" (isn't that what the kids call it?) was to move c3p0 from a very bespoke and manual ant build to something sleek and modern and automatic, so that maybe I wouldn't put off maintenance into years-delayed batches just because it is annoying to touch. c3p0's new mill build works beautifully.

The new build is much lighter, and the modern style of just publishing git repositories rather than source distributions and uploading releases to Sonatype is fast and easy. I think it'll really improve my maintenance promptness.

c3p0's latest release, 0.10.0, includes lots of enhancements and improvements. But a really fun thing was to integrate the very latest shiny new thing in Java — "Project Loom" virtual threads — into this very old, highly concurrent library.

c3p0 is very old school. It was initially written in Java 1.2 or 1.3. Java's standard concurrency utilities, the java.util.concurrent package, did not yet exist. There were no standard thread pools defined as ExecutorService implementations. So I rolled my own. c3p0 relies entirely on the JVM's built-in primitives — monitors and synchronized blocks, wait() and notifyAll() — to manage concurrency.

Over the years, people have requested that c3p0 support asynchrony via pluggable Executor instances, rather than just its own, hand-rolled thread pool. Users mostly seemed to want this so c3p0 could share existing application thread pools, avoiding the resource footprint of several c3p0-dedicated threads.

A couple of weeks ago, I finally got around to implementing pluggable threading. Sharing application thread pools is now supported. But I was mostly motivated by curiosity about how well this very old library would work with newfangled loom virtual threads.

Great, it turns out!

  • I was concerned, since c3p0 relies so much on monitors and synchronized blocks, that virtual threads would be "pinned". Virtual threads are scheduled onto, and descheduled from, "carrier" operating-system threads, but they cannot be descheduled while they hold a monitor. If a thread blocks while holding a monitor, it is described as "pinned", and that's a bad thing.

    But c3p0 is very careful not to perform potentially blocking operations while holding a monitor. Running tests with

    -Djdk.tracePinnedThreads=full
    

    produced no stack traces of pinned threads, even under heavy load. This was gratifying.

  • Using virtual threads rather than a thread pool can reduce contention for monitors. The thread pool itself is a site of contention, as information about which threads are pooled and which are available to run tasks constitutes shared, mutable state. Replacing a thread pool with simply firing and forgetting a virtual thread for each asynchronous task left nothing to contend for. c3p0-loom includes two implementations of TaskRunnerFactory:

    com.mchange.v2.c3p0.loom.VirtualThreadPerTaskExecutorTaskRunnerFactory tracks the number of simultaneously active threads (which you can observe via JMX), which involves synchronizing on a monitor, so some contention is still possible.

    But with com.mchange.v2.c3p0.loom.UninstrumentedVirtualThreadPerTaskTaskRunnerFactory, nothing at all is tracked and no monitors are acquired. Some analog of contention might result from managing shared state within the loom virtual-threading runtime, but all overt contention for thread-pool monitors is eliminated.

In practice, the thread pool is not c3p0's main site of monitor contention, however.

c3p0's resource pool is its main site of monitor contention. For most applications, the contention overhead is negligible, when amortized over Connection operations. But in rare cases, when very large numbers of threads are hitting the pool, contention can become an issue. For now, the only way to address contention at the resource pool is to construct multiple DataSource instances and balance the load across them.

In any case, c3p0 and loom work very well together!

I still recommend that applications start by using c3p0's default, hand-rolled thread pool. It implements deadlock detection and recovery, and logs verbose debugging information about what happened. This makes it very easy to diagnose what kinds of operations have been hanging and consuming threads when something goes wrong.

Under loom, applications that might otherwise have logged flamboyant thread-pool problems will proceed gracefully for some time. No matter what operations hang, new (virtual) threads will always be available for the next request, and the memory footprint of the frozen "fibers" (rather than full threads) should be modest.

But if Connection acquisition, Connection destruction, or Statement destruction tasks do hang, eventually the pool will become exhausted and your application will hang or fail, despite the almost inexhaustible virtual threads.

I'd start by using c3p0's default, battle-tested thread pool to detect these kinds of issues, and log them with its signature, much-hated APPARENT DEADLOCK messages if they occur. Those very ugly APPARENT DEADLOCK messages make it very easy to figure out just what is going wrong.

But once your application is stable, then you might absolutely consider setting

c3p0.taskRunnerFactoryClassName=com.mchange.v2.c3p0.loom.UninstrumentedVirtualThreadPerTaskTaskRunnerFactory

to reduce monitor contention and eliminate the overhead of a dedicated c3p0 thread pool.


Note:

The latest version of c3p0 (as of this writing) is 0.10.0. Ordinarily, you'd hit that at Maven Central as

  • com.mchange:c3p0:0.10.0

But c3p0 is built under an older Java version, to support old applications. (c3p0-0.10.0 supports JVMs as old as Java 7.)

Loom support has to be built under Java 21+, so it is built separately. Just hit

  • com.mchange:c3p0-loom:0.10.0

at Maven Central. That will bring in the loom implementations, and the rest of c3p0 as a transitive dependency.
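
In a mill build, that might look something like the sketch below. The module name is made up; only the coordinates come from above:

    // build.sc (sketch; "myapp" is a hypothetical module)
    import mill._, mill.scalalib._

    object myapp extends JavaModule {
      def ivyDeps = Agg(
        ivy"com.mchange:c3p0-loom:0.10.0" // brings in c3p0 itself as a transitive dependency
      )
    }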

2024-02-06

What does private mean at package level in Scala 3?


TL; DR:

  • private declarations at the top-level scope of a package in Scala 3 are equivalent to private[pkg] in other contexts.
  • They are accessible to everything within the package and its subpackages, but nothing else.

In Scala 2, to place a declaration at the "package" level, one would define a "package object":

package top

package object pkg {
  private val Hush = 0
  val Loud = Int.MaxValue
}

Given this

  • one might refer to Loud from anywhere with fully-qualified name top.pkg.Loud
  • import top.pkg._ would pick it up
  • inside the package top.pkg one could refer to it simply as Loud

So far, so intuitive.

In Scala 2, the semantics of private val Hush were also intuitive. A package object is just an object. A private member of an object is only visible within that object's scope. While the Scala compiler does some magic to make nonprivate declarations more broadly visible, access to private members of the package object was restricted to the object in the ordinary way.

But Scala 3 introduces "naked" top-level declarations, which I find I use constantly.

So the declarations above might translate to:

package top.pkg

private val Hush = 0
val Loud = Int.MaxValue

There is no object scope! So what does private even mean in this context?

I could imagine four possibilities:

  1. private to a virtual object scope constituted of all top-level declarations
  2. private to the top-level of the current compilation unit (i.e. file)
  3. private to the current compilation unit (including nested scopes)
  4. private to the package as a whole, i.e. the same as private[pkg]

Playing around, it looks like #4 is the winner.

A private top-level declaration seems visible to any code in the package, even if defined in other files or directories. It is visible from anywhere in the pkg or subpackages of pkg.
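
A quick way to convince yourself, as a sketch (file and package names here are just for illustration):

    // file: top/pkg/defs.scala
    package top.pkg

    private val Hush = 0
    val Loud = Int.MaxValue

    // file: top/pkg/sub/probe.scala (a different file, in a subpackage of top.pkg)
    package top.pkg.sub

    @main def probe(): Unit =
      println(top.pkg.Hush + top.pkg.Loud) // compiles: Hush behaves like private[pkg]

    // file: top/other/nope.scala (outside top.pkg)
    // package top.other
    // val nope = top.pkg.Hush // does not compile: Hush is not visible here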

So now I know! And so do you!