The 'iffy' XML namespace

I want to do a lot of things with RSS that require extensions of RSS (as the RSS spec foresees).

The URL http://tech.interfluidity.com/xml/iffy/ will mark an XML namespace in which some of these extensions will be defined.

The conventional prefix associated with this namespace will be iffy.

The current version of this namespace is v0.0.1-SNAPSHOT.

(-SNAPSHOT signifies that the version preceding that suffix has not yet been finalized. Much more to come!)

Element — iffy:provenance

iffy:provenance is an item-level element.

If present in an item, it contains a sequence of one or more atom:link elements, each of whose

  • rel attribute MUST BE via
  • href attribute MUST BE the URL of an RSS feed from which the base contents of this item were drawn
  • type attribute SHOULD BE application/rss+xml

If the item from which the current item was sourced does not contain an iffy:provenance, then the current item SHOULD include just one atom:link.

If the item from which the current item was sourced does contain an iffy:provenance, then the current item's iffy:provenance SHOULD include all atom:link elements of that element, with the URL of the feed from which the item was sourced PREPENDED.

This will ensure the most immediate source will be the first atom:link element. The origin — or at least the source for which no further provenance is known — will be the last atom:link element.
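For instance, if an item has been relayed twice, the accumulated iffy:provenance might look like this (feed URLs hypothetical):

```xml
<iffy:provenance>
  <!-- most immediate source first -->
  <atom:link rel="via" type="application/rss+xml" href="https://relay.example.com/feed.rss"/>
  <!-- origin, or earliest known source, last -->
  <atom:link rel="via" type="application/rss+xml" href="https://origin.example.com/feed.rss"/>
</iffy:provenance>
```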

Processors may expect a channel level atom:link element with rel="self" and type="application/rss+xml" to use as the basis for provenance in source documents. See RSS Best Practices.

Example (from here):

<?xml version='1.0' encoding='UTF-8'?>

<rss version="2.0" xmlns:iffy="http://tech.interfluidity.com/xml/iffy/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>interfluidity, all blogs</title>
    <!-- Other channel elements -->
    <atom:link type="application/rss+xml" rel="self" href="https://www.interfluidity.com/unify-rss/all-blogs.rss"/>
    <item>
      <title>Industrial policy and ecosystems</title>
      <guid isPermaLink="true">https://drafts.interfluidity.com/2024/05/11/industrial-policy-and-ecosystems/index.html</guid>
      <author>nospam@dev.null (Steve Randy Waldman)</author>
      <!-- Other item elements -->
      <iffy:provenance>
        <atom:link type="application/rss+xml" rel="via" href="https://drafts.interfluidity.com/feed/index.rss"/>
      </iffy:provenance>
    </item>
    <!-- Other items -->
  </channel>
</rss>

Names too on the nose

This will be an odd post for a tech blog. But here is a list of names "too on-the-nose":

  • Kenneth Chesebro — Guy from Wisconsin who came up with the alternative-electors idea to try to confuse the 2020 election so the House could throw it to Trump. He seems like a pretty cheesy bro to me.
  • Bernie Madoff — He made off with the money.
  • Terra Rodgers — a "director for superhot rock energy", that is, a form of geothermal energy, a kind of terrestrial energy.

More names coming soon!

I've kind of wanted to maintain a list like this for a long time. I sometimes think we're inhabiting a work of fiction, given how contrivedly apropos certain names often are. I encounter these names, and I want to make a note of them.

I'll do that here!

(I think I might once have encountered a Twitter thread in this vein. My apologies to whomever I am ripping off!)

Please get in touch with any suggestions!

Anyway, I'm finally putting together this list here, on my "tech blog", because it's a good way to experiment with an idea that Chris Krycho describes as "sprouts".

Often "posts" ought not be thought of as finished pieces, but as beginnings — seeds, even — of ongoing, evolving work. (Thanks to Erlend Sogge Heggen for pointing me to this piece!)

I've begun by adding support for the <atom:updated> tag in my site generator's RSS.

When I make meaningful changes, I can update this value, and my feed will re-sort the updated post to the top, and prefix "Updated:" to the title. I can optionally mark posts to create a new GUID for each update, which may cause tools (like my own feedletter) to treat them as new posts.

(For now, I am leaving that turned off for this post, and just re-sorting updates to the top of the feed. In the future, who knows?)
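For illustration, an item carrying this tag might look like the following (values hypothetical; atom:updated assumes xmlns:atom="http://www.w3.org/2005/Atom" is declared on the feed):

```xml
<item>
  <title>Updated: Names too on the nose</title>
  <atom:updated>2024-06-01T12:00:00Z</atom:updated>
  <!-- other item elements -->
</item>
```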

There's lots, lots more to explore in this vein. Do read Chris Krycho's post. But this, I hope, is a start.


tar or tgz?

A thing I've done over the last while is automate a lot of my sysadmin, using systemd timers to hit scala-cli scripts.

I've built for myself a little framework that is incredibly in need of documentation, but that lets me define scripts very flexibly and can provide great step-by-step information about what happens and anything that goes wrong in brightly colored HTML e-mails. I love it.

Much of what I do is back things up to a cloud service using rclone.

Last night I wrote a script just to back up some directory. I ran into what I consider an age-old dilemma.

Sometime early in my geekish career, I picked up the nugget that it's best to keep important backups as straight tar files rather than tgz (or tbz or whatever), because if some bit gets corrupted, most of a straight tar will remain recoverable, while the compressed archive will just be toast.
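In concrete terms, the choice looks like this (a runnable sketch; the demo directory stands in for whatever you'd actually back up):

```shell
set -e
# "demo" stands in for the directory you'd actually back up
mkdir -p demo && echo "important stuff" > demo/notes.txt

# plain tar: if a few blocks get corrupted, files past the damage usually remain recoverable
tar -cf backup.tar demo

# gzipped tar: smaller, but corruption early in the stream can make everything after it unreadable
tar -czf backup.tar.gz demo

ls -l backup.tar backup.tar.gz
```

Either archive would then be shipped off with rclone.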

Is that right? Is it a real concern? I don't think I've ever experienced a corrupted archive, tar or tgz, but of course backup is a form of insurance; the whole point is to be resilient to tail risks.

Still, searching the interwebs, I don't see a lot of people recommending uncompressed archives. Space is more of a bottleneck to me than CPU or time, so if the resilience advantage isn't significant, I'd compress.

What do you think?

Update: Feel free to comment here


c3p0 and loom

I write a lot of open source software, but I've only ever really had one "hit". That makes me pretty sad, actually. I think some of what I've written is pretty great, and it's lonesome to be the sole user.

Nevertheless, my one "hit" was c3p0, a JDBC Connection pool that, in its day, was extremely popular in Java web application stacks.

Its day was a long time ago, though! c3p0 was first released on Sourceforge in 2001, and was very widely used from the mid aughts through the early 2010s.

c3p0 is "mature" software, and I have just let it alone for years at a time. But I do continue to use it in all of my own database projects. Periodically I still put it (and myself) through intense bouts of maintenance.

Actually, I have hated my years-long lapses (and myself) because github issues collect and I get snarky comments about abandonware and I feel like I am a Very Bad Maintainer. So the first order of business in my most recent "sprint" (isn't that what the kids call it?) was to move c3p0 from a very bespoke and manual ant build to something sleek and modern and automatic, so that maybe I wouldn't put off maintenance into years-delayed batches just because it is annoying to touch. c3p0's new mill build works beautifully.

The new build is much lighter, and the modern style of just publishing git repositories rather than source distributions and uploading releases to Sonatype is fast and easy. I think it'll really improve my maintenance promptness.

c3p0's latest release, 0.10.0, includes lots of enhancements and improvements. But a really fun thing was to integrate the very latest shiny new thing in Java — "Project Loom" virtual threads — into this very old, highly concurrent library.

c3p0 is very old school. It was initially written in Java 1.2 or 1.3. Java's standard concurrency utilities, the java.util.concurrent package, did not yet exist. There were no standard thread pools defined as ExecutorService implementations. So I rolled my own. c3p0 relies entirely on the JVM's built-in primitives — monitors and synchronized blocks, wait() and notifyAll() — to manage concurrency.
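As a concrete illustration (my sketch here, not c3p0's actual code), the old-school idiom looks like this:

```java
import java.util.ArrayDeque;

// A minimal sketch of the pre-java.util.concurrent style: shared state
// guarded by an object monitor, coordinated with wait() and notifyAll().
public class SimplePool {
    private final ArrayDeque<Object> available = new ArrayDeque<>();

    // the monitor is held only briefly; no blocking work inside the synchronized method
    public synchronized void checkin(Object resource) {
        available.add(resource);
        notifyAll(); // wake any threads parked in checkout()
    }

    public synchronized Object checkout() {
        try {
            while (available.isEmpty())
                wait(); // releases the monitor while waiting
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while awaiting a resource", e);
        }
        return available.remove();
    }
}
```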

Over the years, people have requested that c3p0 support asynchrony via pluggable Executor instances, rather than just its own hand-rolled thread pool. Users mostly seemed to want this so c3p0 could share existing application thread pools, avoiding the resource footprint of several c3p0-dedicated threads.

A couple of weeks ago, I finally got around to implementing pluggable threading. Sharing application thread pools is now supported. But I was mostly motivated by curiosity about how well this very old library would work with newfangled loom virtual threads.

Great, it turns out!

  • I was concerned, since c3p0 relies so much on monitors and synchronized blocks, that virtual threads would be "pinned". Virtual threads are scheduled to, and deschedule from, "carrier" operating-system threads, but they cannot be descheduled while they hold a monitor. If a thread blocks while holding a monitor, it is described as "pinned", and that's a bad thing.

    But c3p0 is very careful not to perform potentially blocking operations while holding a monitor. Running tests with the JVM option

        -Djdk.tracePinnedThreads=full

    produced no stack traces of pinned threads, even under heavy load. This was gratifying.

  • Using virtual threads rather than a thread pool can reduce contention for monitors. The thread pool itself is a site of contention, as information about which threads are pooled and which are available to run tasks constitutes shared, mutable state. Replacing a thread pool with simply firing and forgetting a virtual thread for each asynchronous task left nothing to contend for. c3p0-loom includes two implementations of TaskRunnerFactory:

    com.mchange.v2.c3p0.loom.VirtualThreadPerTaskExecutorTaskRunnerFactory tracks the number of simultaneously active threads (which you can observe via JMX), which involves synchronizing on a monitor, so some contention is still possible.

    But with com.mchange.v2.c3p0.loom.UninstrumentedVirtualThreadPerTaskTaskRunnerFactory, nothing at all is tracked and no monitors are acquired. Some analog of contention might result from managing shared state within the loom virtual-threading runtime, but all overt contention for thread-pool monitors is eliminated.
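For contrast, the virtual-thread-per-task style (Java 21+) dispenses with the worker pool entirely. This is an illustrative sketch, not c3p0 code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: fire a cheap virtual thread per task instead of
// contending over a shared pool of platform worker threads.
public class VirtualThreadDemo {
    public static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        // each submitted task gets its own virtual thread
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> {
                    Thread.sleep(1); // simulated blocking work; the virtual thread unmounts
                    completed.incrementAndGet();
                    return null;
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000) + " tasks completed");
    }
}
```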

In practice, the thread pool is not c3p0's main site of monitor contention, however.

c3p0's resource pool is its main site of monitor contention. For most applications, the contention overhead is negligible, when amortized over Connection operations. But in rare cases, when very large numbers of threads are hitting the pool, contention can become an issue. For now, the only way to address contention at the resource pool is to construct multiple DataSource instances and balance the load across them.

In any case, c3p0 and loom work very well together!

I still recommend that applications start by using c3p0's default, hand-rolled thread pool. It implements deadlock detection and recovery, and logs verbose debugging information about what happened. This makes it very easy to diagnose what kinds of operations have been hanging and consuming threads when something goes wrong.

Under loom, applications that might otherwise have logged flamboyant thread-pool problems will proceed gracefully for some time. No matter what operations hang, new (virtual) threads will always be available for the next request, and the memory footprint of the frozen "fibers" (rather than full threads) should be modest.

But if Connection acquisition, Connection destruction, or Statement destruction tasks do hang, eventually the pool will become exhausted and your application will hang or fail, despite the almost inexhaustible virtual threads.

I'd start by using c3p0's default, battle-tested thread pool to detect these kinds of issues, and log them with its signature, much-hated APPARENT DEADLOCK messages if they occur. Those very ugly APPARENT DEADLOCK messages make it very easy to figure out just what is going wrong.

But once your application is stable, then you might absolutely consider setting

    taskRunnerFactoryClassName=com.mchange.v2.c3p0.loom.UninstrumentedVirtualThreadPerTaskTaskRunnerFactory

to reduce monitor contention and eliminate the overhead of a dedicated c3p0 thread pool.


The latest version of c3p0 (as of this writing) is 0.10.0. Ordinarily, you'd hit that at Maven Central as

  • com.mchange:c3p0:0.10.0

But c3p0 is built under an older Java version, to support old applications. (c3p0-0.10.0 supports JVMs as old as Java 7.)

Loom support has to be built under Java 21+, so it is built separately. Just hit

  • com.mchange:c3p0-loom:0.10.0

at Maven Central. That will bring in the loom implementations, and the rest of c3p0 as a transitive dependency.
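In a Maven pom.xml, for example, that dependency is:

```xml
<dependency>
  <groupId>com.mchange</groupId>
  <artifactId>c3p0-loom</artifactId>
  <version>0.10.0</version>
</dependency>
```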


What does private mean at package level in Scala 3?


  • private declarations at a top-level scope of a package in Scala 3 are equivalent to a private[pkg] in other contexts.
  • They are accessible to everything within the package and its subpackages, but nothing else.

In Scala 2, to place a declaration at the "package" level, one would define a "package object":

package top

package object pkg {
  private val Hush = 0
  val Loud = Int.MaxValue
}

Given this

  • one might refer to Loud from anywhere with fully-qualified name top.pkg.Loud
  • import top.pkg._ would pick it up
  • inside the package top.pkg one could refer to it simply as Loud

So far, so intuitive.

In Scala 2, the semantics of private val Hush were also intuitive. A package object is just an object. A private member of an object is only visible within that object's scope. While the Scala compiler does some magic to make nonprivate declarations more broadly visible, access to private members of the package object was restricted to the object in the ordinary way.

But Scala 3 introduces "naked" top-level declarations, which I find I use constantly.

So the declarations above might translate to:

package top.pkg

private val Hush = 0
val Loud = Int.MaxValue

There is no object scope! So what does private even mean in this context?

I could imagine four possibilities:

  1. private to a virtual object scope constituted of all top-level declarations
  2. private to the top-level of the current compilation unit (i.e. file)
  3. private to the current compilation unit (including nested scopes)
  4. private to the package as a whole, i.e. the same as private[pkg]

Playing around, it looks like #4 is the winner.

A private top-level declaration seems visible to any code in the package, even if defined in other files or directories. It is visible from anywhere in the pkg or subpackages of pkg.
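A quick sketch of what option #4 means in practice. These are two hypothetical files, shown together here for brevity (in reality they would be separate compilation units, possibly in different directories):

```scala
// File A.scala
package top.pkg

private val Hush = 0

// File B.scala -- a separate compilation unit
package top.pkg

object Whisper:
  // compiles fine: Hush is visible package-wide, as if declared private[pkg]
  def shh: Int = Hush
```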

So now I know! And so do you!


Style-by-mail in feedletter

If, my most patient dear reader, you followed the feedletter tutorial in the previous post, you saw that we styled feedletter newsletters by starting up a development webserver, which would serve up HTML for an example newsletter.

This is still the best way to get started styling your newsletters, because you can iteratively edit your untemplate, then just hit refresh in your web browser to very quickly play with your style and layout.

However, there are differences between how web browsers and e-mail clients render HTML. Getting things to look great in your browser, both at wide and mobile-like narrow widths, is not enough to guarantee that things will look good as emails, on desktop or mobile e-mail clients.

So, feedletter v0.0.8 now supports styling by e-mail.

Instead of firing up a webserver to preview your newsletter, if you give feedletter-style commands --from and --to arguments, an example e-mail will be sent. So you can now accurately preview and tweak exactly how newsletters will look in the mail clients they are actually sent to.

To put that more specifically, just replace a command like...

$ ./feedletter-style compose-single --subscribable-name lgm --port 45612

with one like...

$ ./feedletter-style compose-single --subscribable-name lgm --from feedletter@mchange.com --to swaldman@mchange.com

No development webserver will be spun up. Instead, a sample e-mail will be sent.


Feedletter tutorial

I've been working for some time on a service to turn RSS feeds into e-mail newsletters, which I've called feedletter.

The service watches any number of RSS feeds, and can host a variety of subscription types for each feed, including one e-mail per article, daily or weekly digests, compendia of every n posts, etc. It can also notify other services, like Mastodon, of new posts. It lets you define, for each feed, a notion of when an item is stable and finalized, and takes great care never to e-mail or notify the same item more than once.

Great minds think alike! After weeks of working on this, I discovered a similar project with the very same name.

Here I want to go through the process of setting up a feedletter instance, configuring it, tweaking or customizing the newsletter style, and running it.

You can host feedletter on any Linux/UNIX-ish server. For completeness, I'm going to set up a server from scratch, from a fresh Digital Ocean droplet. But of course you can run feedletter alongside other services on an existing machine, and skip a lot of these steps. feedletter's main prerequisite is postgres, but we'll make use of nginx, certbot, systemd etc. as we go along.

Much of the code and config we develop will be memorialized in this github repo.

Let's go!

Table of contents

  1. Set up a server with a DNS name
  2. Download dependencies
  3. Create user feedletter
  4. Install feedletter
  5. Prepare the postgres database
  6. Set up feedletter-secrets.properties
  7. Get an https certificate
  8. Configure nginx to forward to the API
  9. Initialize the feedletter database
  10. Perform in-database configuration
  11. Add feeds to watch
  12. Define "subscribables" to feeds
  13. Enable feedletter as a systemd daemon
  14. Let users subscribe to your subscribables!
  15. Tweak the newsletter styles
  16. Advanced: Customize the content
  17. Conclusion

1. Set up a server with a DNS name

We launch a "droplet" from Digital Ocean. You can use whatever Linux flavor you like. We'll pick the latest Ubuntu.

Screenshot of Digital Ocean droplet setup

And we go ahead and give it a name.

Screenshot of FastMail DNS setup

2. Download dependencies

We log in as root to our new droplet (however we've configured that), and download a bunch of stuff we'll need:

# apt install postgresql
# apt install openjdk-17-jre-headless
# apt install nginx
# apt install certbot
# apt install emacs

While we're at it, let's upgrade everything on the server and restart.

# apt upgrade
# shutdown -r now

3. Create user feedletter

We'll create a passwordless user:

# adduser --disabled-password feedletter
info: Adding user `feedletter' ...
info: Selecting UID/GID from range 1000 to 59999 ...
info: Adding new group `feedletter' (1000) ...
info: Adding new user `feedletter' (1000) with group `feedletter (1000)' ...
info: Creating home directory `/home/feedletter' ...
info: Copying files from `/etc/skel' ...
Changing the user information for feedletter
Enter the new value, or press ENTER for the default
	Full Name []: 
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] Y
info: Adding new user `feedletter' to supplemental / extra groups `users' ...
info: Adding user `feedletter' to group `users' ...

4. Install feedletter

We'll become user feedletter, and download a local installation of the feedletter app:

# su - feedletter
feedletter@feedletter-play:~$ git clone https://github.com/swaldman/feedletter-install.git feedletter-local
Cloning into 'feedletter-local'...
remote: Enumerating objects: 46, done.
remote: Counting objects: 100% (46/46), done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 46 (delta 19), reused 38 (delta 11), pack-reused 0
Receiving objects: 100% (46/46), 8.75 KiB | 2.19 MiB/s, done.
Resolving deltas: 100% (19/19), done.

The first time you run feedletter, it will take a couple of minutes to download its dependencies and compile stuff.

Although we can't meaningfully use it yet, let's give the feedletter application a test run:

$ cd feedletter-local/
$ ./feedletter
Missing expected command (add-feed or alter-feed or daemon or db-dump or db-init or db-migrate or define-email-subscribable or define-mastodon-subscribable or drop-feed-and-subscribables or drop-subscribable or edit-subscribable or export-subscribers or list-config or list-feeds or list-items-excluded or list-subscribables or list-subscribers or list-untemplates or send-test-email or set-config or set-extra-params or set-untemplates or subscribe)!

    feedletter [--secrets <propsfile>] add-feed
    feedletter [--secrets <propsfile>] alter-feed
    feedletter [--secrets <propsfile>] daemon
    feedletter [--secrets <propsfile>] db-dump
    feedletter [--secrets <propsfile>] db-init
    feedletter [--secrets <propsfile>] db-migrate
    feedletter [--secrets <propsfile>] define-email-subscribable
    feedletter [--secrets <propsfile>] define-mastodon-subscribable
    feedletter [--secrets <propsfile>] drop-feed-and-subscribables
    feedletter [--secrets <propsfile>] drop-subscribable
    feedletter [--secrets <propsfile>] edit-subscribable
    feedletter [--secrets <propsfile>] export-subscribers
    feedletter [--secrets <propsfile>] list-config
    feedletter [--secrets <propsfile>] list-feeds
    feedletter [--secrets <propsfile>] list-items-excluded
    feedletter [--secrets <propsfile>] list-subscribables
    feedletter [--secrets <propsfile>] list-subscribers
    feedletter [--secrets <propsfile>] list-untemplates
    feedletter [--secrets <propsfile>] send-test-email
    feedletter [--secrets <propsfile>] set-config
    feedletter [--secrets <propsfile>] set-extra-params
    feedletter [--secrets <propsfile>] set-untemplates
    feedletter [--secrets <propsfile>] subscribe

Manage e-mail subscriptions to and notifications from RSS feeds.

Options and flags:
    --help
        Display this help text.
    --secrets <propsfile>
        Path to properties file containing SMTP, postgres, c3p0, and other configuration details.

Environment Variables:
        Path to properties file containing SMTP, postgres, c3p0, and other configuration details.

Subcommands:
    add-feed
        Add a new feed from which mail or notifications may be generated.
    alter-feed
        Alter the timings of an already-defined feed.
    daemon
        Run daemon that watches feeds and sends notifications.
    db-dump
        Dump a backup of the database into a configured directory.
    db-init
        Initialize the database schema.
    db-migrate
        Migrate to the latest version of the database schema.
    define-email-subscribable
        Define a new email subscribable, a mailing list to which users can subscribe.
    define-mastodon-subscribable
        Define a Mastodon subscribable, a source from which Mastodon feeds can receive automatic posts.
    drop-feed-and-subscribables
        Removes a feed, along with any subscribables defined upon it, from the service.
    drop-subscribable
        Removes a subscribable from the service.
    edit-subscribable
        Edit an already-defined subscribable.
    export-subscribers
        Dump subscriber information for a subscribable in CSV format.
    list-config
        List all configuration parameters.
    list-feeds
        List all feeds the application is watching.
    list-items-excluded
        List items excluded from generating notifications.
    list-subscribables
        List all subscribables.
    list-subscribers
        List all subscribers to a subscribable.
    list-untemplates
        List available untemplates.
    send-test-email
        Send a brief email to test your SMTP configuration.
    set-config
        Set configuration parameters.
    set-extra-params
        Add, update, or remove extra params you may define to affect rendering of notifications and messages.
    set-untemplates
        Update the untemplates used to render subscriptions.
    subscribe
        Subscribe to a subscribable.
1 targets failed
runMain subprocess failed

All good!

5. Prepare the postgres database

We'll exit back to root, become user postgres, and create a feedletter database that user feedletter can command:

$ exit
# su - postgres
$ createdb feedletter
$ createuser feedletter
$ psql
psql (15.5 (Ubuntu 15.5-0ubuntu0.23.10.1))
Type "help" for help.

postgres=# ALTER DATABASE feedletter OWNER TO feedletter;
postgres=# ALTER USER feedletter WITH PASSWORD 'not-actually-this';
postgres=# \q

6. Set up feedletter-secrets.properties

feedletter expects passwords and some other configuration information in a "secrets" file, in Java properties file format. You can place this anywhere you want (feedletter will look for a command-line argument or an environment variable), but by default it looks for /etc/feedletter/feedletter-secrets.properties or /usr/etc/feedletter/feedletter-secrets.properties.

The file must belong to the user who will run feedletter, and it must have restrictive permissions: readable and optionally writable by that user only.

The contents of the file will be something like this:

feedletter.secret.salt=Arbitrary secret string

You’ll want to fill in your real SMTP authentication configuration. For information about this configuration, see mailutil.

You can configure database access via any and all c3p0 configuration properties.
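As a sketch only — the property names below are assumptions drawn from standard JavaMail (mail.smtp.*) and c3p0 conventions, so check the mailutil and c3p0 documentation for the authoritative keys — a fuller secrets file might look like:

```properties
feedletter.secret.salt=Arbitrary secret string

# SMTP authentication (standard JavaMail property names; see mailutil)
mail.smtp.host=smtp.example.com
mail.smtp.user=yourname
mail.smtp.password=not-actually-this

# database access, via c3p0 configuration properties
c3p0.jdbcUrl=jdbc:postgresql://localhost:5432/feedletter
c3p0.user=feedletter
c3p0.password=not-actually-this
```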

So, let's do it! We exit from our last stint as user postgres first, then...

$ exit
# mkdir /etc/feedletter/
# emacs /etc/feedletter/feedletter-secrets.properties

Here we pause to edit the file, see the template above...

# chown -R feedletter:feedletter /etc/feedletter
# chmod go-wrx /etc/feedletter/feedletter-secrets.properties
# ls -l /etc/feedletter/
total 8
-rw------- 1 feedletter feedletter 370 Jan 25 18:59 feedletter-secrets.properties
-rw-r--r-- 1 feedletter feedletter 372 Jan 25 18:57 feedletter-secrets.properties~

Oops! emacs created a backup file with open permissions. Let's get rid of it so those secrets don't leak.

# rm /etc/feedletter/feedletter-secrets.properties~

7. Get an https certificate

We gave our server the name play.feedletter.org.

feedletter offers a web API to manage subscriptions. We'll want that to use https rather than http for privacy's sake.

Let's acquire a free Let's Encrypt certificate. I prefer to pause nginx to acquire and renew certificates, and use certbot's standalone server to verify control of the domain, rather than have certbot mess around with my nginx config.


# systemctl stop nginx
# certbot certonly -d play.feedletter.org
Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate with the ACME CA?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Spin up a temporary webserver (standalone)
2: Place files in webroot directory (webroot)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1 
Requesting a certificate for play.feedletter.org

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/play.feedletter.org/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/play.feedletter.org/privkey.pem
This certificate expires on 2024-04-24.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
We were unable to subscribe you the EFF mailing list because your e-mail address appears to be invalid. You can try again later by visiting https://act.eff.org.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# systemctl start nginx

8. Configure nginx to forward to the API

By default, feedletter's API is bound to localhost on port 8024. If you need to customize the web API port or interface, run ./feedletter set-config --help to see how. We'll stick with the default.

As root, we create and edit a file /etc/nginx/conf.d/play.feedletter.org.conf:

# emacs /etc/nginx/conf.d/play.feedletter.org.conf

It should look like this:

    # play.feedletter.org
    server {
        listen 80;
        listen [::]:80;
        server_name play.feedletter.org;
        return 301 https://play.feedletter.org$request_uri;
    }

    server {
        listen       443 ssl http2;
        listen       [::]:443 ssl http2;
        server_name  play.feedletter.org;

        ssl_certificate /etc/letsencrypt/live/play.feedletter.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/play.feedletter.org/privkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_set_header  X-Real-IP $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
            proxy_pass        http://127.0.0.1:8024;
        }
    }

Then we restart nginx:

root@feedletter-tutorial:/etc/nginx# systemctl restart nginx

9. Initialize the feedletter database

If we've set up the database and secrets file properly, it should be as easy as

# su - feedletter
$ cd feedletter-local
$ ./feedletter db-init

10. Perform in-database configuration

Some of feedletter's config sits in the secrets file, but much lives in the application's database itself.

We can see feedletter's current (default) configuration simply by running

$ ./feedletter list-config
¦ Configuration Key     ¦ Value                                                 ¦
¦ ConfirmHours          ¦ 48                                                    ¦
¦ DumpDbDir             ¦ throws com.mchange.feedletter.db.ConfigurationMissing ¦
¦ MailBatchDelaySeconds ¦ 600                                                   ¦
¦ MailBatchSize         ¦ 100                                                   ¦
¦ MailMaxRetries        ¦ 5                                                     ¦
¦ MastodonMaxRetries    ¦ 10                                                    ¦
¦ TimeZone              ¦ Etc/UTC                                               ¦
¦ WebApiBasePath        ¦ /                                                     ¦
¦ WebApiHostName        ¦ localhost                                             ¦
¦ WebApiPort            ¦ None                                                  ¦
¦ WebApiProtocol        ¦ http                                                  ¦
¦ WebDaemonInterface    ¦                                             ¦
¦ WebDaemonPort         ¦ 8024                                                  ¦

The WebApi* keys are used to construct URLs that point back to the application (for creating, confirming, and removing subscriptions).

  • We won't want these to be localhost URLs, so we'll modify WebApiHostName
  • We'll want WebApiProtocol to be https rather than http
  • I'd prefer the timezone (used to format dates, and to decide the boundaries of days and weeks for daily and weekly roundups) be America/New_York

Let's check out the set-config command:

$ ./feedletter set-config --help
[49/49] runMain 
Usage: feedletter set-config [--confirm-hours <hours>] [--dump-db-dir <directory>] [--mail-batch-size <size>] [--mail-batch-delay-seconds <seconds>] [--mail-max-retries <times>] [--time-zone <zone>] [--web-daemon-interface <interface>] [--web-daemon-port <port>] [--web-api-protocol <http|https>] [--web-api-host-name <hostname>] [--web-api-base-path <path>] [--web-api-port <port>]

Set configuration parameters.

Options and flags:
    --help
        Display this help text.
    --confirm-hours <hours>
        Number of hours to await a user confirmation before automatically unsubscribing.
    --dump-db-dir <directory>
        Directory in which to create dump files prior to db migrations.
    --mail-batch-size <size>
        Number of e-mails to send in each 'batch' (to avoid overwhelming the SMTP server).
    --mail-batch-delay-seconds <seconds>
        Time between batches of e-mails are to be sent.
    --mail-max-retries <times>
        Number of times e-mail sends (defined as successful submission to an SMTP service) will be attempted before giving up.
    --time-zone <zone>
        ID of the time zone which subscriptions based on time periods should use.
    --web-daemon-interface <interface>
        The local interface to which the web-api daemon should bind.
    --web-daemon-port <port>
        The local port to which the web-api daemon should bind.
    --web-api-protocol <http|https>
        The protocol (http or https) by which the web api is served.
    --web-api-host-name <hostname>
        The host from which the web api is served.
    --web-api-base-path <path>
        The URL base location upon which the web api is served (usually just '/').
    --web-api-port <port>
        The port from which the web api is served (usually blank, protocol determined).
1 targets failed
runMain subprocess failed

So, we can do all of this configuring in a single simple command:

$ ./feedletter set-config --web-api-protocol https --web-api-host-name play.feedletter.org --time-zone America/New_York
[49/49] runMain 
¦ Configuration Key     ¦ Value                                                 ¦
¦ ConfirmHours          ¦ 48                                                    ¦
¦ DumpDbDir             ¦ throws com.mchange.feedletter.db.ConfigurationMissing ¦
¦ MailBatchDelaySeconds ¦ 600                                                   ¦
¦ MailBatchSize         ¦ 100                                                   ¦
¦ MailMaxRetries        ¦ 5                                                     ¦
¦ MastodonMaxRetries    ¦ 10                                                    ¦
¦ TimeZone              ¦ America/New_York                                      ¦
¦ WebApiBasePath        ¦ /                                                     ¦
¦ WebApiHostName        ¦ play.feedletter.org                                   ¦
¦ WebApiPort            ¦ None                                                  ¦
¦ WebApiProtocol        ¦ https                                                 ¦
¦ WebDaemonInterface    ¦                                             ¦
¦ WebDaemonPort         ¦ 8024                                                  ¦

11. Add feeds to watch

Let's check out the add-feed command:

$ ./feedletter add-feed --help
[49/49] runMain 
    feedletter add-feed --ping <feed-url>
    feedletter add-feed [--min-delay-minutes <minutes>] [--await-stabilization-minutes <minutes>] [--max-delay-minutes <minutes>] [--recheck-every-minutes <minutes>] <feed-url>

Add a new feed from which mail or notifications may be generated.

Options and flags:
    --help
        Display this help text.
    --ping
        Check feed as often as possible, notify as soon as possible, regardless of (in)stability.
    --min-delay-minutes <minutes>
        Minimum wait (in minutes) before a newly encountered item can be notified.
    --await-stabilization-minutes <minutes>
        Period (in minutes) over which an item should not have changed before it is considered stable and can be notified.
    --max-delay-minutes <minutes>
        Notwithstanding other settings, maximum period past which an item should be notified, regardless of its stability.
    --recheck-every-minutes <minutes>
        Delay between refreshes of feeds, and redetermining items' availability for notification.

When we add feeds, we also define how "finalization" of feed items will be defined. Items will never be notified or considered final prior to min-delay-minutes. Even after this period has passed, they will not be considered final unless they have been stable (the item has been unchanged) for at least await-stabilization-minutes, or until max-delay-minutes has passed. (max-delay-minutes is a failsafe, in case feed items never stabilize due to a changing timestamp or such.)

Feeds will be polled every recheck-every-minutes minutes.

If --ping (and only --ping) is set, feedletter will poll at its maximum frequency and notify immediately, irrespective of the item's (in)stability.
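The finalization rules above can be sketched as a small predicate. This is a toy model under stated assumptions, not feedletter's actual implementation; FeedTimings, firstSeen, and lastChanged are illustrative names:

```scala
import java.time.{Duration, Instant}

// Illustrative names; feedletter's real internals differ.
case class FeedTimings( minDelayMins : Long, awaitStabilizationMins : Long, maxDelayMins : Long )

def isFinal( firstSeen : Instant, lastChanged : Instant, now : Instant, timings : FeedTimings ) : Boolean =
  val age       = Duration.between( firstSeen, now ).toMinutes
  val stableFor = Duration.between( lastChanged, now ).toMinutes
  if age < timings.minDelayMins then false           // never final before min-delay-minutes
  else if age >= timings.maxDelayMins then true      // failsafe: final after max-delay-minutes
  else stableFor >= timings.awaitStabilizationMins   // otherwise require stability
```

Under the defaults we'll see below (30/15/180), an item seen 60 minutes ago that changed 5 minutes ago is not yet final; one that hasn't changed in 20 minutes is.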

All of the (non-ping) values have defaults. Let's have our application watch the blog Lawyers, Guns, and Money, whose feed is at https://www.lawyersgunsmoneyblog.com/feed:

$ ./feedletter add-feed  https://www.lawyersgunsmoneyblog.com/feed
[49/49] runMain 
¦ Feed ID ¦ Feed URL                                  ¦ Min Delay Mins ¦ Await Stabilization Mins ¦ Max Delay Mins ¦ Recheck Every Mins ¦ Added                       ¦ Last Assigned               ¦
¦ 1       ¦ https://www.lawyersgunsmoneyblog.com/feed ¦ 30             ¦ 15                       ¦ 180            ¦ 10                 ¦ 2024-01-27T17:03:47.452533Z ¦ 2024-01-27T17:03:47.452533Z ¦

So, by default, this feed will wait at least 30 minutes before notifying, and require a post to have been stable for at least 15 minutes. After 180 minutes, an item will be considered final no matter what. The feed will be checked approximately every 10 minutes.

If you don't like these values, you can change them any time with the ./feedletter alter-feed command.

I am not republishing these blogs without permission. That would be icky. I'm using these feeds for demonstration purposes. I'll be their only e-mail subscriber.

By the time you read this tutorial, play.feedletter.org will have been sadly retired.

Let's add another feed to watch, Atrios' Eschaton blog, whose feed URL is https://www.eschatonblog.com/feeds/posts/default?alt=rss. I'm just going to stick with the default timings for now:

$ ./feedletter add-feed https://www.eschatonblog.com/feeds/posts/default?alt=rss

[49/49] runMain 
¦ Feed ID ¦ Feed URL                                                 ¦ Min Delay Mins ¦ Await Stabilization Mins ¦ Max Delay Mins ¦ Recheck Every Mins ¦ Added                       ¦ Last Assigned               ¦
¦ 1       ¦ https://www.lawyersgunsmoneyblog.com/feed                ¦ 30             ¦ 15                       ¦ 180            ¦ 10                 ¦ 2024-01-27T17:03:47.452533Z ¦ 2024-01-27T17:03:47.452533Z ¦
¦ 2       ¦ https://www.eschatonblog.com/feeds/posts/default?alt=rss ¦ 30             ¦ 15                       ¦ 180            ¦ 10                 ¦ 2024-01-27T17:04:55.092686Z ¦ 2024-01-27T17:04:55.092686Z ¦

12. Define "subscribables" to feeds

Once the application is watching feeds, we can define various kinds of "subscribables" to them.

A subscribable is a subscription type. We use the made-up word to distinguish a thing you can subscribe to (a "subscribable") from an individual's subscription.

E-mail subscribables are by default one-post-per-newsletter, but they can also be defined as daily digests, weekly compendia, or bundles of every n posts, for an n you choose.

Let's take a look at the ./feedletter define-email-subscribable command:

$ ./feedletter define-email-subscribable --help
[49/49] runMain 
Usage: feedletter define-email-subscribable --feed-id <feed-id> --name <name> --from <e-mail address> [--reply-to <e-mail address>] [--compose-untemplate <fully-qualified-name>] [--confirm-untemplate <fully-qualified-name>] [--removal-notification-untemplate <fully-qualified-name>] [--status-change-untemplate <fully-qualified-name>] [--each | --daily [--time-zone <id>] | --weekly [--time-zone <id>] | --num-items-per-letter <num>] [--extra-param <key:value>]...

Define a new email subscribable, a mailing list to which users can subscribe.

Options and flags:
    --help
        Display this help text.
    --feed-id <feed-id>
        The ID of the RSS feed to be watched.
    --name <name>
        A name for the new subscribable.
    --from <e-mail address>
        The email address from which emails should be sent.
    --reply-to <e-mail address>
        E-mail address to which recipients should reply (if different from the 'from' address).
    --compose-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will render notifications.
    --confirm-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will ask for e-mail confirmations.
    --removal-notification-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will be mailed to users upon unsubscription.
    --status-change-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will render results of GET request to the API.
    --each
        E-mail each item.
    --daily
        E-mail a compilation, once a day.
    --time-zone <id>
        ID of a time zone for determining the beginning and end of the period.
    --weekly
        E-mail a compilation, once a week.
    --num-items-per-letter <num>
        E-mail every fixed number of posts.
    --extra-param <key:value>
        An extra parameter your notification renderers might use.
1 targets failed
runMain subprocess failed

The only required options are --feed-id <feed-id>, --name <name>, and --from <e-mail address>. If you set only these, feedletter will use its default style (you'll not have set any custom "untemplates"), it will have no "reply to" address distinct from the "from" address you've given, and it will be of type --each, that is, one e-mail per post.

If you set the flag --daily you'll send daily digests. If you set the flag --weekly, then weekly compendia. If you set --num-items-per-letter <num>, you'll send an e-mail every num posts.

Less commonly, you can set any number of --extra-param items. These can be passed through to custom untemplates, to configure your own styles and themes as you see fit.
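To make the idea concrete, extra params are simple key:value pairs. A toy sketch of how a custom untemplate might consult them, assuming they arrive as a plain Map; the "themeColor" key is made up for illustration:

```scala
// Toy sketch only: we model --extra-param values as a simple Map.
// "themeColor" is a hypothetical key, not anything feedletter defines.
def themeColor( extraParams : Map[String, String] ) : String =
  extraParams.getOrElse( "themeColor", "#000000" )
```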

Lawyers, Guns, and Money tends to post long-ish essays, so the default --each type is probably appropriate.

Let's recall (scroll up!) that when we added that feed, it was given ID 1. So let's create a subscribable:

$ ./feedletter define-email-subscribable --name lgm --feed-id 1 --from feedletter@feedletter.org
[49/49] runMain 


Subscribable Name:    lgm
Feed ID:              1
Subscription Manager: {
    "composeUntemplateName": "com.mchange.feedletter.default.email.composeUniversal_html",
    "statusChangeUntemplateName": "com.mchange.feedletter.default.email.statusChange_html",
    "confirmUntemplateName": "com.mchange.feedletter.default.email.confirm_html",
    "from": {
        "addressPart": "feedletter@feedletter.org",
        "type": "Email",
        "version": 1
    "removalNotificationUntemplateName": "com.mchange.feedletter.default.email.removalNotification_html",
    "extraParams": {},
    "type": "Email.Each",
    "version": 1
An email subscribable to feed with ID '1' named 'lgm' has been created.

We can create more than one subscribable to a single feed! Let's also make a daily roundup option for Lawyers, Guns, and Money:

$ ./feedletter define-email-subscribable --name lgm-daily --feed-id 1 --from feedletter@feedletter.org --daily
[49/49] runMain 


Subscribable Name:    lgm-daily
Feed ID:              1
Subscription Manager: {
    "composeUntemplateName": "com.mchange.feedletter.default.email.composeUniversal_html",
    "statusChangeUntemplateName": "com.mchange.feedletter.default.email.statusChange_html",
    "confirmUntemplateName": "com.mchange.feedletter.default.email.confirm_html",
    "from": {
        "addressPart": "feedletter@feedletter.org",
        "type": "Email",
        "version": 1
    "removalNotificationUntemplateName": "com.mchange.feedletter.default.email.removalNotification_html",
    "extraParams": {},
    "type": "Email.Daily",
    "version": 1
An email subscribable to feed with ID '1' named 'lgm-daily' has been created.

Atrios' Eschaton blog publishes frequent, sometimes very short posts. Let's create a subscribable that sends out groups of three. Recall from above that its feed ID was 2. So...

$ ./feedletter define-email-subscribable --name atrios-three --feed-id 2 --from feedletter@feedletter.org --num-items-per-letter 3
[49/49] runMain 


Subscribable Name:    atrios-three
Feed ID:              2
Subscription Manager: {
    "composeUntemplateName": "com.mchange.feedletter.default.email.composeUniversal_html",
    "statusChangeUntemplateName": "com.mchange.feedletter.default.email.statusChange_html",
    "numItemsPerLetter": 3,
    "confirmUntemplateName": "com.mchange.feedletter.default.email.confirm_html",
    "from": {
        "addressPart": "feedletter@feedletter.org",
        "type": "Email",
        "version": 1
    "removalNotificationUntemplateName": "com.mchange.feedletter.default.email.removalNotification_html",
    "extraParams": {},
    "type": "Email.Fixed",
    "version": 1
An email subscribable to feed with ID '2' named 'atrios-three' has been created.

13. Enable feedletter as a systemd daemon.

Let’s define a feedletter.service file right here in our installation directory, just because it seems convenient. We edit /home/feedletter/feedletter-local/feedletter.service:

[Unit]
Description=Feedletter RSS-To-Mail-Etc Service
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/home/feedletter/feedletter-local/feedletter.pid
User=feedletter
ExecStart=/home/feedletter/feedletter-local/feedletter daemon --fork

[Install]
WantedBy=multi-user.target
Now we set up the symlinks that make this a permanent systemd service. First we exit to get back to root, then…

$ exit
# cd /etc/systemd/system/
# ln -s /home/feedletter/feedletter-local/feedletter.service 
# systemctl enable feedletter
Created symlink /etc/systemd/system/multi-user.target.wants/feedletter.service → /home/feedletter/feedletter-local/feedletter.service.

Now let's actually start our new service, and check its logs:

# systemctl start feedletter
# journalctl -u feedletter --follow
Jan 27 17:11:53 feedletter-play systemd[1]: Starting feedletter.service - Feedletter RSS-To-Mail-Etc Service...
Jan 27 17:11:59 feedletter-play systemd[1]: feedletter.service: Can't open PID file /home/feedletter/feedletter-local/feedletter.pid (yet?) after start: No such file or directory
Jan 27 17:12:02 feedletter-play feedletter[37405]: Jan 27, 2024 5:12:02 PM com.mchange.v2.log.MLog
Jan 27 17:12:02 feedletter-play feedletter[37405]: INFO: MLog clients using java 1.4+ standard logging.
Jan 27 17:12:06 feedletter-play systemd[1]: Started feedletter.service - Feedletter RSS-To-Mail-Etc Service.
Jan 27 17:12:07 feedletter-play feedletter[37405]: 2024-01-27@17:12:07 [INFO] [com.mchange.feedletter.Daemon] Spawning daemon fibers.
Jan 27 17:12:07 feedletter-play feedletter[37405]: 2024-01-27@17:12:07 [INFO] [com.mchange.feedletter.Daemon] Starting web API service on interface '', port 8024.

It all looks good!

Occasionally I've had problems at first seeing log entries using journalctl. I'd see messages like

No journal files were found.
-- No entries --

The fix is to run

# systemctl restart systemd-journald.service

and then to restart the feedletter service.

14. Let users subscribe to your subscribables!

The feedletter service has a simple API that, for now, uses (abuses) the HTTP GET method. Here’s an example of an HTML form that would allow subscription to our new newsletter:

    <form id="subscribe-form" action="https://play.feedletter.org/v0/subscription/create" method="GET">
      <input type="hidden" name="subscribableName" value="lgm">
      E-mail: <input type="text" name="addressPart"><br>
      Display Name: <input type="text" name="displayNamePart"> (Optional)<br>
      <input name="main-submit" value="Subscribe!" type="submit">

As of feedletter v0.0.8, you can use method="POST" in subscribe forms.

Using method="GET" (and therefore also simulating form submission by pasting a URL) remain supported as well.

(You can see live examples of feedletter subscription forms on the subscribe page of this site!)
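Since it's a plain GET endpoint, the subscription URL the form produces can also be constructed programmatically. A sketch, using the endpoint and parameter names from the form above; the e-mail address in the usage comment is a placeholder:

```scala
import java.net.URLEncoder
import java.nio.charset.StandardCharsets.UTF_8

// Build the query-string URL the subscribe form above would submit.
def subscribeUrl( base : String, subscribableName : String, email : String, displayName : Option[String] ) : String =
  def enc( s : String ) = URLEncoder.encode( s, UTF_8 )
  val params = List( "subscribableName" -> subscribableName, "addressPart" -> email ) ++
    displayName.map( dn => "displayNamePart" -> dn )
  params.map( (k, v) => s"$k=${enc(v)}" ).mkString( s"$base?", "&", "" )

// e.g. subscribeUrl( "https://play.feedletter.org/v0/subscription/create", "lgm", "user@example.com", None )
```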

We will fake hitting the form above just by pasting the following URL into our browser:


We are immediately informed of our success:

Screenshot of 'Subscription Created' page

And, you've got mail!

Screenshot of e-mail requesting subscription confirmation

We hit the confirm link and we're done:

Screenshot of 'Subscription confirmed!' page

We've made two more subscribables we'll want to test, whose let's-fake-a-form URLs will be


We go through the same (faked) submit then confirm steps for each of these.

And now we're subscribed! We just have to wait for the mail to roll in.

15. Tweak the newsletter styles

But what will this mail actually look like? We can sneak a peek.

We will want to have two terminal windows open, logged into our feedletter host. In one terminal, we will run a single-page webserver that simulates the HTML e-mails we will receive.

In the second terminal, we can edit an untemplate, until we have the look we want. Then we can update our subscribable to use our perfected untemplate to generate its mails.

Let's get the simulation server running. It's easy to run, but we won't be able to see what it's serving if we don't open up a port on our server to serve it through. We'll use port 45612. Making that available on Ubuntu is just

# ufw allow 45612
Rules updated
Rules updated (v6)

Now we just become user feedletter again, and check out the feedletter-style command.

$ ./feedletter-style --help
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
    feedletter-style [--secrets <propsfile>] compose-multiple
    feedletter-style [--secrets <propsfile>] compose-single
    feedletter-style [--secrets <propsfile>] confirm
    feedletter-style [--secrets <propsfile>] removal-notification
    feedletter-style [--secrets <propsfile>] status-change

Iteratively edit and review the untemplates through which your posts will be notified.

Options and flags:
    --help
        Display this help text.
    --secrets <propsfile>
        Path to properties file containing SMTP, postgres, c3p0, and other configuration details.

Environment Variables:
        Path to properties file containing SMTP, postgres, c3p0, and other configuration details.

        Style a template that composes multiple items.
        Style a template that composes a single item.
        Style a template that asks users to confirm a subscription.
        Style a template that notifies users that they have subscribed.
        Style a template that informs users of a subscription status change.

You can style the infrastructure: the confirmation and removal e-mails, and the web pages that inform users that their subscription status has changed.

But mostly you'll want to style the compose untemplates. For subscribables that mail just one e-mail at a time, you'll want compose-single. For subscribables that will pull together multiple posts into a single mail, you'll want compose-multiple.

The ./feedletter-style command never terminates. You have to type <ctrl-c> to quit out.

This is because it's designed to be terminated and restarted each time you change underlying templates or css. A mill process runs perpetually, watching for changes and restarting whatever command you last tried.

Let's try compose-single. Our subscribable lgm sends just one post per e-mail. Let's try to style it.

$ ./feedletter-style compose-single --help
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
Usage: feedletter-style compose-single --subscribable-name <name> [--untemplate-name <fully-qualified-name>] [--first | --random | --guid <string>] [--e-mail <address> [--display-name <name>] | --sms <number> | --masto-instance-name <name> --masto-instance-url <url>] [--within-type-id <string>] [--interface <interface>] [--port <num>]

Style a template that composes a single item.

Options and flags:
    --help
        Display this help text.
    --subscribable-name <name>
        The name of an already defined subscribable that will use this template.
    --untemplate-name <fully-qualified-name>
        Fully qualified name of an untemplate to style.
    --first
        Display first item in feed.
    --random
        Choose random item from feed to display.
    --guid <string>
        Choose guid of item to display.
    --e-mail <address>
        The e-mail address to subscribe.
    --display-name <name>
        A display name to wrap around the e-mail address.
    --sms <number>
        The number to which messages should be sent.
    --masto-instance-name <name>
        A private name for this Mastodon instance.
    --masto-instance-url <url>
        The URL of the Mastodon instance
    --within-type-id <string>
        A subscription-type specific sample within-type-id for the notification.
    --interface <interface>
        The interface on which to bind an HTTP server, which will serve the rendered untemplate.
    --port <num>
        The port on which to run a HTTP server, which will serve the rendered untemplate.

There's a lot here, but note that the only required option is --subscribable-name. We've opened port 45612, so we'll also want to hit the --port option. Let's try running the ./feedletter-style compose-single command for subscribable lgm:

$ ./feedletter-style compose-single --subscribable-name lgm --port 45612
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
Starting single-page webserver on interface '', port 45612...

Great. Now let's see how our newsletter looks, with its HTML served on http://play.feedletter.org:45612/. Not so good!

Screenshot of web-served lgm newsletter via ./feedletter-style compose-single, with a badly formatted image

(Update: As of feedletter v0.0.8 you can also style newsletters by e-mail, in addition to hitting a development webserver with a browser.)

By default, we just pulled the first item (and most recent, since blogs are usually reverse-chronological) from the feed. We can also pull a random item off the feed to view with --random, or a particular item identified by its <guid> element in the feed with --guid <guid>.

Let's see how we can restyle this post to make it a bit better.

We have been using the built-in default untemplate to compose our items. We cannot modify that.

But our feedletter installation directory contains a copy of this untemplate that we can deploy and tweak.

To do so, we'll have to create a folder for this file under untemplate. We’ll call it tutorial. Then...

$ mkdir untemplate/tutorial/
$ cp sample/defaultCompose.html.untemplate untemplate/tutorial/lgmCompose.html.untemplate

feedletter now has access to this untemplate, under a name you can find by calling ./feedletter list-untemplates:

$ ./feedletter list-untemplates
[42/49] compile 
[info] compiling 2 Scala sources to /home/feedletter/feedletter-local/out/compile.dest/classes ...
[info] done compiling
[49/49] runMain 
¦ Untemplate, Fully Qualified Name                              ¦ Input Type                                                                                          ¦
¦ com.mchange.feedletter.default.email.composeUniversal_html    ¦ com.mchange.feedletter.style.ComposeInfo.Universal                                                  ¦
¦ com.mchange.feedletter.default.email.confirm_html             ¦ com.mchange.feedletter.style.ConfirmInfo                                                            ¦
¦ com.mchange.feedletter.default.email.item_html                ¦ scala.Tuple2[com.mchange.feedletter.style.ComposeInfo.Universal,com.mchange.feedletter.ItemContent] ¦
¦ com.mchange.feedletter.default.email.removalNotification_html ¦ com.mchange.feedletter.style.RemovalNotificationInfo                                                ¦
¦ com.mchange.feedletter.default.email.statusChange_html        ¦ com.mchange.feedletter.style.StatusChangeInfo                                                       ¦
¦ com.mchange.feedletter.default.email.style_css                ¦ scala.collection.immutable.Map[java.lang.String,scala.Any]                                          ¦
¦ tutorial.lgmCompose_html                                      ¦ com.mchange.feedletter.style.ComposeInfo.Universal                                                  ¦

Now we can ask feedletter-style to show us what this post would look like using our "new" untemplate to render it:

$ ./feedletter-style compose-single --subscribable-name lgm --untemplate-name tutorial.lgmCompose_html --port 45612
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
Starting single-page webserver on interface '', port 45612...

Initially it looks exactly the same, because it is just a copy of the default untemplate!

But now we can just modify that file, untemplate/tutorial/lgmCompose.html.untemplate, hit reload, and play!

This is why we needed a second terminal window. We edit the template in one terminal while ./feedletter-style is running in the other. After each edit and save, we hit reload to see our changes. (We may have to wait 10-15 secs!)

Occasionally the autoreload glitches out, in which case you should manually <ctrl-c> and rerun your ./feedletter-style command.

If you see error messages when you rerun, you may have hit compilation errors (an untemplate is transformed into Scala source code, which is then compiled), which you will have to resolve. (You can ask for help!)

Our new untemplate has a section that looks like this:

      <( style_css() )>
      /* add extra CSS styling here! */

Let's go ahead and add some CSS! We'll edit it to...

      <( style_css() )>
      /* add extra CSS styling here! */
      img {
        width: 100%;
        height: auto;
      }

We save, and hit reload on our browser still pointed at http://play.feedletter.org:45612/, and see...

Screenshot of web-served lgm newsletter with a better laid-out image.

Much better!

If we are very picky, we see that at the end of our post, there is a line that doesn't logically belong in the post, and should be italicized or something.

Screenshot of web-served lgm newsletter with a better laid-out image.

If we view the source, we'll find it's the last <p> element in <div class="item-contents">. So we modify our styling as follows:

      <( style_css() )>
      /* add extra CSS styling here! */
      img {
        width: 100%;
        height: auto;
      }
      div.item-contents p:last-of-type {
        font-style: italic;
      }

Looks better!

Screenshot of web-served lgm newsletter with a better laid-out image.

We can keep editing all we like. We add the --random flag and run our ./feedletter-style command over and over to make sure that posts in general render well.

When we are happy, we want to tell our subscribable to use the new untemplate.

Remember, the name of the untemplate we've been editing was tutorial.lgmCompose_html.

The command for this is ./feedletter set-untemplates. Let's check it out:

$ ./feedletter set-untemplates --help
[49/49] runMain 
Usage: feedletter set-untemplates --subscribable-name <name> [--compose-untemplate <fully-qualified-name>] [--confirm-untemplate <fully-qualified-name>] [--removal-notification-untemplate <fully-qualified-name>] [--status-change-untemplate <fully-qualified-name>]

Update the untemplates used to render subscriptions.

Options and flags:
    --help
        Display this help text.
    --subscribable-name <name>
        The name of an already-defined subscribable.
    --compose-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will render notifications.
    --confirm-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will ask for e-mail confirmations.
    --removal-notification-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will be mailed to users upon unsubscription.
    --status-change-untemplate <fully-qualified-name>
        Fully qualified name of untemplate that will render results of GET request to the API.
1 targets failed
runMain subprocess failed

Okay. So we run...

$ ./feedletter set-untemplates --subscribable-name lgm --compose-untemplate tutorial.lgmCompose_html
[49/49] runMain 
Updated Subscription Manager: {
    "composeUntemplateName": "tutorial.lgmCompose_html",
    "statusChangeUntemplateName": "com.mchange.feedletter.default.email.statusChange_html",
    "confirmUntemplateName": "com.mchange.feedletter.default.email.confirm_html",
    "from": {
        "addressPart": "feedletter@feedletter.org",
        "type": "Email",
        "version": 1
    "removalNotificationUntemplateName": "com.mchange.feedletter.default.email.removalNotification_html",
    "extraParams": {},
    "type": "Email.Each",
    "version": 1

And we are done! We have restyled our newsletter.

We could (and should!) do the same with our other subscriptions (using ./feedletter-style compose-multiple). We could also do much more elaborate things than just mess with the stylesheet. Our compose untemplate was really the definition of a pretty arbitrary Scala function that accepted a ComposeInfo.Single object and produced a String (embedded in an untemplate.Result).

Learn more about untemplates here.
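The single/multiple distinction can be pictured with a toy model. These names are simplified for illustration, not feedletter's actual definitions:

```scala
// Toy model of a compose-info hierarchy: a shared parent type with
// single-item and multi-item cases. Simplified; not feedletter's real code.
sealed trait Universal
case class Single( item : String ) extends Universal
case class Multiple( items : Seq[String] ) extends Universal

// A renderer written against the parent type handles either case.
def render( info : Universal ) : String = info match
  case Single( item )    => item
  case Multiple( items ) => items.mkString( "\n\n" )
```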

The default compose untemplate actually accepts a ComposeInfo.Universal, a parent type of both ComposeInfo.Single and ComposeInfo.Multiple. So we can fix up the glitches we know about already in our lgm-daily subscribable just by setting for it the same compose untemplate:

$ ./feedletter set-untemplates --subscribable-name lgm-daily --compose-untemplate tutorial.lgmCompose_html
[49/49] runMain 
Updated Subscription Manager: {
    "composeUntemplateName": "tutorial.lgmCompose_html",
    "statusChangeUntemplateName": "com.mchange.feedletter.default.email.statusChange_html",
    "confirmUntemplateName": "com.mchange.feedletter.default.email.confirm_html",
    "from": {
        "addressPart": "feedletter@feedletter.org",
        "type": "Email",
        "version": 1
    "removalNotificationUntemplateName": "com.mchange.feedletter.default.email.removalNotification_html",
    "extraParams": {},
    "type": "Email.Daily",
    "version": 1

If we take a look at that with compose-multiple...

$ ./feedletter-style compose-multiple --subscribable-name lgm-daily --port 45612
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
Starting single-page webserver on interface '', port 45612...

We'll find that it looks pretty good!

16. Advanced: Customize the content

feedletter supports a variety of customizers, including

  • subject customizers
  • contents customizers
  • "MastoAnnouncement" customizers"
  • template params customizers (see templating note below)

For each subscribable, you can define just one customizer of each kind, but a customizer can perform any number of steps internally.

For an example, we'll build a content customizer. Both of our feeds frequently embed YouTube videos as iframe HTML elements in their blog posts. Unfortunately, mail clients generally do not render this form of embedded content, leaving awkward empty spaces in our newsletters, and sometimes mangling their formatting.

So let's build a content customizer that replaces these with well-behaved div elements containing links to the resources that would have been in the iframe. We'll include a class="embedded" attribute on the div elements, so that we will be able to style them however we want.

Writing customizers means writing Scala code. We'll use the excellent jsoup library to manipulate HTML. We'll give ourselves space to work by creating a tutorial package in our installation's src directory, and then editing a file called core.scala inside that.

$ mkdir src/tutorial
$ emacs src/tutorial/core.scala

First, we write a function that takes post HTML, and transforms the iframe elements into the div elements we're after. Then we embed that in the form of a Customizer.Contents, which is a function that accepts some metainformation and the original contents of a feed as ItemContent objects, and then outputs transformed contents.

Here is what all that looks like:

package tutorial

import org.jsoup.Jsoup
import org.jsoup.nodes.{Document,Element}

import scala.jdk.CollectionConverters.*

import com.mchange.feedletter.*
import com.mchange.feedletter.style.Customizer

private def createDivEmbedded( link : String ) : Element =
  val div = new Element("div").attr("class","embedded")
  val a = new Element("a").attr("href",link)
  val linkText =
    if link.toLowerCase.contains("youtube.com/") then
      "Embedded YouTube video"
    else
      "Embedded item"
  a.text( linkText )
  div.appendChild( a )
  div

def iframeToDivEmbedded( html : String ) : String =
  val doc = Jsoup.parseBodyFragment( html )
  val iframes = doc.select("iframe").asScala
  iframes.foreach: ifr =>
    val src = ifr.attribute("src").getValue()
    ifr.replaceWith( createDivEmbedded(src) )
  doc.body().html()

val IframelessCustomizer : Customizer.Contents =
  ( subscribableName : SubscribableName, subscriptionManager : SubscriptionManager, withinTypeId : String, feedUrl : FeedUrl, contents : Seq[ItemContent] ) =>
    contents.map: ic =>
      ic.article match
        case Some( html ) => ic.withArticle( iframeToDivEmbedded( html ) )
        case None => ic

Once we have IframelessCustomizer defined, to "install" it, we just register it as the Customizer.Contents for each of our subscribables in our installation's PreMain object.

We modify the default src/PreMain.scala, just inserting three Customizer.Contents.register(...) lines (and the import that brings in the name Customizer).

import com.mchange.feedletter.{UserUntemplates,Main}
import com.mchange.feedletter.style.{AllUntemplates,StyleMain}

import com.mchange.feedletter.style.Customizer

object PreMain:
  def main( args : Array[String] ) : Unit =
    AllUntemplates.add( UserUntemplates )
    Customizer.Contents.register("lgm", tutorial.IframelessCustomizer)
    Customizer.Contents.register("lgm-daily", tutorial.IframelessCustomizer)
    Customizer.Contents.register("atrios-three", tutorial.IframelessCustomizer)
    val styleExec =
      sys.env.get("FEEDLETTER_STYLE") match
        case Some( s ) => s.toBoolean
        case None      => false
    if styleExec then StyleMain.main(args) else Main.main(args)

Once the customizers are registered, they will be called whenever the application generates content for the named subscribable.

We can verify that our customizer does as we expect by using ./feedletter-style to preview newsletter output. (See above).

$ ./feedletter-style compose-multiple --subscribable-name atrios-three --port 45612
[50/50] runMainBackground 
Watching for changes to 14 paths and 9 other values... (Enter to re-run, Ctrl-C to exit)
Starting single-page webserver on interface '', port 45612...

We can find one of Atrios' "Rock on." posts, which used to render blank in mail clients but now renders like...

Screenshot of a transformed-to-div iframe

Of course we can style that div and link however we like.

Re: "TemplateParams" customizers

Confusingly, feedletter newsletters are rendered with two kinds of templating.

  • "untemplates" render newsletter HTML.
  • but that HTML can itself be a template, by including case-insensitive constructs like %PercentDelimitedKey% which get filled in just prior to notification.

The role of the second round of templating is to add subscriber-specific customizations, which might commonly include a particular subscriber's name and e-mail, as well as an unsubscribe link specific to that subscriber.

Each notification is rendered by an untemplate just once, but any %Key% left in that rendering can be filled in differently for each subscriber.

Template-params customizers let you add key-value pairs to the built-in set of available substitutions for these last-minute, per-subscriber customizations.
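The mechanics of that second round can be sketched in a few lines. This is a hypothetical illustration of the substitution mechanism, not feedletter's actual code; the key names are made up:

```scala
import scala.util.matching.Regex

// Hypothetical sketch of the second templating round: case-insensitive
// replacement of %PercentDelimitedKey% constructs with per-subscriber values.
// Unknown keys are left in place untouched.
def fillInKeys( rendered : String, params : Map[String,String] ) : String =
  val normalized = params.map( (k, v) => (k.toLowerCase, v) )
  """%([A-Za-z0-9]+)%""".r.replaceAllIn( rendered, m =>
    Regex.quoteReplacement( normalized.getOrElse( m.group(1).toLowerCase, m.matched ) )
  )

@main def demo() =
  val out = fillInKeys(
    "Hi %ToFull%! Unsubscribe: %unsubscribelink%",
    Map( "toFull" -> "Steve", "UnsubscribeLink" -> "https://example.com/u/abc" )
  )
  println( out ) // prints "Hi Steve! Unsubscribe: https://example.com/u/abc"
```

The untemplate runs once per notification; a function like this runs once per subscriber, with a params map extended by any template-params customizers you've registered.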


This was a lot!

It probably seems intimidating.

But if you know how to self-host systemd daemon processes, much of the above should have been familiar. Setting up a feedletter server should take one to two hours of your time.

Defining new feeds and subscribables, once the server is set up, becomes just a 5 minute operation.

One feedletter instance can host as many feeds and subscribables as you like.

Restyling your subscribables, or writing customizers and bespoke untemplates for them, can take longer. Developing custom front-ends is time-consuming detail work.

I'd love it if you gave feedletter a try!


APIs against dependent types in Scala

Scala supports instance-dependent types, which is very cool! So I can define...

class Human( name : String ):
  case class Tooth( num : Int ):
    override def toString(): String = s"${name}'s #${num} tooth"
  val teeth = Set.from( (1 to 32).map( Tooth.apply ) )
  def brush( myTeeth : Set[Tooth] ) : Unit = println(s"fluoride goodness for ${name}")
val me = new Human("Steve")
val you = new Human("Awesome")

me.brush( me.teeth )
//me.brush( you.teeth ) // gross! doesn't compile. (as it should not!)

My teeth and your teeth are different types, even though they are of the same class. The identity of the enclosing instance is a part of the type.

And we see here how that can be useful! Often inner classes represent internal structures that should mostly be managed by their enclosing instance. It's good that the compiler pushes back against code in which you might brush my teeth or pump my heart!

But sometimes inner instances are not so internal, or even if they are, an external thing might have business interacting with them. The virtual human we are modeling might have need of a dentist or a cardiologist.

Scala's type system doesn't prevent external things from accessing inner class instances, it just demands you do it via a correct type.

I know of two ways to define external APIs against instance-dependent types. First, Scala supports projection types, like Human#Tooth. Where an ordinary dot-separated path would have required me to identify some particular instance, Human#Tooth matches a tooth of any human.

A second way to hit instance-dependent types from an external API is to require the caller to identify the instance in the call, and then let the type of a later argument to the same call include the identified instance. I think it's kind of wild that Scala supports this. It's an example where the type of arguments to a statically declared function is effectively determined at runtime. You don't even need separate argument lists, although I think I prefer them.

class Dentist:
  def checkByProjection( tooth : Human#Tooth ) : Unit = println( s"Found ${tooth} (by projection)" )
  def checkByIdentifying( human : Human)( tooth : human.Tooth ) : Unit = println( s"Found ${tooth} (by identification)" )

val d  = new Dentist

// API by projection
d.checkByProjection( me.teeth.head )
d.checkByProjection( you.teeth.head )

// API by identification
d.checkByIdentifying( me )( me.teeth.head )
d.checkByIdentifying( you )( you.teeth.head )

// d.checkByIdentifying( me )( you.teeth.head ) // does not compile, as it should not
// d.checkByIdentifying( you )( me.teeth.head ) // does not compile, as it should not

I've used projection types a lot, over the eons. I know some people think that any need for external APIs against inner types is code smell or something. But I've found a variety of places where they seem to make sense, and the "do it right" workarounds (e.g. define some instance-independent abstract base type for the inner things, and write external APIs against that) just create busy work and maintenance complexity.

Nevertheless, in some corner cases, projection types aren't completely supported, and my sense is that much of the Scala community considers them icky (like brushing someone else's teeth).

Sometimes you need to write APIs against inner types by identification anyway, because you need to know stuff about the enclosing instance (which inner instances don't disclose unless they declare an explicit reference).

But sometimes you don't need to be told the identity of the outer instance (because it's not relevant to what you are doing, or because the inner instance discloses a reference explicitly).

Are projection types icky, and is it best to just standardize on requiring explicit identification of enclosing instances?

Or are projection types a cool trick we should delight in using?

Enquiring minds want to know!

(This blog doesn't support comments yet, but you can reply to this post on Mastodon.)


(Library + Script) vs (Application + Config File)


For Scala apps, instead of installing applications and writing separate config files, why not do config like this?

#!/usr/bin/env -S scala-cli shebang

//> using dep "com.example::cool-app:1.0.0"

val config = coolapp.Config(
  name = "Fonzie",                    // the name of your installation
  apparel = coolapp.Apparel.Leather,  // see elements defined in coolapp.Apparel
  gesture = coolapp.Gesture.ThumbsUp, // see elements defined in coolapp.Gesture
  reference = "Very dated, old man.", // a string to help users identify your character
  port = 8765                         // the port on which the app will run
)

coolapp.start( config )

Once upon a time, I spent a great deal of time supporting and integrating multiple config formats into my work. I used to describe c3p0 as a configuration project attached to a connection pool.

Lately, though, I find I am skipping any support of config files. I mostly write Scala, and Scala case classes strike me as a pretty good configuration format.

  • Since you can initialize case classes with named arguments, key = value, they can be made literate and intuitive.

  • They support rich comments, because the Scala language supports comments.

  • With simple string or integer values, they are as simple as most config formats.

Case-class config is extremely flexible, because your values are specified in a general purpose programming language, and can include variables or functions. And you get compile-time feedback for misconfigurations.
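As a self-contained sketch of the pattern (all names here are hypothetical, in the spirit of the coolapp example above), a misspelled key or a string where an enum is expected fails at compile time, with no hand-written validation logic:

```scala
// Hypothetical case-class-as-config. The enum constrains apparel to valid
// values; the compiler rejects anything else before the app ever runs.
enum Apparel:
  case Leather, Denim, Tweed

case class Config(
  name    : String,       // the name of your installation
  apparel : Apparel,      // constrained to the enum's cases above
  port    : Int = 8765    // defaults play the role of optional config keys
)

@main def show() =
  val config = Config(
    name    = "Fonzie",
    apparel = Apparel.Leather
    // port is omitted, so the default applies
  )
  println( config )
```

Something like `Config( name = "Fonzie", apparel = "Leather" )` or `Config( nam = "Fonzie", ... )` simply doesn't compile, which is exactly the fail-fast feedback a config format wants.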

When I first became enamored with case-classes-as-config, I wrote a special purpose bootstrap app that would compile a file containing a case-class-instance-as-config, then use Java reflection to load it from a container.

val podcast : Podcast =
  Podcast(
      mainUrl                = "https://superpodcast.audiofluidity.com/",
      title                  = "Superpodcast",
      description            = """|<p>Superpodcast is the best podcast you've ever heard.</p>
                                  |<p>In fact, you will never hear it.</p>""".stripMargin,
      guidPrefix             = "com.audiofluidity.superpodcast-",
      shortOpaqueName        = "superpodcast",
      mainCoverImageFileName = "some-cover-art.jpg",
      editorEmail            = "asshole@audiofluidity.com",
      defaultAuthorEmail     = "asshole@audiofluidity.com",
      itunesCategories       = immutable.Seq( ItunesCategory.Comedy ),
      mbAdmin                = Some(Admin(name="Asshole", email="asshole@audiofluidity.com")),
      mbLanguage             = Some(LanguageCode.EnglishUnitedStates),
      mbPublisher            = Some("Does Not Exist, LLC"),
      episodes               = episodes
  )

In more recent projects, I've just used either scala-cli or mill as a runner. Sometimes I've left the definition of a stub case-class instance in the src directory for users to fill in, as in fossilphant. Other times I've defined abstract main classes, asking users to extend them by overriding a method that supplies config as a case class instance, as in unify-rss.

package com.mchange.unifyrss

import scala.collection.*

import zio.*

abstract class AbstractDaemonMain extends ZIOAppDefault:

  def appConfig : AppConfig

override def run =
    for
      mergedFeedRefs   <- initMergedFeedRefs( appConfig )
      _                <- periodicallyResilientlyUpdateAllMergedFeedRefs( appConfig, mergedFeedRefs )
      _                <- ZIO.logInfo(s"Starting up unify-rss server on port ${appConfig.servicePort}")
      exitCode         <- server( appConfig, mergedFeedRefs )
    yield exitCode

So far, I've just instantiated these with concrete objects in Scala source files.

But it strikes me that a natural refinement would be to design libraries with entry points that accept a case-class-config object as an argument, and expect users to deploy them as e.g. scala-cli scripts. Just something like:

#!/usr/bin/env -S scala-cli shebang

//> using dep "com.example::cool-app:1.0.0"

val config = coolapp.Config(
  name = "Fonzie",                    // the name of your installation
  apparel = coolapp.Apparel.Leather,  // see elements defined in coolapp.Apparel
  gesture = coolapp.Gesture.ThumbsUp, // see elements defined in coolapp.Gesture
  reference = "Very dated, old man.", // a string to help users identify your character
  port = 8765                         // the port on which the app will run
)

coolapp.start( config )

There is a bit of ceremony, and a bit that might intimidate people not accustomed to Scala syntax and tools. But "standard" config file formats get complicated and intimidating too. Here users get quick feedback if they don't pick a valid value without developers having to write special validation logic. Users are still just deploying a text file, as they would with ordinary config.

If your priority is 100% user experience, then using a standard (or new and improved, ht Bill Mill) config file format and hand-writing informative, fail-fast validation logic is going to be a better way to go.

But your priority should not always be user experience! Not all software development should take the form of a "product" developed at a high cost that will then be amortized over sales to or adoption by a very large number of users.

Software is a form of collaboration, and often that collaboration will be more productive and evolve more quickly when "users" are understood to be reasonably capable and informed, so developers don't expand the scope of their work and their maintenance burden in order to render the application accessible to the most intimidated potential users.

Obviously it depends what you are doing! But if there is going to be a config file at all, you are already collaborating with a pretty restricted set of people who are okay with setting up and editing an inevitably arcane text file.

For many applications and collaborations, maintainability at moderate cost in time and money, and speed of evolution, are what matter. For these applications, when they are written in an expressive, strongly-typed language like Scala, defining config as a data structure in a script that then executes an app defined as an entry point to a library strikes me as a pretty good way to go.


Contributing to mill

I'm a big fan of Scala build tools, both sbt and mill. I've done some pretty big projects intimately based on sbt. Recently I've spent a lot of time in mill, because it's very well suited to static-site generators, and because I've had better success getting mill builds to call into Scala 3 code, as some of my site-generation tools require. I've contributed to both projects.

There are a few hints I want to give myself for when I contribute to mill.

Build and run

The trick is just

$ ./mill -i installLocal

in the mill repository. If the build succeeds, a mill-release executable appears in the target/ directory of the repository. One can test and play with that.


Pass the scalafmt check

mill wants code contributions to pass a scalafmt check before merging. You build mill with mill, of course, and mill makes this check easy.

To check formatting...

$ mill mill.scalalib.scalafmt.ScalafmtModule/checkFormatAll __.sources

To have mill go ahead and reformat your code...

$ mill mill.scalalib.scalafmt.ScalafmtModule/reformatAll __.sources

I have now twice, embarrassingly, forgotten to do this.


I'll probably update this entry in place over time, if I find more hints I want to keep.