Introduction
Hatsu is a self-hosted bridge that interacts with the Fediverse on behalf of your static site.
Out of the box, it can do all of the following:
- When a Fediverse user searches for a user of your site (`@catch-all@example.com`), it redirects to the corresponding user on the Hatsu instance.
- When a Fediverse user searches for your site URL (`https://example.com/hello-world`), it redirects to the corresponding post on the Hatsu instance.
- It accepts follow requests and pushes new posts to followers' homepages as they become available.
- It receives replies from Fediverse users and backfeeds them to your static site.
Best of all, these are fully automated! Just set it up once and you won't need to do anything else.
Comparison
Hatsu is still a work in progress. It is similar to Bridgy Fed, but differs in a few ways:
- Hatsu uses Feed (JSON / Atom / RSS) as a data source instead of HTML pages with microformats2.
- Hatsu doesn't require you to send Webmention reminders for creates and updates, whether automatically or manually; it's all fully automated.
- Hatsu is ActivityPub only, which means it doesn't handle Nostr, AT Protocol (Bluesky) or other protocols.
If you don't want to self-host, you may still want to use Bridgy Fed or Bridgy in some cases:
Bridgy Fed
- You want compatibility with platforms other than Mastodon.
- Your site has good microformats2 markup.
Bridgy
- You already have a Fediverse account ready to be used for this purpose.
- Your site has good microformats2 markup.
Getting Started
Setting up your static site
Once you register with a Hatsu instance, it updates fully automatically.
However, your site will need a few changes to take advantage of the ActivityPub features that Hatsu brings to the table.
Choose an instance
Once Hatsu supports public instances, a list of them may appear here.
Until then, you'll need to self-host an instance or find a person running a Hatsu instance and have them create an account for you.
Feed
For Hatsu to work, your site needs a valid JSON / Atom / RSS feed.
The feed should be auto-discoverable from the homepage:
```html
<!-- https://example.com -->
<!DOCTYPE html>
<html>
  <head>
    ...
    <link rel="alternate" type="application/feed+json" href="https://example.com/feed.json" />
    <link rel="alternate" type="application/atom+xml" href="https://example.com/atom.xml" />
    <link rel="alternate" type="application/rss+xml" href="https://example.com/rss.xml" />
  </head>
  <body>
    ...
  </body>
</html>
```
Hatsu detects all available feeds and prioritizes them in the order `JSON > Atom > RSS`.
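One way to picture that selection logic (a sketch for illustration only, not Hatsu's actual Rust implementation):

```javascript
// Sketch: pick the preferred feed from <link rel="alternate"> candidates.
// The order mirrors the documented priority: JSON > Atom > RSS.
const FEED_PRIORITY = [
  'application/feed+json',
  'application/atom+xml',
  'application/rss+xml',
]

function pickFeed(links) {
  for (const type of FEED_PRIORITY) {
    const match = links.find((link) => link.type === type)
    if (match) return match.href
  }
  return undefined
}
```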
JSON Feed
Hatsu uses `serde` to parse JSON Feed directly, so you can expect it to have first-class support.
Please make sure your feed passes the JSON Feed Validator first.
JSON Feed Items
Hatsu infers the object id from `item.url` and `item.id`.
It uses `item.url` first; if it doesn't exist, it tries to convert `item.id` to an absolute URL:

```
https://example.com/foo/bar => https://example.com/foo/bar
/foo/bar => https://example.com/foo/bar
foo/bar => https://example.com/foo/bar
```

Ideally, your `item.id` and `item.url` should be consistent absolute links:

```json
{
  "id": "https://example.com/foo/bar",
  "url": "https://example.com/foo/bar",
  "title": "...",
  "content_html": "...",
  "date_published": "..."
}
```
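The inference above can be sketched as follows (an illustration; Hatsu's actual implementation is in Rust):

```javascript
// Sketch: infer an object id from a JSON Feed item, resolving
// relative ids against the site origin. The origin default is a
// placeholder for illustration.
function inferObjectId(item, origin = 'https://example.com') {
  const candidate = item.url ?? item.id
  // new URL(input, base) resolves absolute, root-relative,
  // and relative inputs alike.
  return new URL(candidate, origin).href
}
```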
JSON Feed Extension
If you can customize your site's JSON Feed, you might also want to check out the Hatsu JSON Feed Extension.
Atom / RSS
Hatsu uses `feed-rs` to parse XML feeds and converts them manually.
Please make sure your feed passes the W3C Feed Validation Service first.
This section currently lacks testing, so feel free to report bugs.
Redirecting
There are two types of redirects required by Hatsu:
- Well-known files: redirect them to make your username searchable.
  - before: `https://example.com/.well-known/webfinger?resource=acct:carol@example.com`
  - after: `https://hatsu.local/.well-known/webfinger?resource=acct:carol@example.com`
- Requests with the `Accept: application/activity+json` header: redirect them to make your pages searchable.
  - before: `https://example.com/foo/bar`
  - after: `https://hatsu.local/posts/https://example.com/foo/bar`
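The two redirect rules can be sketched like this (illustrative only; `hatsu.local` stands in for your Hatsu instance):

```javascript
// Sketch of the two redirect rules Hatsu needs.
const instance = 'https://hatsu.local'

function redirectTarget(url, acceptHeader = '') {
  const { pathname, search } = new URL(url)
  // Rule 1: forward well-known files verbatim.
  if (pathname.startsWith('/.well-known/')) {
    return `${instance}${pathname}${search}`
  }
  // Rule 2: forward ActivityPub requests to the predictable object URL.
  if (acceptHeader.includes('application/activity+json')) {
    return `${instance}/posts/${url}`
  }
  return undefined // serve the page normally
}
```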
There are many ways to redirect them and you can pick one you like:
- with Static files and Markup: applies to most hosting services and SSGs.
- with Redirects file: works with Netlify and Cloudflare Pages.
- with Platform-Specific Configuration: works with Netlify and Vercel.
- with Aoba (Lume & Hono): SSG plugin for Lume and server middleware for Deno Deploy and Netlify.
Redirecting with Static files and Markup
This should apply to most hosting services and SSG.
Well Known
For the `.well-known/*` files, you need to fetch the corresponding contents from the Hatsu instance and output them as static files.
Replace `hatsu.local` with your Hatsu instance and `example.com` with your site.
Open your Hatsu instance's home page in a browser and press F12 -> Console to run:
```javascript
// .well-known/webfinger
await fetch('https://hatsu.local/.well-known/webfinger?resource=acct:example.com@hatsu.local').then(res => res.text())
// .well-known/nodeinfo
await fetch('https://hatsu.local/.well-known/nodeinfo').then(res => res.text())
// .well-known/host-meta
await fetch('https://hatsu.local/.well-known/host-meta').then(res => res.text()).then(text => text.split('\n').map(v => v.trim()).join(''))
// .well-known/host-meta.json
await fetch('https://hatsu.local/.well-known/host-meta.json').then(res => res.text())
```
This will fetch their text contents, which you need to save to your SSG's static files directory, making sure they are output to the `.well-known` folder.
AS2 Alternate
Only Mastodon and Misskey (and their forks) are known to support auto-discovery; other software requires redirection to search correctly. (w3c/activitypub#310)
Make your posts searchable on Fediverse by setting up auto-discovery.
Since Hatsu's object URLs are predictable, you just need to make sure that:
- The page you want to set up for auto-discovery is in the feed.
- The actual URL of the page is the same as in the feed. (see ./feed)
That's it! For `https://example.com/foo/bar`, just add the following tag to the document `head`:
Replace `hatsu.local` with your Hatsu instance.
```html
<link rel="alternate" type="application/activity+json" href="https://hatsu.local/posts/https://example.com/foo/bar" />
```
Redirecting with Redirects file
Works with Netlify and Cloudflare Pages.
Well Known
Create a `_redirects` file in the SSG static files directory containing the following:
Replace `hatsu.local` with your Hatsu instance.

```
/.well-known/host-meta* https://hatsu.local/.well-known/host-meta:splat 307
/.well-known/nodeinfo* https://hatsu.local/.well-known/nodeinfo 307
/.well-known/webfinger* https://hatsu.local/.well-known/webfinger 307
```
AS2
The redirects file only applies to `.well-known`. For AS2 redirects, you need to use AS2 Alternate.
Redirecting with Platform-Specific Configuration
Works with Netlify and Vercel.
Well Known
Netlify (`netlify.toml`)
Create a `netlify.toml` file in the root directory containing the following:
Replace `hatsu.local` with your Hatsu instance.

```toml
[[redirects]]
from = "/.well-known/host-meta*"
to = "https://hatsu.local/.well-known/host-meta:splat"
status = 307

[[redirects]]
from = "/.well-known/nodeinfo*"
to = "https://hatsu.local/.well-known/nodeinfo"
status = 307

[[redirects]]
from = "/.well-known/webfinger*"
to = "https://hatsu.local/.well-known/webfinger"
status = 307
```
Vercel (`vercel.json`)
Create a `vercel.json` file in the root directory containing the following:
Replace `hatsu.local` with your Hatsu instance.

```json
{
  "redirects": [
    {
      "source": "/.well-known/host-meta",
      "destination": "https://hatsu.local/.well-known/host-meta"
    },
    {
      "source": "/.well-known/host-meta.json",
      "destination": "https://hatsu.local/.well-known/host-meta.json"
    },
    {
      "source": "/.well-known/nodeinfo",
      "destination": "https://hatsu.local/.well-known/nodeinfo"
    },
    {
      "source": "/.well-known/webfinger",
      "destination": "https://hatsu.local/.well-known/webfinger"
    }
  ]
}
```
AS2
The platform-specific configuration only applies to `.well-known`. For AS2 redirects, you need to use AS2 Alternate.
Redirecting with Aoba (Lume & Hono)
SSG plugin for Lume and Server Middleware for Deno Deploy and Netlify.
Aoba provides some plugins and server middleware for Lume and Hono, including Hatsu integration.
Lume
The Lume plugin will do what you did in Redirecting with Static files and Markup for you.
Replace `hatsu.local` with your Hatsu instance and `example.com` with your site.
```ts
import lume from 'lume/mod.ts'
import { hatsuPlugin } from 'aoba/lume/plugins/hatsu.ts'

export default lume({ location: new URL('https://example.com') })
  .use(hatsuPlugin({
    // Hatsu instance
    instance: new URL('https://hatsu.local'),
    // match /posts/*
    match: [/^\/posts\/(.+)$/],
  }))
```
Lume Server
On top of that, the Lume server middleware can redirect `.well-known/*` and AS2 requests.
Replace `hatsu.local` with your Hatsu instance.
```ts
import Server from 'lume/core/server.ts'
import site from './_config.ts'
import { hatsuMiddleware } from 'aoba/lume/middlewares/hatsu.ts'

const server = new Server()

server.use(hatsuMiddleware({
  // Hatsu instance
  instance: new URL('https://hatsu.local'),
  // site location
  location: site.options.location,
}))

server.start()
```
Hono
It's not published to npm, so feel free to copy and paste it if you need to use it in a Node.js environment.
Replace `hatsu.local` with your Hatsu instance.
```ts
import { Hono } from 'hono'
import { hatsuWellKnown, hatsuObject } from 'aoba/hono/middlewares/hatsu.ts'

const app = new Hono()
const instance = new URL('https://hatsu.local')

// https://example.com/.well-known/* => https://hatsu.local/.well-known/*
app.use('/.well-known/*', hatsuWellKnown({ instance }))
// https://example.com/posts/foo => https://hatsu.local/posts/https://example.com/posts/foo
app.use('/posts/*', hatsuObject({ instance }))
```
Redirecting with FEP-612d
No software seems to implement FEP-612d at the moment, but that won't stop us from setting it up.
Just add the following TXT record:
Replace `hatsu.local` with your Hatsu instance and `example.com` with your site.

```
_apobjid.example.com https://hatsu.local/users/example.com
```
That's it!
Backfeed
Display mentions received by Hatsu on your site.
- based on KKna
- based on Mastodon Comments
- based on Webmention (TODO)
Backfeed based on KKna
Written by the same authors as Hatsu, KKna provides the simplest integration for Hatsu.
Examples
Replace `hatsu.local` with your Hatsu instance.
```html
<script type="module">
  import { defineConfig } from 'https://esm.sh/@kkna/context'
  import { hatsu } from 'https://esm.sh/@kkna/preset-hatsu'

  defineConfig({
    presets: [hatsu({ instance: 'https://hatsu.local' })],
  })
</script>
<script type="module" src="https://esm.sh/@kkna/component-material"></script>

<kkna-material></kkna-material>
```
You can use it with other presets or write your own components, see the KKna Documentation for details.
Backfeed based on Mastodon Comments
Examples
"Mastodon Comments" refers to the `@oom/mastodon-comments` library.
```html
<script type="module">
  import Comments from 'https://esm.run/@oom/mastodon-comments'
  customElements.define('oom-comments', Comments)
</script>
<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/@oom/mastodon-comments/src/styles.css"
/>

<oom-comments src="https://mastodon.gal/@misteroom/110810445656343599">
  No comments yet
</oom-comments>
```
The basic example should look something like the above, where https://mastodon.gal/@misteroom/110810445656343599 is the link to the post in the Fediverse.
Since Hatsu uses predictable URLs, you just need to change the `src`:
```javascript
// trim url
// input:
//   https://example.com/foo/bar#baz
//   https://example.com/foo/bar?baz=qux
// output:
//   https://example.com/foo/bar
const { origin, pathname } = new URL(window.location.href)
const url = new URL(pathname, origin).href
// get id (base64url encode)
// aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy
const id = btoa(url).replaceAll('+', '-').replaceAll('/', '_')
// oom-comments src
// https://hatsu.local/notice/aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy
const src = new URL(`/notice/${id}`, 'https://hatsu.local').href
```
So eventually it will look like this:
```html
<script type="module">
  import Comments from 'https://esm.run/@oom/mastodon-comments'
  customElements.define('oom-comments', Comments)
</script>
<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/@oom/mastodon-comments/src/styles.css"
/>

<oom-comments
  src="https://hatsu.local/notice/aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy"
>
  No comments yet
</oom-comments>
```
It's tedious to do by hand, but you can automate it.
Lume
If you're using Lume and its Simple Blog theme, it reads `data.comments.src`, so you can do this:
```ts
// _config.ts
import lume from 'lume/mod.ts'
import blog from 'https://deno.land/x/lume_theme_simple_blog@v0.14.0/mod.ts'

const site = lume()

site.use(blog())

// add this:
site.preprocess(['.md'], (pages) =>
  pages
    .filter((page) => page.type === 'post')
    .forEach((page) => {
      page.data.comments = {
        src: new URL(
          `/notice/${btoa(site.url(page.data.url, true))
            .replaceAll('+', '-')
            .replaceAll('/', '_')}`,
          'https://hatsu.local' // your hatsu instance
        ).href,
      }
    })
)

export default site
```
How it works
Hatsu mimics Pleroma's URL format. `@oom/mastodon-comments` extracts the ID from the URL and queries the corresponding API for the data.
```javascript
// oom-comments src
const src = 'https://hatsu.local/notice/aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy'
// origin: 'https://hatsu.local'
// pathname: '/notice/aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy'
const { origin, pathname } = new URL(src)
// id: 'aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy'
const [, id] = pathname.match(/^\/notice\/([^\/?#]+)/)
// api url: https://hatsu.local/api/v1/statuses/aHR0cHM6Ly9leGFtcGxlLmNvbS9mb28vYmFy/context
const url = new URL(`/api/v1/statuses/${id}/context`, origin)
```
Upon receiving a request, Hatsu's corresponding API will attempt to decode the base64url ID and return the data.
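That decoding step is just the reverse base64url transform. Sketched in JavaScript for illustration (Hatsu itself does this in Rust):

```javascript
// Sketch: decode a base64url notice id back into the original post URL.
function decodeNoticeId(id) {
  // base64url -> base64, then decode
  const base64 = id.replaceAll('-', '+').replaceAll('_', '/')
  return atob(base64)
}
```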
If you're interested in the code, you can also take a look at routes/statuses/status_context.rs and entities/context.rs.
Backfeed based on Webmention
This section is not yet implemented in Hatsu.
TODO
Install
- Docker Installation
- Binary Installation
- Nix/NixOS Installation
Docker Installation
Hatsu uses the `x86-64-v3` target architecture for optimal performance. If you are using an older processor, you currently need to build locally and change the corresponding values in `.cargo/config.toml`.
You can find images on GitHub: https://github.com/importantimport/hatsu/pkgs/container/hatsu
Hatsu uses three primary tags: `latest` (stable), `beta`, and `nightly`, literally.
docker run
Replace `{{version}}` with the version you want to use.
```shell
docker run -d \
  --name hatsu \
  --restart unless-stopped \
  -p 3939:3939 \
  -v /opt/hatsu/hatsu.sqlite3:/app/hatsu.sqlite3 \
  -e HATSU_DATABASE_URL=sqlite://hatsu.sqlite3 \
  -e HATSU_DOMAIN={{hatsu-instance-domain}} \
  -e HATSU_LISTEN_HOST=0.0.0.0 \
  -e HATSU_PRIMARY_ACCOUNT={{your-static-site}} \
  -e HATSU_ACCESS_TOKEN=123e4567-e89b-12d3-a456-426614174000 \
  ghcr.io/importantimport/hatsu:{{version}}
```
You need to specify all environment variables at once. For more information, see Environments.
docker compose
The examples folder contains some sample Docker Compose configurations; you can make your own modifications based on them.
Binary Installation
Hatsu uses the `x86-64-v3` target architecture for optimal performance. If you are using an older processor, you currently need to build locally and change the corresponding values in `.cargo/config.toml`.
Releases
You can download both stable and beta versions of Hatsu from the Releases page.
Artifacts
You can find the latest artifacts on the Workflow runs page.
GitHub has a document that explains how to download artifacts: https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts
Nix/NixOS Installation
Hatsu uses the `x86-64-v3` target architecture for optimal performance. If you are using an older processor, you currently need to build locally and change the corresponding values in `.cargo/config.toml`.
Hatsu is available in Nixpkgs, NUR and Flakes.
macOS (Darwin) is not supported.
Nixpkgs
Nixpkgs only has a stable version; you need nixos-24.11 or nixos-unstable.

```nix
{ pkgs, ... }: {
  environment.systemPackages = with pkgs; [
    hatsu
  ];
}
```
NUR (SN0WM1X)
The SN0WM1X NUR may contain beta versions, but there may be a delay.
You need to follow the instructions to set up NUR first.
```nix
{ pkgs, ... }: {
  environment.systemPackages = with pkgs; [
    nur.repos.sn0wm1x.hatsu
  ];
}
```
Flakes
This is untested. Add the hatsu repository directly to your flake inputs; it's up to date but unstable.
```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # ...
    hatsu.url = "github:importantimport/hatsu";
    hatsu.inputs.nixpkgs.follows = "nixpkgs";
    # ...
  };
}
```

```nix
{ inputs, pkgs, ... }: {
  environment.systemPackages = [
    inputs.hatsu.packages.${pkgs.system}.default
  ];
}
```
Environments
Hatsu supports dotenv, so you can set environment variables via a `.env` file.
Each variable is required unless it has the suffix "(optional)".
Some variables have a built-in preset (in the source code) or an example preset (in `.env.example`).
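A minimal `.env` might look like this (all values are placeholders; see the individual variables below):

```dotenv
HATSU_DATABASE_URL=sqlite://hatsu.sqlite3
HATSU_DOMAIN=hatsu.example.com
HATSU_LISTEN_HOST=0.0.0.0
HATSU_LISTEN_PORT=3939
HATSU_PRIMARY_ACCOUNT=blog.example.com
# optional
HATSU_ACCESS_TOKEN=123e4567-e89b-12d3-a456-426614174000
```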
HATSU_LOG
- default: `info`
- example: `info,tokio::net=debug,sqlx::query=warn`
The format is the same as `RUST_LOG`.
HATSU_ENV_FILE
- default: `/etc/hatsu/.env`
Hatsu will first try to find the dotenv file in the current directory; if unsuccessful, it will try the path indicated by `HATSU_ENV_FILE`.
HATSU_DATABASE_URL
- default: `sqlite::memory:`
- example: `sqlite://hatsu.sqlite3`
Should be a valid `sqlite:` or `postgres:` URL. See sea-ql.org.
`sqlite::memory:` is used by default so that Hatsu doesn't report errors when no database is configured.
If you're not using a Postgres database, I recommend keeping `sqlite://hatsu.sqlite3`.
HATSU_DOMAIN
- default: None
- example: `hatsu.local`
The domain name you assigned to this Hatsu instance.
HATSU_LISTEN_HOST
- default: `127.0.0.1`
- example: `0.0.0.0`
The hostname on which the Hatsu server listens.
HATSU_LISTEN_PORT
- default: `3939`
- example: `3939`
The port on which the Hatsu server listens.
HATSU_PRIMARY_ACCOUNT
- default: None
- example: None
The primary account for this Hatsu instance, which cannot be removed and is used as the `signed_fetch_actor`.
HATSU_ACCESS_TOKEN (optional)
- default: None
- example: None
For accessing the Admin API.
If this value is not set, the Hatsu Admin API will not be available.
This can be any string, but I recommend generating a random UUID v4:

```shell
echo "\nHATSU_ACCESS_TOKEN = \"$(cat /proc/sys/kernel/random/uuid)\"" >> .env
```
HATSU_NODE_NAME (optional)
- default: None
- example: None
Used for the NodeInfo `metadata.nodeName`.
HATSU_NODE_DESCRIPTION (optional)
- default: None
- example: None
Used for the NodeInfo `metadata.nodeDescription`.
Create Account
Ensure you set `HATSU_ACCESS_TOKEN` correctly in the previous section first, otherwise you will not be able to use the Hatsu Admin API.
just
The easiest way to create an account is with the `just` command-line tool:

```shell
just account create example.com
```

If you are using Docker, you need to exec into the container first:

```shell
docker exec -it hatsu /bin/bash
```
curl
You can also access the API via curl, as the `Justfile` does:

```shell
NAME="example.com" curl -X POST "http://localhost:$(echo $HATSU_LISTEN_PORT)/api/v0/admin/create-account?name=$(echo $NAME)&token=$(echo $HATSU_ACCESS_TOKEN)"
```
Block Instances or Actors
Ensure you set `HATSU_ACCESS_TOKEN` correctly in the previous section first, otherwise you will not be able to use the Hatsu Admin API.
Block URL
Blocks a URL. If the path is `/`, it is recognized as an instance.
Each time an activity is received, an origin match is performed against blocked instances and an exact match against blocked actors.

```shell
BLOCK_URL="https://example.com" curl -X POST "http://localhost:$(echo $HATSU_LISTEN_PORT)/api/v0/admin/block-url?url=$(echo $BLOCK_URL)&token=$(echo $HATSU_ACCESS_TOKEN)"
```
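The matching described above can be sketched like this (an illustration, not Hatsu's actual Rust code; the example domains are placeholders):

```javascript
// Sketch: decide whether an actor URL is blocked.
// A blocked URL whose path is '/' blocks the whole instance (origin match);
// any other blocked URL blocks exactly that actor (exact match).
function isBlocked(actorUrl, blockedUrls) {
  return blockedUrls.some((blocked) => {
    const b = new URL(blocked)
    return b.pathname === '/'
      ? new URL(actorUrl).origin === b.origin // instance block
      : actorUrl === blocked // actor block
  })
}
```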
Get the actor URL for a Fediverse user
In the Fediverse, user IDs typically look like `@foo@example.com`. So how do we get the corresponding URL? It's simple. Here's a JavaScript example you can run in your browser:

```javascript
const id = '@Gargron@mastodon.social'
// split id by @ symbol
// ['', 'Gargron', 'mastodon.social']
const [_, user, instance] = id.split('@')
// get webfinger json
const webfinger = await fetch(
  `https://${instance}/.well-known/webfinger?resource=acct:${user}@${instance}`,
  { headers: { accept: 'application/jrd+json' } }
).then(res => res.json())
// find rel=self
const url = webfinger.links.find(({ rel }) => rel === 'self').href
// https://mastodon.social/users/Gargron
console.log(url)
```

That's it! Given cross-origin restrictions, you may need to open the console on a page of the instance the account belongs to.
Unblock URL
The unblock counterpart of the above API; simply replace the path `/api/v0/admin/block-url` with `/api/v0/admin/unblock-url`.
Prepare
Clone Repository
This will create a `hatsu` subfolder in the current path.

```shell
git clone https://github.com/importantimport/hatsu.git && cd hatsu
```
Contributing
Go to the `hatsu` folder and you can see these:
- `docs`: the documentation you're looking at right now; built with mdBook.
- `migration`: SeaORM migration.
- `src`: main application.
Local Development
You'll need to complete prepare before you do this.
Dependencies
To develop Hatsu, you should first install Rust and some dependencies.
```shell
# Arch-based distro
sudo pacman -S git cargo
# Debian-based distro
sudo apt install git cargo
```
Running
First, copy the variables.
Set `HATSU_DOMAIN` to your prepared domain (e.g. `hatsu.example.com`, without `https://`) and `HATSU_PRIMARY_ACCOUNT` to your desired user domain (e.g. `blog.example.com`, without `https://`).

```shell
# copy env example
cp .env.example .env
# edit env
nano .env
```
Then create the database file and run:
```shell
# create database
touch hatsu.sqlite3
# run hatsu
cargo run
```
Hatsu now listens on `localhost:3939`; in order for it to connect to the Fediverse, you'll also need to set up a reverse proxy.
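For example, a minimal nginx server block might look like this (a sketch only; the domain and certificate paths are placeholders, not an official configuration):

```nginx
server {
    listen 443 ssl;
    server_name hatsu.example.com;  # your HATSU_DOMAIN

    # placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/hatsu.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hatsu.example.com/privkey.pem;

    location / {
        # forward everything to the local Hatsu server
        proxy_pass http://127.0.0.1:3939;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```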
Docker Development
You'll need to complete prepare before you do this.
Dependencies
To use Docker, you only need to install Docker and Docker Compose.
```shell
# Arch-based distro
sudo pacman -S docker docker-compose
# Debian-based distro
sudo apt install docker.io docker-compose
```
Running
First, copy the variables.
Set `HATSU_DOMAIN` to your prepared domain (e.g. `hatsu.example.com`, without `https://`) and `HATSU_PRIMARY_ACCOUNT` to your desired user domain (e.g. `blog.example.com`, without `https://`).

```shell
# copy env example
cp .env.example .env
# edit env
nano .env
```
Then create the database file and run:
```shell
# create database
touch hatsu.sqlite3
# run hatsu
docker-compose up -d
```
If no image has been built yet, one will be built automatically at execution time. Hatsu uses cargo-chef in the Dockerfile, which caches dependencies to avoid rebuilding them.
If you need to rebuild, add the `--build` flag:

```shell
docker-compose up -d --build
```
Compatibility Chart
Hatsu is primarily geared towards micro-blogging platforms in the Fediverse.
I've created a chart of all the platforms I expect to be compatible with; hopefully it will be filled in later:
Send
Receive
Akkoma, Sharkey, and other forks should be compatible with their upstreams, so they are not listed separately.
Federation in Hatsu
Supported federation protocols and standards
- ActivityPub (Server-to-Server)
- HTTP Signatures
- WebFinger
- NodeInfo
- Web Host Metadata
Supported FEPs
- FEP-67ff: FEDERATION.md
- FEP-f1d5: NodeInfo in Fediverse Software
- FEP-fffd: Proxy Objects
- FEP-4adb: Dereferencing identifiers with webfinger
- FEP-2c59: Discovery of a Webfinger address from an ActivityPub actor
ActivityPub
The following activities and object types are supported:
Send
- `Accept(Follow)`
- `Create(Note)`, `Update(Note)`
Receive
- `Follow(Actor)`, `Undo(Follow)`
- `Create(Note)`
- `Like(Note)`, `Undo(Like)`
- `Announce(Note)`, `Undo(Announce)`
Activities are implemented in a way that is compatible with Mastodon and other popular ActivityPub servers.
Notable differences
- No shared inbox.
Additional documentation
Hatsu JSON Feed Extension
To allow you to customize your postings, Hatsu defines a JSON Feed extension that uses the `_hatsu` key.
All extension keys for the Hatsu JSON Feed Extension are optional.
Note: everything here is experimental. It is always subject to breaking changes and does not follow semver.
Top-level
The following applies to the top-level JSON Feed object.
- `about` (optional but strongly recommended, string): the URL used to introduce this extension to humans. Should be https://github.com/importantimport/hatsu/issues/1 .
- `aliases` (optional, string): the customized username used for FEP-4adb and FEP-2c59.
- `banner_image` (optional, string): the URL of the banner image for the website in Hatsu.
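Putting the keys together, a feed using the extension might look like this (a hypothetical example; the title, URLs, and alias are placeholders):

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Example Site",
  "home_page_url": "https://example.com",
  "_hatsu": {
    "about": "https://github.com/importantimport/hatsu/issues/1",
    "aliases": "example",
    "banner_image": "https://example.com/banner.png"
  }
}
```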
Items
The following applies to a JSON Feed item.
- `about` (optional, string): the URL used to introduce this extension to humans. Should be https://github.com/importantimport/hatsu/issues/1 .
Packaging Status
If you are interested in packaging Hatsu for other distros, please let me know!
Arch Linux
- AUR (maintainer: @Decodetalkers)
Nix / NixOS
- Nixpkgs (maintainer: @kwaa)
- NUR (sn0wm1x) (maintainer: @kwaa)