If anything, events of the last few years have highlighted the importance of decentralization. We're seeing interesting ideas taking root in the technology field under the banner of what's being called Web3 or Web 3.0. Built on foundational technologies like blockchain, this is leading to the development of next-generation decentralized (and hopefully censorship-resistant) applications: cryptocurrency, file storage (IPFS), video streaming (Odysee), peer-to-peer messaging, and social media.
One example of the latter is the Nostr protocol (an acronym of Notes and Other Stuff Transmitted by Relays, though usually written as a word, Nostr). To be completely frank, while the idea seems technically promising, there is very little activity on the platform. There was a bit of an uptick after some notable people expressed interest, but even that seems to have died down. Obviously it's a bit of a chicken-and-egg problem.
Almost by definition, the graphs that constitute decentralized technologies often depend on volunteers providing nodes. In the Nostr protocol such nodes are called relays. This article shows the steps needed to set up such a relay on Ubuntu for anyone so inclined.
There are only a few things to take note of when you're creating your cloud server instance. Here I'll create one (DigitalOcean calls them ‘droplets’) in the SFO region in which I'm located.
The procedure I'm describing here assumes Ubuntu (or at least, Debian), so pick an up-to-date version:
In my experience the relay never saw anywhere near enough traffic to need anything more than the standard shared CPU and RAM, so I picked pretty much the cheapest available option with one shared CPU, ½ GB RAM, and a 10 GB boot disk:
We will need a more significant amount of storage to hold the message database. In my experience I accumulated about ¼ TB worth of messages over the course of half a year, so I'm going with ½ TB, which I can always expand later if necessary.
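If your provider does not format and mount the added volume for you, this is a minimal sketch of doing it by hand once the instance is up. It assumes the device shows up as /dev/sda (check with lsblk) and mounts it at /mnt/relay; both names are my own choices, not anything the provider or relay dictates:

mkfs.ext4 /dev/sda
mkdir -p /mnt/relay
echo '/dev/sda /mnt/relay ext4 defaults,nofail 0 2' >> /etc/fstab
mount -a

The relay's database directory can later be pointed at this mount via the database path in /etc/strfry.conf.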
I also needed to choose the SSH key with which the server instance will accept connections. All this will depend on the particular provider.
To minimize cost, I didn't add any of the ‘extras’ such as monitoring or backups. All in all this comes down to $9 per month.
There wasn't much to do here; I just made sure Ubuntu was up to date and that the installed packages were updated to their most recent versions.
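On a stock Ubuntu instance that amounts to something like:

apt update
apt full-upgrade -y

followed by a reboot if a new kernel came down.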
There is no need to create a user account: the installation of the relay service will create its own service account.
Since I don't have an Intel Ubuntu machine to build the server on, I just created another (temporary) droplet for building the server using the same version of Ubuntu that is slightly beefier at 2 shared CPUs and 4 GB RAM. I wanted to use a different machine because I don't like installing compilers and other nonessential tooling on a production server. Obviously Docker would be the perfect solution if you have it available.
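For the Docker route, here is a rough sketch of a build container that mirrors the manual steps that follow. It is my own approximation, not the project's official image; the package list simply repeats the dependencies from the README plus the Debian packaging tools:

FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive

# Build dependencies from the strfry README, plus Debian packaging tools
RUN apt-get update && apt-get install -y \
    git g++ make libssl-dev zlib1g-dev liblmdb-dev \
    libflatbuffers-dev libsecp256k1-dev libzstd-dev \
    dpkg-dev debhelper

RUN git clone https://github.com/hoytech/strfry /strfry && \
    cd /strfry && git submodule update --init

WORKDIR /strfry

# Build the unsigned Debian package; the .deb ends up in the parent directory (/)
RUN dpkg-buildpackage -us -uc

You can then copy the resulting .deb out of a container created from this image with docker cp and discard the rest.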
The build instructions are well-documented in the repository README, but I'm going to use a slightly different and smoother approach that works specifically on Debian. You will still need to install the dependencies listed there, but don't build it:
sudo apt install -y git g++ make libssl-dev zlib1g-dev liblmdb-dev \
    libflatbuffers-dev libsecp256k1-dev libzstd-dev
git clone https://github.com/hoytech/strfry
cd strfry/
git submodule update --init

Now note the debian directory hierarchy, which contains everything needed to build a Debian package.
This can be used to easily build and package the server:
sudo apt install dpkg-dev debhelper
dpkg-buildpackage -us -uc

The built packages (oddly) are in the parent directory. Transfer the strfry_x.y.z-1_amd64.deb to the new cloud instance.
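For example, assuming the instance is reachable as your.server.com (the same placeholder used in the nginx configuration later) and you log in as root:

scp ../strfry_*_amd64.deb root@your.server.com: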
On the production instance, first install the secp256k1 library that the relay needs:

apt install libsecp256k1-dev

Then install the Debian package you just built:
dpkg --install strfry_0.9.6-1_amd64.deb

Edit the /etc/strfry.conf configuration file to taste, especially the info block (a sketch of the relevant settings follows below). You must change the nofiles setting or the relay will not start; I set it to zero. Now, run the service:
systemctl enable strfry.service
systemctl start strfry
systemctl status strfry
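For reference, here is a minimal sketch of the two settings mentioned above, roughly as they appear in the configuration shipped with the package; treat the exact field names as illustrative and follow the comments in your own /etc/strfry.conf:

relay {
    # strfry normally tries to raise its open-file limit and will not start
    # if it cannot; zero tells it to leave the limit alone
    nofiles = 0

    # advertised to clients that query the relay for its information
    info {
        name = "my relay"
        description = "a small personal Nostr relay"
        pubkey = "<administrator public key, hex>"
        contact = "<administrative contact>"
    }
}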
Install nginx, which will act as a TLS-terminating reverse proxy in front of the relay:

apt install nginx

Completely replace the default configuration in /etc/nginx/sites-available/default with:
server {
    server_name your.server.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:7777;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    listen 443 ssl;
    ssl_certificate /etc/ssl/your-server.crt;
    ssl_certificate_key /etc/ssl/your-server.key;
}

Note there are no entries for HTTP port 80. Now, run the service:
systemctl enable nginx.service
systemctl start nginx
systemctl status nginx
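Once nginx is up, a quick end-to-end check is to request the relay information document (NIP-11) through the proxy; this assumes your.server.com resolves to the instance and the certificate is valid:

curl -i -H 'Accept: application/nostr+json' https://your.server.com/

The response should be a small JSON document containing the name and description from the info block in /etc/strfry.conf.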
You can watch the connections coming in on TLS port 443 with:

watch ss -t 'sport = 443'
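To exercise the Nostr protocol itself rather than just the transport, you can open a WebSocket by hand; this assumes the third-party websocat tool on your local machine:

websocat wss://your.server.com

Once connected, paste a minimal subscription request such as:

["REQ","test",{"limit":1}]

A healthy relay answers with any stored events (none yet, on a fresh install) followed by an EOSE message closing out the initial query.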