

I don’t think I’ve been banned, but I did a similar thing. I requested all my data from Reddit, then used that list of comment/post IDs to mass-edit them. I think I’m in the clear because I used the official public API with a registered “app.” If you used the private API or automated this via the browser, that may be why you were banned.

Anyway, if you or someone else wants their full history, Reddit will give it to you via a data export request.

Forgejo is Gitea. It started as a soft fork of Gitea and has more recently become a hard fork.

You can read about why they hard forked, and decide for yourself if it’s worth switching, but the consensus is that Forgejo is in better hands than Gitea.

Currently it’s easy to migrate from Gitea to Forgejo, but the longer you wait and the more it diverges from Gitea, the harder it will become to migrate.

If you like the Forgejo direction and think it’s in better hands than Gitea, you might want to consider migrating sooner rather than later. All of your data should remain intact as it’s essentially a drop-in replacement. This should only take you a few minutes if you’re using the Docker version of Gitea.
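For the Docker case, the migration is often just a matter of pointing your Compose file at the Forgejo image (a rough sketch, assuming a typical Gitea Compose setup; the service name, tag, and volume paths here are illustrative and may differ from yours):

```yaml
services:
  server:
    # was: image: gitea/gitea:1.21
    # Forgejo publishes its images on Codeberg's registry:
    image: codeberg.org/forgejo/forgejo:1.21
    restart: always
    volumes:
      # Forgejo reads the existing Gitea data directory as-is
      - ./data:/data
    ports:
      - "3000:3000"
      - "2222:22"
```

Back up the data volume before switching images, just in case.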

Cloudflare Tunnels are black magic and exactly what you’re looking for:


Free, no need to self host a server somewhere externally. Can even be used for SSH!
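For the SSH case, the usual pattern (a sketch from memory; the hostname and tunnel name are placeholders) is to run cloudflared on the server and use it as a ProxyCommand on the client:

```shell
# On the server: create a tunnel and point a hostname at local SSH
cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel ssh.example.com
cloudflared tunnel run --url ssh://localhost:22 my-tunnel

# On the client, in ~/.ssh/config:
#   Host ssh.example.com
#     ProxyCommand cloudflared access ssh --hostname %h
```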

I’m scratching my head to think of what Vultr could do better in this case.

There was substantial room for improvement in the way they spoke publicly about this issue. See my comment above.

I still don’t like how flippant they’ve been in every public communication. I read the ToS. It’s short for a ToS, everyone should read it. They claim it was taken “out of context,” but there wasn’t much context to take it out of. The ToS didn’t make this distinction they’re claiming, there was no separation of Vultr forum data from cloud service data. It was just a bad, poorly written ToS, plain and simple.

They haven’t taken an ounce of responsibility for that, and have instead placed the blame on “a Reddit post” (when this was being discussed in way more detail on other tech forums, Vultr even chimed in on LowEndTalk).

As for this:

Section 12.1(a) of our ToS, which was added in 2021, ends with “for purposes of providing the Services to you.” This is intended to make it clear that any rights referenced are solely for the purposes of providing the Services to you.

This means nothing. A simple “we are enhancing your user experience by mining your data and giving you a better quality service” would have covered them on this.

We only got an explanation behind the ToS ransom dialog after their CMO whined in a CRN article. That information should have been right in the dialog on the website.

In both places, they’ve been actively vague in ways that invite confusion, and then act offended when people interpret things incorrectly.

I’m not talking about that. I’m talking about this:

I agree with the sentiment here, but all the technologies mentioned allowed us to ship a working application in a timely manner. I think that should always be the first goal. Now that this is out of the way, we can start looking at improving efficiency, security, resilience etc.

“Security Second” is not good messaging for a project like this.

But I’m glad my comment was hilarious to you.

I don’t need or want replication of my private projects to a peer-to-peer network. That’s just extra bandwidth to and from my server, and bandwidth can be expensive. I already replicate my code to two different places I control, and that’s enough for me.

I’m not sure who Radicle is for, but I don’t think the casual hobbyist looking to self host something like Forgejo would benefit at all from Radicle.

Loading the source code for Radicle on Radicle also seems fairly slow. It seems this distributed nature comes with a speed tradeoff.

With the whole Yuzu thing going on, I can see some benefit to Radicle for high profile projects that may be subject to a takedown. In that respect, it’s a bit like “Tor for Git.”

I suspect that over time, pirate projects and other blatantly illegal activities will make use of Radicle for anti-takedown reasons. But to me, these two projects solve two different problems, for two different audiences, and are not really comparable.

Edit: There is already enough controversy surrounding Radicle, that, if I were someone looking to host a takedown-resistant, anonymous code repository, I would probably be better served hosting an anonymous Forgejo instance on a set of anonymous Njalla domains and VPSes. The blockchain aspect was already a bit odd, and what I’m now seeing from Radicle does not exactly inspire confidence. I don’t think I’ll ever use this.

Also, if you get the permission of someone in leadership to clone their voice, one angle could be to voice clone someone on ElevenLabs and make the voice say something particularly problematic, just to stress how easily voice data can be misused.

If this AI vendor is ever breached, all they have to do is robocall patients pretending to be a real doctor they know. I don’t think I need to spell out how poorly that would go.

Even if this gets implemented, I can’t imagine it will last very long with something as completely ridiculous as removing the keyboard. One AI API outage and the entire office completely shuts down. Someone’s head will roll when that inevitably happens.

The API that FDroid is using has only just come out.

Not true. Android has supported rootless unattended upgrades at a system level since Android 12 (October 4, 2021). That was nearly two and a half years ago, so it’s been a while.

This is what Neo Store used. F-Droid only just now got around to supporting this with this recent update.

You are giving it the -d flag. -d means “detached.” There are logs, you are just preventing yourself from seeing them.

Replace the -d with an -i (for interactive) and try again.
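For example (the container and image names are placeholders for whatever you were running):

```shell
# Before: output hidden because the container is detached
# podman run -d --name myapp -p 8080:80 myimage

# After: run attached so output streams to your terminal
podman run -i --name myapp -p 8080:80 myimage

# Or keep -d and read the logs afterwards:
podman logs -f myapp
```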

Have you completed the podman rootless setup in order to be able to use it? You may need to edit /etc/subuid and /etc/subgid to get containers to run:


More than likely, this has something to do with podman being unprivileged while the container wants to bind to port 80 (a privileged port). You may need to specify a --userns flag to podman.
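A quick sketch of what that setup and workaround can look like (the ID range and ports are typical examples, not required values):

```shell
# Check that subordinate ID ranges exist for your user
grep "$USER" /etc/subuid /etc/subgid

# If they're missing, add ranges like these, then refresh podman:
#   /etc/subuid:  youruser:100000:65536
#   /etc/subgid:  youruser:100000:65536
podman system migrate

# Publishing to a high host port sidesteps the privileged-port issue
podman run -i -p 8080:80 myimage
```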

Running in interactive mode will give you the logs you want and will hopefully point you in the right direction.

Their cheapest $5 Linux VPS isn’t any better than one from a reputable host like Vultr. This isn’t really a deal; just use a trusted host instead.

I plan to support this for as long as I’m using Lemmy, which should be a good while.

All the script really does is generate a docker-compose.yml stack that’s best for your desired setup. So even if I do stop supporting the script, you’re not locked into using it. You can always manage the Docker Compose stack manually and do your own updates, which is what people not using my script will have to do anyway.

Also, I don’t bake Lemmy versions directly into this script, I just pull the latest Lemmy version from GitHub and deploy that. So in theory, unless the Lemmy team changes something major, this should continue working for a long time after I stop supporting it.

If you want to be prepared, I would recommend reading up on Docker Compose and getting familiar with it!

Shameless self plug:


All you need is a server, a domain, and your DNS records set to your server’s IP address. After that, my script takes care of the rest!
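A quick way to check the DNS part before running it (placeholder domain; the output should match your server’s IP):

```shell
dig +short yourdomain.com A
```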

Please let me know if you have any issues! I am constantly keeping this updated based on people’s feedback!

Try again with the latest version of Lemmy Easy Deploy.

I am now building multiarch images for 0.18.x, and my script will now default to my multiarch images, so there is no longer a need to build it yourself :)


I’ve got that included in a staging version I’m preparing to release, if you wanted to take a look at the changes:


I see. Thanks a lot for this!

I really don’t have the capacity to support a bunch of different email services, so it sounds like the best I can do right now is make the SMTP settings accessible without also running the postfix server. So if someone wants to run their own email somewhere else, they can configure it. But otherwise, I’ll leave it to the user to figure out what happens after an email request leaves Lemmy.

Does that sound fair, and like something you would have used? Essentially just an interface in config.env that puts the right SMTP address/credentials in lemmy.hjson.
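For reference, the kind of block this would end up writing to lemmy.hjson looks roughly like this (field names are from Lemmy’s example config; all values are placeholders):

```
email: {
  smtp_server: "smtp.example.com:587"
  smtp_login: "lemmy@example.com"
  smtp_password: "changeme"
  smtp_from_address: "Lemmy <noreply@example.com>"
  tls_type: "starttls"
}
```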

Before this week, I would have told you no. But I have big plans for the 0.18.1 update.

The Lemmy team has completely broken ARM support with seemingly no plan to support it again. They switched to a base Docker image that only supports x86_64. This is why your build fails. I still don’t understand why they would move from a multiarch image to an x86_64-only one.

I’ve been working on this for about a week, and just yesterday I finished a GitHub Actions pipeline that builds multiarch images for x64/arm/arm64. I currently have successful builds for 0.18.1-rc.2. In a future update to my script, I will have it use these, so ARM users won’t need to compile anything anymore. I just ask for a little patience; I haven’t been able to do any work on Lemmy Easy Deploy since I’ve been working on this pipeline :)

I also want to add a caveat: don’t get your hopes up until you see it running for yourself. Ultimately, I am just a DevOps guy, not a Lemmy maintainer. I haven’t tested my ARM images yet, and while I did my best to get these to build properly, I can’t fix everything. If anything else breaks due to running on ARM, it will be up to the Lemmy team to fix those issues (which is not likely anytime soon, if their updated x86_64 Dockerfiles are any indication).

But, fingers crossed everything goes smoothly! Keep an eye out for an update, I’m working hard on it, hopefully I can get it out in time for 0.18.1!


Putting my notes on this progress here:


I haven’t actually used the embedded postfix server at all, I keep mine disabled. I only include it because it’s “included” in the official Docker deployment files, and I try to keep this deployment as close to that as possible.

I’m considering adding support for an external email service, as you mentioned, but I have nearly zero experience in using managed email services, and I’m not sure if non-technical users would be able to navigate the configuration of things I can’t do for them (e.g. on a web dashboard somewhere). And if I can’t do it for them, it means more issues for me, so I hesitate to add support for it at all.

I’d love to hear your experience in setting up sendgrid and how easy that was. And the tracking stuff you mentioned as well.

I didn’t put my actual inquiry in the comment since it would have made it too long. But I wasn’t asking them about moving to Squarespace, I was very clear that I am burning a bridge with both of them and have no interest in being a customer of either of them. I told them I’ve already moved my domains out of Google Domains, and I wanted to clarify if any historical data about me and my domains (domain ownership history, purchase history, receipts, etc) would go to Squarespace. And they replied with what I put in my comment.

If I consider their reply to me, and the stuff I’m reading in the link OP posted, this isn’t really a “transition”; Squarespace is just buying the rights to all 10M+ domains Google Domains owns. But if Google Domains doesn’t own a domain anymore, it won’t be part of that transaction.

That’s what I gathered, anyway. Hopefully they can be less ambiguous before the transaction actually happens. It will probably take the better part of a year, so there is plenty of time.

The article covers this a little bit, but I thought I’d share my email response from Google when I asked them “how can I prevent Squarespace from receiving any of my data?” They responded with:

Based on the summary you have shared, I understand that you need help with your general inquiry about the Google Domains transition to Squarespace. To answer this, if you will be transferring your domains out of Google, all of the data will also be removed. This means that once the transition between Squarespace and Google happens, your data will also be removed.

I responded to this and basically said that the wording was ambiguous: will my data be removed before or after the transition? They replied:

I’m sorry for the confusion. To be clear, Squarespace will not receive any of your Google Domains data. Only the active domain names, excluding the domain names that have been deleted or transferred out, will be affected by the data shift to Squarespace.

So if I trust their word, it means, if I’ve already transferred out my domains (which I have), Squarespace shouldn’t receive any of my customer information, or even have a record of who I am. Hopefully that’s true.

I’m glad to hear that! Thanks for letting me know, it’s nice to hear people were able to use my script to get up and running :)

Will you only be supporting yourself and maybe a small subset of users? If you don’t need your instance to scale, you can (shameless self plug) try my deployment script to get yourself running.

It just uses the recommended Postgres configuration as seen in the deployment files in Lemmy’s official repo. It would just be in a Docker volume on disk, so if you had thoughts of scaling in the future, and wanted to use a managed Postgres service, I would not recommend using my script.

I run an instance just for myself; CPU requirements are so low that pretty much anything you can get in the cloud will be good. Disk space is a much more important factor. In terms of just Lemmy-created data, my personal 10-day instance has stored about 6.2GB of data. 2.4GB of this is just thumbnails. Note that this does not include other things that consume resources, such as my Docker images or my Docker build cache, which I clear manually.

So, that is roughly 620MB of new data generated per day. Your experience will vary depending on how many communities you subscribe to, but that’s a good rough estimate. Round it up to 700MB to have a safer estimate. But remember, this is with Lemmy’s current rate of activity. If the amount of posts and comments doubles or triples in the future, my storage requirements will likely go up considerably.

I am genuinely not sure what long-term Lemmy maintenance looks like in terms of reclaiming disk space. I can clear my thumbnail data and be fine, but I wonder what’s going to happen with the postgres database. Is there some way to prune old data out of it to save space? Will my cloud storage costs become so unreasonable in a year that I’ll have to stop hosting Lemmy? These are the questions I don’t have answers to yet.

If there is something clever you can do to plan ahead and save yourself disk space costs in the future (like, are managed Postgres services cheaper to host than on-disk ones?), I’d recommend doing that.

Yes, you will want to copy the entirety of the Lemmy-Easy-Deploy folder recursively, including the live folder.

However, all important data is also stored in Docker volumes on the system. There isn’t a great way to migrate Docker volumes between systems, but there are a few options. One I have not personally used, but which looks promising, is vackup:


You’ll want to run docker volume ls on your current system, and make sure that when you migrate them to the new system, all the volume names are exactly the same. Then, if you run deploy.sh -f, it should pick everything up and deploy.
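To double-check the names on both ends (the volume name in the create example is illustrative):

```shell
# On the old system: list the volume names you need to replicate
docker volume ls --format '{{.Name}}'

# On the new system: pre-create each volume with the exact same name
docker volume create lemmy-easy-deploy_postgres
```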

Do note: if Docker Compose itself does not create the volume with the right labels, it will still work, but it will print some warnings to the console. Here is an issue discussing it and some potential hacks you can use to add the right labels:


Finally, if you need to re-create a volume on the new system with labels like the above issue mentions, you can try migrating data between named volumes on the same system using this helpful oneliner (don’t forget to change the volume names in all the places in this command):


In short, it’s a bit hacky, but it can be done.

Good luck!

Sorry, I don’t have access to an unRaid system to test it with.

However, I know most NAS systems at least support CLI-style Docker and Docker Compose, so if you can manage to get Docker running, it might work? The script has some Docker detection if you’re not sure.

However, I know Synology hogs ports 80 and 443. I’m not sure if unRaid is the same way. If it is, this might not be the best solution for you. But, if you want to give it a shot, I do have some advanced options in my config that let you change to different ports and turn off HTTPS (so you can run a proxy in front of it). I can’t really help people who run it behind a webserver like this, but the template files in my repo can be freely modified, so you’re welcome to hack at my script any way you like to get it working!

“Private instance” and “disable registration” are not the same thing. There are separate options for both. It is possible to run a federated single-user instance with registrations disabled. That’s how I run mine.

And that is why I don’t advertise this as supporting email out of the box, and why it’s an advanced option without any support from me. The embedded postfix server is part of the official Docker Compose deployment from upstream Lemmy, and it’s part of the officially supported Ansible deployment too. Those deployment methods are what this is modeled after. That is as far as I go on email support. If upstream Lemmy started including some automatic AWS SNS configuration, I would adopt it, but they have not done so.

Everyone who has reported success to me so far is running a single-user instance for themselves. That is my target audience, and for that audience (and myself), email is not even close to being a hard requirement.

However, if you would like to improve this script by adding support for more robust and secure email systems, I would be happy if you submitted a PR to do just that :)

Unfortunately, Lemmy Easy Deploy isn’t well suited for running behind a reverse proxy. It is a complete “do everything for me” solution, and I don’t have a good way to support people already running a webserver. I pushed an update a few minutes ago, so you can try playing with the ports and maybe turning off Caddy’s TLS (so that certificates are managed by your webserver instead of the one in LED), but I’m sorry to say you’re on your own in that case :(

Lemmy can basically run on a potato. Any VPS will do, but the main metric you’ll want to keep track of is disk space. Any $5/month instance will be fine.

I am a moderate-to-heavy user of Lemmy, and I go through about 700MB of new data per day. If you federate with fewer communities than me, this may be less for you. At my current rate of storage, I can go for about a month and a half before I have to worry about storage space.

After that, I’m thinking about clearing my thumbnail cache, and seeing if Lemmy has some way to prune old data. I haven’t been using Lemmy long enough to know what to do to clean things up, but if I figure out something clever in a month or two, I’ll share what I learn.

If you are bypassing my Caddy service, you will need to expose lemmy-ui as well. Look at my Caddyfile to see how things are supposed to be routed. Don’t forget the @-prefixed handles. Those are important.
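Roughly, the routing split looks like this (a simplified sketch, not my exact Caddyfile; the matcher names and path list are illustrative, and the ports are Lemmy’s defaults):

```
yourdomain.com {
    @api {
        path /api/* /pictrs/* /feeds/* /nodeinfo/* /.well-known/*
    }
    @activitypub {
        header Accept application/activity+json
    }
    handle @api {
        reverse_proxy lemmy:8536
    }
    handle @activitypub {
        reverse_proxy lemmy:8536
    }
    # Everything else goes to the frontend
    handle {
        reverse_proxy lemmy-ui:1234
    }
}
```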

Unfortunately, if you have a specific use case involving a webserver, Lemmy Easy Deploy may not be for you. However, you can also take a look at Lemmy’s own Docker files:


Sorry, combining this with an already-running webserver is not a use case I support for this easy deployment script. My script is intended for new deployments for people not already running servers.

The best thing you can do is change the ports in docker-compose.yml.template, and today I will make an update that gives you environment variables for them.

Unfortunately I do not have time to help you dig deeper into the issue, but hopefully these tips help you:

  • Change the ports in docker-compose.yml.template to something that won’t conflict with your webserver. Take note of what port you used for 80
  • Edit config.env and set CADDY_DISABLE_TLS to true
  • Edit your own webserver config to point to this deployment via a reverse proxy. I’ll leave it up to you to configure that. You are already using Caddy, so you can look at my Caddyfile for inspiration on how reverse proxies work.

Since you’re using your own webserver, doing it this way will not automatically retrieve certificates for you. Hopefully you have a system in place for that already.
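If your existing webserver happens to be Caddy, the stanza can be as simple as this (a sketch; 8080 assumes that’s the port you remapped 80 to):

```
yourdomain.com {
    reverse_proxy localhost:8080
}
```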

Good luck!

Sorry, I don’t use Matrix. Please describe your issue here and I will try my best to assist.

Good catch!

I will make a note in the example file about this. You can use special characters if you want, but you’ll need to backslash escape them first. So in config.env, you probably could have done:

SETUP_SITE_NAME="Lemmy \| Server"

I’m not sure what you mean? Most people are just self hosting instances for themselves, where email isn’t needed. My instance doesn’t have an email service.

And as I explained, if email is something you want, I have an advanced option for this. It’s not the default because there is not a public VPS host out there that lets you use port 25 without special approval.

You can try changing the ports in docker-compose.yml.template. I just use Caddy in this because its HTTPS convenience is hard to beat!

I’ll add some better instructions for this to the readme.

You can do any Docker compose commands by changing to the ./live folder, then running:

docker compose -p lemmy-easy-deploy <command>

`<command>` can be whatever Docker Compose supports: `up`, `down`, `ps`, etc.

I don’t have config options for the ports, but you can just change them in docker-compose.yml.template to change what they’re listening on. As long as yourdomain.com:80 is reachable from the public, it shouldn’t matter what routing shenanigans are going on behind it.
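Concretely, the only thing to touch in docker-compose.yml.template is the host side of the port mappings (the service name here is illustrative):

```yaml
services:
  proxy:
    ports:
      - "8080:80"    # host port 8080 -> container port 80
      - "8443:443"   # host port 8443 -> container port 443
```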

I haven’t tested a local only use case, but you can probably set these options in config.env

  • Set LEMMY_HOSTNAME to localhost
  • Set CADDY_DISABLE_TLS to true
  • Set TLS_ENABLED to false

This will disable any HTTPS certificate generation and only run Lemmy on port 80. I don’t know if Caddy or Lemmy will act weird if the hostname is localhost, but this should work for you. Let me know if it doesn’t.
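Putting that together, the relevant config.env lines would be:

```
LEMMY_HOSTNAME=localhost
CADDY_DISABLE_TLS=true
TLS_ENABLED=false
```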

Yep, I see an issue on my repo related to ARM now. It looks like the Lemmy team doesn’t push multiarch images to the same tag, and they also don’t push ARM images consistently (last ARM image was for 0.17.3).

I’ll add a check that forces compiling from source if the user is on ARM. This will greatly increase deploy time, but since an ARM image for 0.17.4 isn’t available, it’s the best I can do for the time being.

The Lemmy maintainers themselves seem to lock it at 0.3.1, and I wanted to maintain parity with their deployment. I know pictrs is up to at least 0.3.3, and has a release candidate for 0.4, but upstream Lemmy uses 0.3.1 for whatever reason, so that’s why I lock it there.

It’s excluded from the update checker because I don’t have a stable way to check what version upstream is using. The Lemmy update checker just checks to see what the latest tag on LemmyNet/lemmy is. I could try and pull the latest Gitea tag for pictrs, but since upstream Lemmy isn’t using the latest version, that’s not really an option as something might break.

I considered trying to parse their docker-compose.yml file to see what version they use, but they seem to be restructuring their docker folder right now. The folder in main is completely different from the one tagged 0.17.4. If I assume a certain directory path for that file for every version after this, but they move it, my script will break. Sadly, until their Docker deployment files go unchanged for a good few versions, I’ll have to update this manually.

In the past few days, I've seen a number of people having trouble getting Lemmy set up on their own servers. That motivated me to create `Lemmy-Easy-Deploy`, a dead-simple solution to deploying Lemmy using Docker Compose under the hood.

To accommodate people new to Docker or self hosting, I've made it as simple as I possibly could. Edit the config file to specify your domain, then run the script. *That's it!* No manual configuration is needed. Your self hosted Lemmy instance will be up and running in **about a minute or less.**

Everything is taken care of for you. Random passwords are created for Lemmy's microservices, and HTTPS is handled automatically by Caddy. Updates are automatic too! Run the script again to detect and deploy updates to Lemmy automatically.

If you are an advanced user, plenty of config options are available. You can set this to compile Lemmy from source if you want, which is useful for trying out Release Candidate versions. You can also specify a Cloudflare API token, and if you do, HTTPS certificates will use the DNS challenge instead. This is helpful for Cloudflare proxy users, who can sometimes have issues with HTTPS certificates.

Try it out and let me know what you think!

https://github.com/ubergeek77/Lemmy-Easy-Deploy
