Thank you again for all the work.
> Old posts can be automatically archived (saved to S3) to free up database space
When I first set up the .env.docker file, I saw a mention of this feature. Is there a sample I can look at to set up S3? When I started on 1.2.5, I don’t believe I saw any settings for it.
I suppose my instance is quite small, but this could be interesting to check out anyway.

Thank you for that How-To. I know this isn’t what you originally intended, but I tried to adapt it for DigitalOcean (DO), which is where my VPS is. I was not able to get the CNAME working, and I more or less worked out what was happening without finding a solution. For now the CNAME is not set up and the links lead directly to my S3 bucket.
I’ve laid out my setup at the bottom of this post. I’ll ask there whether it’s okay to just keep this arrangement, or if there are concerns.
Steps
First, for DO, I had to install s3cmd as per this How-To:
https://docs.digitalocean.com/products/spaces/reference/s3cmd/
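Roughly what that looked like on my end (a sketch assuming a Debian/Ubuntu VPS and the tor1 region; the doc above covers other platforms):

```
# Install s3cmd (Debian/Ubuntu; see the DO doc above for other platforms)
sudo apt-get install -y s3cmd

# Interactive setup: paste your Spaces access key and secret when prompted,
# and give the DO Spaces endpoint for your region instead of the AWS default
s3cmd --configure
#   S3 Endpoint: tor1.digitaloceanspaces.com
#   DNS-style bucket+hostname:port template: %(bucket)s.tor1.digitaloceanspaces.com

# Sanity check: this should list your Spaces buckets
s3cmd ls
```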
Second, once s3cmd is set up and configured: DigitalOcean buckets default to private permissions, so I had to create a policy to switch the default over to public. I configured the bucket policy with this How-To:
https://docs.digitalocean.com/products/spaces/how-to/configure-bucket-policies/
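In case it helps anyone, my policy ended up along these lines (a sketch adapted from the DO doc above; the bucket name is the placeholder used throughout this post):

```
# Hypothetical public-read policy, adapted from the DO bucket-policy doc
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::your-super-unique-bucket-name/*"]
    }
  ]
}
EOF

# Apply it to the bucket
s3cmd setpolicy policy.json s3://your-super-unique-bucket-name
```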
What I noticed for DO is that the bucket generates a folder with the same name as the bucket, and PieFed populates that folder. So for example:
```
s3://your-super-unique-bucket-name/your-super-unique-bucket-name/communities
s3://your-super-unique-bucket-name/your-super-unique-bucket-name/posts
```
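You can confirm that layout with a couple of list commands (same placeholder bucket name):

```
# Top level of the bucket: shows a "folder" named after the bucket itself
s3cmd ls s3://your-super-unique-bucket-name/

# Inside that folder are the directories PieFed populates
s3cmd ls s3://your-super-unique-bucket-name/your-super-unique-bucket-name/
```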
Third, I took the Backblaze How-To you kindly provided and changed my pyfedi environment variables (in my case, in Docker):
https://codeberg.org/rimu/pyfedi/src/commit/8d8858fe386e32b4d1406251684ab4dc2bac782e/docs/Using Backblaze B2 with PieFed.md
For DigitalOcean the S3_REGION variable is not needed, since the region is already part of the endpoint; for illustrative purposes the region here is tor1, the Toronto datacentre. Perhaps the only thing that doesn’t work quite right is uploading images.
This is where things stand for now: the bucket is in use and all the images are displayed correctly. My variables:
```
S3_BUCKET = 'your-super-unique-bucket-name'
S3_ENDPOINT = 'https://your-super-unique-bucket-name.tor1.digitaloceanspaces.com/'
S3_REGION = ''
S3_PUBLIC_URL = 'https://your-super-unique-bucket-name.tor1.digitaloceanspaces.com/folder-name'
S3_ACCESS_KEY = 'example004819c3ba9b31b0000000003'
S3_ACCESS_SECRET = 'exampleK004Uei/7Vf90FzWuN3zoGl5npK3zZc'
```
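A quick way to verify that objects really are public (a sketch; the object path is hypothetical, substitute something PieFed has actually uploaded):

```
# Fetch headers for an uploaded object via the public endpoint;
# HTTP 200 means the bucket policy is doing its job.
# "some-image.webp" is hypothetical - pick a real object from s3cmd ls.
curl -I 'https://your-super-unique-bucket-name.tor1.digitaloceanspaces.com/your-super-unique-bucket-name/posts/some-image.webp'
```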
Why I didn’t finish the CNAME setting

For DigitalOcean, if I change the S3_PUBLIC_URL variable to ‘piefed-media.your.domain.here’, DO generates a new folder in the bucket, and PieFed ends up populating that new folder:
```
s3://your-super-unique-bucket-name/piefed-media.your.domain.here/communities
s3://your-super-unique-bucket-name/piefed-media.your.domain.here/posts
```
PieFed still follows along with piefed-media.your.domain.here, and I was able to set up the CNAME as instructed, but I just wasn’t able to get things working as expected.
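For reference, my understanding of what the CNAME approach expects (a sketch based on the Backblaze How-To, with the placeholder hostnames from above):

```
# The How-To has you point a subdomain at the bucket, i.e. a DNS record
# roughly like:
#   piefed-media.your.domain.here  CNAME  your-super-unique-bucket-name.tor1.digitaloceanspaces.com
# Check what the name actually resolves to:
dig +short CNAME piefed-media.your.domain.here
```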
Am I able to just keep things as-is, with links pointed directly at my bucket? Or is there some kind of concern with this setup?