🇨🇦

  • 4 Posts
  • 10 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I will always recommend Borg backup just because of its compression and de-duplication algorithms:

    550 GB of raw data, 20 historical backups going back over a year (10.98 TB of data total), only 400 GB of disk space used to store them all…

    You can back up directly to remote servers via SSH or NFS, or directly between two Borg instances, optionally encrypted in transit and at rest.

    Borg is normally a CLI tool, but there are a number of GUI frontends you can use if you really want: Vorta, BorgWeb, and BorgWarehouse, for example. (I’ve not used any of these; they’re just examples from a Google search.)
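    As a rough sketch of what backing up to a remote server over SSH looks like (the user, host, repo path, and source directories here are all placeholders):

    ```shell
    # One-time: initialize an encrypted repository on a remote host over SSH.
    # "backup@example.com" and the repo path are placeholders.
    borg init --encryption=repokey-blake2 ssh://backup@example.com/./backups/myhost

    # Nightly run: compress with zstd and de-duplicate against prior archives.
    borg create --compression zstd,3 --stats \
        ssh://backup@example.com/./backups/myhost::'{hostname}-{now:%Y-%m-%d}' \
        /etc /home /var/lib/docker/volumes
    ```

    Encryption happens client-side, so the remote end only ever sees ciphertext.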


  • This has actually been one of the biggest reasons I’ve been hesitant too. Looking at that list, my bank (a regional credit union) isn’t on it, nor is my credit card provider, which also has an app for management.

    On top of that, there’s a provincial ID app that’s recently rolled out. It’s become somewhat important for reaching certain government services, and it can only really be transferred from one working device to another unless you go through the whole process of having it issued again.

    I have no idea if that will work on GrapheneOS, and I don’t have a second device to transfer it to while I wipe this one and put a new ROM on it.

    I do want to try GrapheneOS, but I think I’m going to wait for my next device and start from scratch with that.

    That does leave me with a question, though. If you do install GrapheneOS or another OS/ROM and it’s not working out for you, how hard is it to get back to factory, or at least back to a ‘standard’ Android install?






  • 95% of things I just don’t expose to the net, so I don’t worry about them.

    Most of what I do expose doesn’t really have access to any sensitive info; at most an attacker could delete some replaceable media. Big whoop.

    The only thing I expose that has the potential for massive damage is OpenVPN, and there’s enough of a community and money invested in that protocol/project that I trust issues will be found and fixed promptly.

    Overall I have very little available to attack, and a pretty low public presence. I don’t really host any services for public use, so there’s very little reason to even find my domain/ip, let alone attack it.


  • Looking at OpenSpeedTest’s GitHub page, this immediately sticks out to me:

    Warning! If you run it behind a Reverse Proxy, you should increase the post-body content length to 35 megabytes.

    Follow our NGINX config

    /edit;

    Decided to spin up this container and play with it a bit myself.

    I just used my standard nginx proxy config, which enables websockets and HTTPS, but I didn’t explicitly set the client_max_body_size like their example does. I don’t really notice a difference in speed switching between the proxy and a direct connection.

    So, that may be a bit of a red herring.
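    One quick way to check whether the proxy’s body limit is actually biting (the hostname and the /upload path are assumptions; substitute whatever endpoint the test page actually POSTs to):

    ```shell
    # Build a 35 MB blob, matching the size the warning mentions.
    dd if=/dev/zero of=/tmp/blob bs=1M count=35

    # POST it through the proxy. An HTTP 413 response means nginx's
    # client_max_body_size is rejecting the upload before it reaches the app.
    curl -s -o /dev/null -w '%{http_code}\n' \
        --data-binary @/tmp/blob https://speedtest.example.com/upload
    ```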


  • This part always confuses me, so I won’t be able to give specifics, just a general direction. Most guides explain how to route traffic from a VPN client to the LAN of the VPN host. You need to route traffic from the VPN host/LAN to a client of the VPN.

    You need to change the routing table on the VPS, adding a static route that sends traffic destined for your VPN’s subnet to the VPN host instead of out the default gateway.

    How exactly to do that I’ll have to leave to someone else unfortunately. Network config confuses the hell out of me.
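    A minimal sketch of that static route, assuming a VPN subnet of 10.8.0.0/24 and a VPN host at 192.168.1.10 (both addresses are placeholders for whatever your setup uses):

    ```shell
    # On the VPS: send traffic destined for the VPN subnet to the VPN host
    # rather than out the default gateway.
    ip route add 10.8.0.0/24 via 192.168.1.10

    # Verify which route a given VPN-client address would take.
    ip route get 10.8.0.5
    ```

    Note that `ip route add` doesn’t persist across reboots; to make it permanent, put the equivalent route in your distro’s network config (netplan, systemd-networkd, /etc/network/interfaces, etc.).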


  • I run Borg nightly, backing up the majority of the data on my boot disk, including docker volumes and config, plus a few extra folders.

    Each individual archive is around 550 GB, but because of the de-duplication and compression it’s only ~800 MB of new data each day, taking around 3 minutes to complete the backup.

    Borg’s de-duplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98 TB of data raw. After de-duplication and compression they only take up 407.98 GB on disk.

    With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
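    That retention policy maps fairly directly onto borg prune’s keep flags; a sketch, with the repo path as a placeholder and the yearly count set generously to stand in for “one per year beyond that”:

    ```shell
    # Keep 7 daily, 3 weekly, 11 monthly, and (effectively) one archive per
    # year; everything else becomes a candidate for deletion.
    borg prune --list \
        --keep-daily 7 --keep-weekly 3 --keep-monthly 11 --keep-yearly 25 \
        /path/to/repo

    # Reclaim the space freed by pruned archives (borg 1.2+).
    borg compact /path/to/repo
    ```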