Inspired by this comment; trying to learn what I’m missing.

  • Cloudflare proxy
  • Reverse Proxy
  • Fail2ban
  • Docker containers on their own networks
  • Oderus@lemmy.world · 5 hours ago

    NPM, Nginx

    If I need remote access, I just log into NPM and I have certain URLs created for Plex, Sonarr, Radarr, etc. No issues so far.

  • MangoPenguin@lemmy.blahaj.zone · 5 hours ago (edited)

    Mainly, they aren’t on the internet.

    My router (opnsense) has a wireguard server which is how I access things when out of the house.
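For reference, a minimal WireGuard road-warrior client config looks roughly like this (keys, addresses, and the endpoint below are placeholders, not taken from the comment):

```ini
# /etc/wireguard/wg0.conf -- hypothetical client config
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24   # route only home subnets through the tunnel
PersistentKeepalive = 25                   # keeps the tunnel alive behind NAT
```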

    I do have a Minecraft server for my friends and me, but that VM is on its own network, isolated from everything else.

  • qjkxbmwvz@startrek.website · 22 hours ago (edited)

    Fail2ban config can get fairly involved in my experience. I’m probably not doing it the right way, as I wrote a bunch of web server ban rules: anyone trying to access wp-admin gets banned, for instance (I don’t use WordPress, and if I did, it wouldn’t be accessible from my public-facing reverse proxy).

    I just skimmed my nginx logs and looked for anything funky and put that in a ban rule, basically.
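A sketch of that approach as a custom filter plus jail (file names, regex, and ban times below are illustrative, not the commenter’s actual config):

```ini
# /etc/fail2ban/filter.d/nginx-probes.local (hypothetical)
[Definition]
failregex = ^<HOST> .* "(GET|POST) /(wp-admin|wp-login\.php|\.env|phpmyadmin)

# /etc/fail2ban/jail.d/nginx-probes.local (hypothetical)
[nginx-probes]
enabled  = true
port     = http,https
filter   = nginx-probes
logpath  = /var/log/nginx/access.log
maxretry = 1          # one probe is enough
bantime  = 86400      # 24 hours
```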

  • Chewy@discuss.tchncs.de · 23 hours ago (edited)

    Some I haven’t yet found in this thread:

    • rootless podman
    • container port mapping to localhost (e.g. 127.0.0.1:8080:8080)
    • systemd services with many of its sandboxing features (PrivateTmp, …)
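The systemd point can be sketched as a hardening drop-in; every directive below is a real systemd option, but the unit name is made up:

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf (hypothetical unit)
[Service]
DynamicUser=yes           # run as a transient, unprivileged user
PrivateTmp=yes            # private /tmp and /var/tmp
ProtectSystem=strict      # read-only OS directories
ProtectHome=yes           # hide /home and /root
NoNewPrivileges=yes       # block setuid privilege escalation
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```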
    • ikidd@lemmy.world · 7 hours ago

      I assume #2 is just to keep containers/stacks able to talk to each other without piercing the firewall for ports that aren’t to be exposed to the outside? It wouldn’t prevent anything if one of the containers on that host were compromised, afaik.

      • Chewy@discuss.tchncs.de · 5 hours ago

        It’s mostly to allow the reverse proxy on localhost to connect to the container/service, while blocking all other hosts/IPs.

        This is especially important when using Docker, as it manipulates iptables directly and can circumvent firewalls such as ufw.

        You’re right that it doesn’t increase security in the case of a compromised container. It’s just about outside connections.
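In compose terms, the localhost-only mapping being discussed looks like this (service and image names are made up):

```yaml
services:
  app:
    image: myapp:latest            # hypothetical image
    ports:
      - "127.0.0.1:8080:8080"      # only this host (e.g. a local reverse proxy) can reach it
      # - "8080:8080"              # this form would publish on all interfaces
```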

        • ikidd@lemmy.world · 4 hours ago (edited)

          I was getting more at stacks on a host talking to each other, i.e. you have a Postgres stack with PG and pgAdmin, but want to use it from other stacks or a k8s swarm without exposing the PG port outside the machine. You’re restricting other containers to the allowed ports, and keeping those ports from being reachable off the host.
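That pattern can be sketched in compose with a shared network and no published ports on the database (all names hypothetical):

```yaml
# postgres stack: compose file A
services:
  pg:
    image: postgres:16
    networks: [dbnet]    # no "ports:" entry, so 5432 never leaves Docker's network
networks:
  dbnet:
    name: dbnet

# consumer stack: compose file B would join the same network with
#   networks:
#     dbnet:
#       external: true
# and reach the database at pg:5432 via container DNS.
```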

  • hperrin@lemmy.ca · 1 day ago

    One thing I do is instead of having an open SSH port, I have an OpenVPN server that I’ll connect to, then SSH to the host from within the network. Then, if someone hacks into the network, they still won’t have SSH access.
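One hedged way to implement the “no open SSH port” part, assuming the VPN gives the host an address like 10.8.0.1 (a placeholder): bind sshd to the tunnel address only.

```
# /etc/ssh/sshd_config (excerpt)
ListenAddress 10.8.0.1    # sshd only answers on the VPN interface
```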

    • Chewy@discuss.tchncs.de · 23 hours ago (edited)

      I do the same, but with WireGuard instead of OpenVPN. The performance is much better in my experience, and it drains less battery.

  • irmadlad@lemmy.world · 1 day ago
    • Fail2ban
    • UFW
    • Reverse Proxy
    • IPtraf (monitor)
    • Lynis (Audit)
    • OpenVas (Audit)
    • Nessus (Audit)
    • Non-standard SSH port
    • CrowdSec + Appsec
    • No root logins
    • SSH keys
    • Tailscale
    • RKHunter
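Several of those SSH-related items land in one file; a hedged sshd_config excerpt (the port number is just an example):

```
# /etc/ssh/sshd_config (excerpt)
Port 2222                     # non-standard SSH port
PermitRootLogin no            # no root logins
PasswordAuthentication no     # SSH keys only
PubkeyAuthentication yes
```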
  • MTK@lemmy.world · 2 days ago

    In the context of the comment you referenced:

    Definitely have the server on its own VLAN. It shouldn’t have any access to other devices that are not related to the services and I would also add some sort of security software.

    If you have a public service that you expect multiple users on, you should definitely have some level of monitoring, whether that’s just the application logs from the forum you want to host, or some sort of EDR on the server.

    Things I would do if I was hosting a public forum:

    • Reverse proxy
    • fail2ban
    • dedicated server that does not have any personal data or other services that are sensitive
    • complete network isolation with VLAN
    • send application logs to ELK
    • ClamAV

    And if the user base grows I would also add:

    • EDR such as Velociraptor
    • an external firewall / ips
    • possibly move from docker to VM for further isolation (not likely)
  • jimmy90@lemmy.world · 2 days ago

    Use a cheap VLAN switch to make an actual VLAN DMZ with the services’ router.

    Use non-root containers everywhere, and segment services into different containers.
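A hedged sketch of the non-root part (image names hypothetical):

```shell
# run the container as an unprivileged UID/GID with no extra capabilities
docker run --user 1000:1000 --cap-drop ALL myapp:latest

# or rootless podman, where even container "root" maps to your own user
podman run --rm --user 1000:1000 myapp:latest
```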

  • MTK@lemmy.world · 2 days ago

    Just tailscale really.

    My services are only exposed to the Tailscale network, so I don’t have to worry about other devices on my LAN.

    A good VPN with MFA is all you really need if you are the only user.

  • Ananace@lemmy.ananace.dev · 2 days ago

    Default block for incoming traffic is always a good starting point.
    I’m personally using CrowdSec with good results, but I still need to add more to it, as I keep seeing failed attacks that should be blocked much quicker.
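One way to get that default-block baseline, using ufw as an example (the WireGuard port is just an illustration of a deliberate exception):

```shell
ufw default deny incoming    # drop unsolicited inbound traffic
ufw default allow outgoing
ufw allow 51820/udp          # e.g. WireGuard
ufw enable
```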

  • kratoz29@lemm.ee · 2 days ago

    I expose some stuff through IPv6 only with my Synology NAS (I’m behind CGNAT), and I’ve always wondered if I still need to use fail2ban in that environment…

    My Synology has an auto-block feature that, from my understanding, is essentially fail2ban; what I don’t know is whether that feature covers all my exposed services or only Synology’s own.

  • gamer@lemm.ee · 2 days ago

    My new strategy is to block EVERY port except WireGuard. This doesn’t work for things you want to host publicly, ofc, like a website, but for most self-hosted stuff I don’t see anything better than that.

    • irmadlad@lemmy.world · 1 day ago

      My new strategy is to block EVERY port

      Wow! All 65535 +/-, in and out? That’s one way to skin a cat.

      • gamer@lemm.ee · 1 day ago

        ez pz:

        #!/usr/sbin/nft -f
        table inet filter {
            chain input {
                type filter hook input priority raw; policy accept;
                iif "lo" accept                          # always allow loopback
                ct state established,related accept      # replies to outbound traffic
                iif "enp1s0" udp dport 51820 accept      # WireGuard only
                iif "enp1s0" drop                        # everything else from the WAN NIC
            }

            chain forward {
                type filter hook forward priority raw; policy accept;
                iif "lo" accept
                ct state established,related accept
                iif "enp1s0" udp dport 51820 accept
                iif "enp1s0" drop
            }

            chain output {
                type filter hook output priority raw; policy accept;
            }
        }
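For anyone copying this, the ruleset can be checked and loaded like so (the path is just wherever you saved the file):

```shell
nft -c -f /etc/nftables.conf    # -c: parse/check only, nothing is applied
nft -f /etc/nftables.conf       # apply the ruleset
```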
        
  • catloaf@lemm.ee · 2 days ago

    I don’t put it on the Internet.

    I have automatic updates enabled and once in a while I scan with Nessus. Also I have backups. Stuff dying or me breaking it is a much greater risk than getting hacked.

    • woodsb02@lemmy.ml · 2 days ago

      I agree - I don’t expose anything to the internet other than the WireGuard endpoint.

      I’m only hosting services that my immediate family need to access, so I just set up WireGuard on their devices, and only expose the services on the LAN.

      I used to expose services to the internet, until one of my #saltstack clients was exploited through a very recent vulnerability I hadn’t yet patched (it was only a week or so since it was announced). I was fortunate that the exploit failed because the server ran FreeBSD: the crontab entry to download the next malicious payload failed because wget wasn’t available on the server.

      That’s when I realised: minimise the attack surface. If you’re not hosting services for everyone in the world to access, don’t expose them for everyone in the world to exploit.

      • Captain Janeway@lemmy.world · 2 days ago

        TBF if you want, you can have a bastion server which is solely whitelisted by IP to stream your content from your local server. It’s obviously a pivot point for hackers, but it’s the level of effort that 99% of hackers would ignore unless they really wanted to target you. And if you’re that high value of a target, you probably shouldn’t be opening any ports on your network, which brings us back to your original solution.

        I, too, don’t expose things to the public because I cannot afford the more safe/obfuscated solutions. But I do think there are reasonable measures that can be taken to expose your content to a wider audience if you wanted.