• 1 Post
  • 5 Comments
Joined 2 years ago
Cake day: July 21st, 2023

  • I’m sure you are far outweighed by users like me who keep one for travelling away from home and working in public spaces (I do two days a week on average). Most days there’s no bandwidth use, and when there is, usage is pretty low since it’s limited to the public WiFi: just syncing files with local changes and general internet use/research. I could use a VPN to my home server, and sometimes do, but I don’t want the risk of not being able to work anywhere if something happens.


  • Sorry, the post didn’t have the formatting I expected and is generally quite unclear now that I’m reading back through it. I was trying to point out a few different things that I’ve had to learn the hard way when things go wrong! You learn the terminology to search for, or have to search for lots of acronyms until you learn them, haha.

    Public IP

    So your server is on a fixed IP address. Do you mean locally, that the machine has a fixed IP within your home LAN (e.g. 192.168.1.10), or is your public IP fixed (this will depend on your ISP)? Most home providers, like mine, use dynamic IPs, so every once in a while my public IP will change and everything goes down because my DNS is pointing at the wrong address. Some providers use CGNAT, which is even worse: your router never gets its own public IP, so connections originating from outside can’t reach you at all.

    If dynamic, you can use a DDNS tool like cloudflared to keep checking your public IP and update your DNS records if it changes. Your services will only go down for however long the polling interval is set to. Note that cloudflared does a few things and this is just one aspect of the tool.
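    As a sketch, a DDNS updater can run as its own small container alongside your other services. I’m using favonia/cloudflare-ddns as an example image here, and the variable names are from memory, so check the image’s own docs:

    ```yaml
    # DDNS updater sketch; image and variable names are illustrative
    services:
      ddns:
        image: favonia/cloudflare-ddns:latest
        environment:
          - CF_API_TOKEN=${CF_API_TOKEN}   # API token with DNS edit permission
          - DOMAINS=cloud.domain.com,www.domain.com
          - PROXIED=true                   # keep the Cloudflare proxy enabled
        restart: unless-stopped
    ```

    The container polls your public IP and rewrites the DNS records whenever it changes.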

    If you have CGNAT, you have to use Cloudflare Tunnels or similar to create a permanent outbound bridge from your server that all external requests can pass through, even though they originate from outside.
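    A minimal compose sketch for the cloudflared connector, assuming you’ve already created the tunnel in the Cloudflare dashboard and have its token:

    ```yaml
    # cloudflared tunnel connector sketch; the token comes from the
    # Cloudflare Zero Trust dashboard when you create the tunnel
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        command: tunnel --no-autoupdate run
        environment:
          - TUNNEL_TOKEN=${TUNNEL_TOKEN}
        restart: unless-stopped
    ```

    Because the container dials out to Cloudflare, it works behind CGNAT with no ports forwarded at all.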

    Docker bridge networks

    Note this is not essential, but it can actually be easier to manage and keep more secure. It was hard to get my head around, but once I did it made everything simpler.

    You can create a bridge network so that the containers you add to it can talk to each other, but other containers can’t. It also means not opening ports in the docker compose, so the host system can’t access those containers directly and you don’t use up ports. A container can be on multiple networks, too.

    For instance, my main Nextcloud server is on the proxy and nextcloud-internal networks. The other containers in that docker compose are only on nextcloud-internal. My proxy manager (Caddy) is on proxy. The various Nextcloud containers can talk to each other on the internal network, and my proxy and the Nextcloud server can talk to each other through the proxy network. My host cannot talk to any of them directly (unless you also expose ports), and Caddy cannot directly talk to my Nextcloud database container. Hope it makes sense; I can share my docker compose files if helpful. After this info, my original message may make more sense.
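    The layout above can be sketched in compose roughly like this (image tags and service names are illustrative, not my actual files):

    ```yaml
    services:
      caddy:
        image: caddy:latest
        networks:
          - proxy
      nextcloud:
        image: nextcloud:latest
        networks:
          - proxy               # reachable by caddy
          - nextcloud-internal  # reachable by the db
      db:
        image: mariadb:latest
        networks:
          - nextcloud-internal  # caddy and the host cannot reach this

    networks:
      proxy:
        external: true  # created once with: docker network create proxy
      nextcloud-internal:
    ```

    Only the services that share a network can see each other, and nothing here publishes a port to the host.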

    You probably expose ports for Jellyfin, so you can access it locally through 192.168.1.10:8080 or whatever it might be.

    Reverse proxy

    This is separate from a tunnel, but tools like Cloudflare Tunnels and Pangolin combine the two.

    A reverse proxy is something you set up to pick up requests for a domain and deliver them to the right server. It turns cloud.domain.com into 192.168.1.10:8000 and, for a website, delivers the HTML, images, PHP output etc. to client browsers. In the self-hosting space it lets you access different services on one domain (like www.domain.com, cloud.domain.com, request.domain.com, as many as you like).

    I have Caddy on docker but previously used Nginx Proxy Manager, and for each public service I set up a reverse proxy to the actual service. For my business website I tell it to send domain.com and www.domain.com requests to my website in a different docker container. For Nextcloud I tell it to send cloud.domain.com requests to my Nextcloud server container on its port (on the proxy network, see above; in Caddy I reverse proxy to nextcloud-server:80, but if exposing ports it could be your internal server IP, like 192.168.1.10:8000 or whatever you are using).
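    In Caddy that ends up as a couple of site blocks, roughly like this (domains and service names are placeholders):

    ```
    # Caddyfile sketch; Caddy resolves container names over the shared
    # proxy network and handles HTTPS certificates automatically
    domain.com, www.domain.com {
        reverse_proxy website:80
    }

    cloud.domain.com {
        reverse_proxy nextcloud-server:80
    }
    ```

    Each block matches a hostname and forwards to the target service’s internal name and port.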

    Tunnel

    A tunnel just connects two servers or clients, gives each end a local IP, and encrypts the traffic between them over the internet.

    I don’t actually have a tunnel for my external services as I use my business VPS. I do have a tunnel between my home server and my VPS to create an encrypted, usable link between those separate internal networks.
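    For reference, a home-server-to-VPS tunnel like that is typically a WireGuard config along these lines (keys, IPs and the endpoint are placeholders, not my real setup):

    ```
    # /etc/wireguard/wg0.conf on the home server (sketch)
    [Interface]
    PrivateKey = <home-server-private-key>
    Address = 10.0.0.2/24          # tunnel-local IP for this end

    [Peer]
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.0.0.0/24       # route tunnel subnet to the VPS
    PersistentKeepalive = 25       # keeps the tunnel open through NAT
    ```

    The home end dials out to the VPS, so it works even behind a dynamic IP or CGNAT.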

    I believe Cloudflare Tunnels and Pangolin work the same way: a user visits your service.domain.com and the service expects them to log in. Once logged in, it forwards the requests to your home server through an encrypted tunnel (so your ISP and others can’t see it, and your users never see your public IP address), and it also reverse proxies the request to the correct service on your server (like Nextcloud). It does both jobs for you. The authentication stage might be optional, I’m not sure.

    These are easier to use, but you’re more tied into one service.

    Cloudflare proxy

    If you use Cloudflare DNS and opt into their proxy, they will hide your home server’s public IP from external users accessing services through your domain. If you look up a domain (e.g. “dig domain.com” in the CLI), you will see a Cloudflare public IP instead of your own. Connection packets go to Cloudflare, who internally forward them to your public IP, so the end client never sees it. It does mean they can see all your header information and unencrypted traffic, and if Cloudflare goes down, nobody can access your services externally through the domain.

    Incidentally, I notice some IPTV services use this to try to hide their public IP, but in reality broadcasters could get the real IP from Cloudflare, especially with a court order.



  • Did you open ports in docker for 80 and 443 for nginx, and a port for Jellyfin? (In docker compose, under the service, add these, indented with spaces, not tabs: `ports:` then `- 443:443`.)

    Do you have ufw or a firewall running? This might be blocking the ports for jellyfin and/or nginx.

    It might be easier to create a bridge network called proxy (`docker network create proxy`), then in docker compose add the following under each service: `networks:` then `- proxy`.

    And at the bottom of the compose file declare the network as external:

    `networks:` / `proxy:` / `external: true` (each on its own line, indented one level deeper than the last)

    Then in your nginx settings redirect to jellyfin:8096 (the service name in docker compose, plus the internal port Jellyfin uses, i.e. the right-hand side of the ports mapping). Are you using plain nginx or Nginx Proxy Manager (might be worth using the latter)?
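    Putting those pieces together, a rough compose sketch (images and ports are the common defaults, adjust to your setup):

    ```yaml
    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
          - "443:443"
        networks:
          - proxy
      jellyfin:
        image: jellyfin/jellyfin:latest
        networks:
          - proxy  # no ports: published, only nginx can reach it at jellyfin:8096

    networks:
      proxy:
        external: true  # created once with: docker network create proxy
    ```

    Only nginx is exposed to the host; Jellyfin stays reachable solely over the shared proxy network.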

    Can you access jellyfin locally on your network (http://internal-ip-of-server:8096/ on a browser)?

    Has your DNS been set up to point to the correct IP your router is on? Are you behind a dynamic IP or CGNAT? If CGNAT, you have to use Cloudflare Tunnels. If dynamic, look into DDNS, e.g. the cloudflared docker image.

    Does your router forward those ports to the correct internal ip of your server? Have you fixed the internal IP of the server machine?

    Don’t share your certificate details, but you can share your docker compose with personal information redacted or replaced.

    It’s probably not a good idea to publish Jellyfin to the internet. Look into Tailscale, a Cloudflare Tunnel with login security, or WireGuard.


  • brewery@feddit.uk (OP) to Selfhosted@lemmy.world · Should I replace NPM? · 21 days ago

    Thanks for this. To be honest it just did not cross my mind! However, I am not sure I want to rely on Cloudflare too much in case they do something in the future like put those things behind paywalls. My domains are through someone else, so I can easily switch nameservers back to them for DNS. It does sound much easier and safer though, so I will have to consider it.