Joplin Server support for multiple domains

I have Joplin Server 2.6.10 running on a Synology Diskstation via this Docker.

Using APP_BASE_URL, I can successfully browse, log in, sync with my devices, etc.

However, my ISP doesn't allow my external domain to be reached while I am connected to my internal network. So internally I can get to my Joplin server with an internal-only domain, but I obviously need to use a different, external domain when outside my network.

Does Joplin server allow for such a setup? I'd prefer to not have the server be so picky about pinning itself to a particular domain if possible.


Actually, I quite like this idea... it would indeed be quite nice if the server were reachable via its local IP. That would make it possible to speed up syncing at home a lot, at least for self-hosted instances.

Regards

I don't quite understand what you mean by that. I think this is rather a problem with your router. Anyway, you can work around this by using something like Pi-hole. In that case, just create a CNAME entry for your.external.domain.com that points to synology.local.

You can use local DNS to achieve this.

Say your external domain name is joplin.example.com which points to your external address of 101.102.103.104. Your router then forwards the traffic to your self hosted server which is 192.168.0.100 on your local network.

If you add the entry 192.168.0.100 joplin.example.com to the hosts file on your computer, the traffic will be routed internally instead. If you have certificates such as Let's Encrypt, they will still work as they are tied to the domain and not the IP address.
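For example, that override is a single line in the hosts file (the names and addresses are just the placeholders from this example; on Windows the file is C:\Windows\System32\drivers\etc\hosts rather than /etc/hosts):

    # Hosts entry on the client machine
    192.168.0.100   joplin.example.com

From then on the client resolves joplin.example.com to the internal address and the traffic never leaves your LAN, while the certificate still matches because the hostname in the URL is unchanged.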

As @tessus says, if you have a Pi-Hole it's even easier and you can set the DNS for the entire network. So any machine on your network requesting joplin.example.com will be sent directly to 192.168.0.100 rather than via 101.102.103.104.

My ISP and the router they provide do not allow NAT loopback.

Internally I cannot hit a domain name that points to my home router's external IP address.

Yes, this is a problem with my ISP/router, but rarely have I come across a software project that was so strict about which domain it would respond to.

Yes, I can use Pi-hole (and I do), and I can also edit /etc/hosts on all my machines, phones, tablets, etc., but juggling all that is pretty no bueno for me long term.

It would be best to just not have Joplin Server be so strict about which domain it will respond to. In my experience, we pin domain names to applications via infrastructure. Applications like Rails can be set up to be pinned to a particular domain, but they lose a lot of flexibility that way and only cause more issues long term.


This is usually the case with containerized apps, or more precisely when also using a reverse proxy setup. As soon as an app requires you to set a base URL, you already know that you are out of luck using multiple base URLs at the same time.

Great. In this case the solution is very easy (and nothing to juggle around): Local DNS > CNAME Records:

Domain: your.external.domain.com
Target: internal.synology.host.name.local

No need to edit any /etc/hosts files in such a case. You make this entry and you are good to go.

(The APP_BASE_URL has to be set to your.external.domain.com.)
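If you want to see what this looks like under the hood, Pi-hole stores those web UI entries as plain dnsmasq records. A rough sketch with the placeholder names from this thread (the file paths are the usual ones for Pi-hole v5, but they may differ on your install):

    # Local DNS > DNS Records (A record for the Synology itself),
    # typically written to /etc/pihole/custom.list:
    192.168.0.100 internal.synology.host.name.local

    # Local DNS > CNAME Records,
    # typically written to /etc/dnsmasq.d/05-pihole-custom-cname.conf:
    cname=your.external.domain.com,internal.synology.host.name.local

Clients on the LAN then resolve the external name straight to the Synology, and Joplin Server never notices the difference, because the Host header it sees is still your.external.domain.com.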

Update: If you really wanted to use multiple hostnames, you would have to use a reverse proxy and mod_rewrite to translate your requests. This can be rather tricky though.
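For completeness, a very rough Apache sketch of that idea (the internal hostname, the container address, and the default Joplin Server port 22300 are all assumptions; mod_proxy, mod_proxy_http and mod_rewrite must be enabled, and TLS directives are left out):

    <VirtualHost *:443>
        ServerName internal.synology.host.name.local

        RewriteEngine On
        # Proxy every request to the Joplin Server container
        RewriteRule ^/(.*)$ http://192.168.0.100:22300/$1 [P,L]

        # Map redirects the app issues against its external base URL
        # back onto this internal hostname
        ProxyPassReverse / https://your.external.domain.com/
    </VirtualHost>

This only fixes redirect headers, though; absolute URLs embedded in pages or API responses still point at the external domain, which is why it gets tricky.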

Thanks for the responses, I really do appreciate the time and thought put into them.

Is there any way to not use a reverse proxy setup? I run lots of containerized apps via Docker:

  • Minecraft Bedrock
  • Joplin
  • Pihole
  • HomeAssistant
  • ...and other various applications.

In this case only Joplin forces me to tie it to a domain. All the others can be reached via IP, aliased domains, port forwarding from the outside, etc.
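For reference, a minimal docker-compose sketch of where that pinning happens with the official joplin/server image (the port, tag, and variable names follow the image defaults as I understand them; database settings are omitted):

    version: "3"
    services:
      joplin:
        image: joplin/server:latest
        restart: unless-stopped
        ports:
          - "22300:22300"
        environment:
          - APP_PORT=22300
          - APP_BASE_URL=https://your.external.domain.com
          # database settings (DB_CLIENT, POSTGRES_*) omitted here

APP_BASE_URL is the only place the container learns its public address; none of the other services above need anything like it.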


I don't think there's an option to terminate TLS in the container, but maybe I am wrong. If there is, you can run it without a reverse proxy.

The reason for a base URL is the following:

If an app uses a URL like /test/here.html or /another/dir, runs on localhost:8888, and I reverse proxy it to https://myserver.com/subdir/, I will run into an issue as soon as the app or a link returns /test/here.html, because https://myserver.com/test/here.html does not exist; it should be https://myserver.com/subdir/test/here.html.
(However, it would work if you created a subdomain such as joplin.myserver.com and reverse proxied that to the app, but not everyone can create subdomains, and some people prefer to use subdirs (aliases) instead of subdomains.)
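A rough nginx sketch of that difference, using the same example names (nginx is just an illustration here, and TLS configuration is left out):

    # Subdirectory proxy: the app's absolute links such as /test/here.html
    # escape the /subdir/ prefix, so the browser asks myserver.com for
    # /test/here.html, which does not exist there.
    server {
        listen 80;
        server_name myserver.com;

        location /subdir/ {
            proxy_pass http://localhost:8888/;
        }
    }

    # Subdomain proxy: absolute paths keep working, because the app owns
    # the whole URL space of joplin.myserver.com.
    server {
        listen 80;
        server_name joplin.myserver.com;

        location / {
            proxy_pass http://localhost:8888;
        }
    }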

The thing is that, AFAIK, all Node.js apps actually use absolute paths (which I believe has to do with the way routes are built in Node.js).

Thus a base URL is required in situations where the app is not run at the root of a separate subdomain, unless there's another reason why the server needs the APP_BASE_URL.
Also, sometimes the base URL only denotes the directory relative to the root and does not require the domain name to be part of it. I have no idea why it is required in this case; I don't know the server code.

Laurent can speak to the specifics since he wrote the code.

Very neat idea! Thank you @dpoulton & @tessus !
