I have a project which runs a variety of sites for our network - currently ~900 different domains. Restricting keys by domain name is not really practical for us.
I'm currently using LS, but one of the problems I have is that it doesn't support wildcard domain rules. This means ephemeral hosts quickly build up a large number of rules which soon become redundant.
I suppose so, or maybe *.tld? I'm thinking it would depend on your clients' behavior. Clients don't go to that many unique FQDNs, so dynamic creation + caching should be quite achievable.
That can definitely be better if you have full control over a domain. But what stopped me from going that far is dictionary attacks. There are plenty of spammers who will just try hundreds of common-ish addresses at a domain.
I considered going even further and building something where I'd register each new address as I gave it out, but for now, just adding a tag onto my regular address has been good enough for me.
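For anyone unfamiliar, the "tag" approach is usually plus-addressing: one real mailbox, with a per-service tag appended after a plus sign. A minimal sketch (the mailbox name and tag format here are made up, not my actual setup):

    // Plus-addressing sketch: one base mailbox, a disposable tag per service.
    const taggedAddress = (service: string): string => `me+${service}@example.com`;

    console.log(taggedAddress("newsletter")); // -> me+newsletter@example.com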
Yes. Caddy is what we use, since not much else can do it as easily as Caddy can. It's our go-to tool for several projects that require custom domains, and we really, really appreciate it!
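The piece that makes this workable is Caddy's on-demand TLS, which can check an "ask" URL before issuing a certificate for a hostname. Here is a rough sketch of such an endpoint in TypeScript on Node; the port, the path, and the in-memory allow list are placeholders for however you actually look domains up, not our real setup:

    // On-demand TLS "ask" endpoint: Caddy calls it with ?domain=<hostname>
    // and only issues a certificate if we answer with a 2xx status.
    import { createServer } from "node:http";

    const allowedDomains = new Set(["customer-one.example", "customer-two.example"]);

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");
      const domain = url.searchParams.get("domain") ?? "";
      res.statusCode = allowedDomains.has(domain) ? 200 : 403;
      res.end();
    }).listen(5555);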
I'm just saying that it's not something that is documented well or purpose built for that scenario.
So. The question I'm asking myself now is how to fix this. Giving *.domain.com a shared quota would allow one Tumblr or GitHub Pages user to monopolize all storage, effectively removing local storage for this kind of scenario (and also removing it for the host, which is even more annoying).
A possibly workable solution would be to allow creation of new keys only for the first-party origin. What I mean is that whatever.example.com gets full access if that's what the user is currently viewing directly in their browser.
*.example.com embedded via iframes could get either read-only access, or read-write access limited to existing keys, maybe capped at, let's say, 4 KB.
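To make that concrete, here is the decision logic spelled out as a sketch in TypeScript. None of this is a real browser API; the names and the 4 KB cap are just the proposal above written down:

    // Hypothetical write policy for localStorage under *.example.com.
    interface WriteRequest {
      origin: string;          // origin asking to write, e.g. "whatever.example.com"
      topLevelOrigin: string;  // what the user is actually viewing in the address bar
      keyExists: boolean;      // existing key, or a brand-new one?
      valueBytes: number;      // size of the value being written
    }

    const EMBEDDED_VALUE_CAP = 4 * 1024; // the "let's say 4 KB" limit

    function allowWrite(req: WriteRequest): boolean {
      if (req.origin === req.topLevelOrigin) return true; // first-party: full access, new keys allowed
      if (!req.keyExists) return false;                   // embedded third party: no new keys
      return req.valueBytes <= EMBEDDED_VALUE_CAP;        // existing keys only, and only small values
    }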
This sounds like a really complicated solution though. Any better ideas?
You still need a file to generate them. Unless you could get all the domains to point to the same document root, or use a symlink or something.
Anyway, hosting companies like that massively oversell their capacity knowing that most people won't use it. If you actually tried to put up 30,000 (or, heck, even 300) active domains, the server would probably just die under the load.
I was thinking about 1) and it seems like a really cool idea. If you could also query the sites trusted by some person, that would be somewhat like an HN made up of domains only (and if you can have any domain you want, you'd probably use one per site rather than building a hierarchy again). Realistically I probably only need 20 or so domains on a typical day anyway, so that would only require trusting 5 or so other developers.
I launched a side project to manage all the domain names I've registered over the years. The service provides a simple DNS records manager, an email account that covers all the domains, auto-generates LE certs, and either serves a single-page HTML website or redirects to a GitHub repo page. There's no limit on how many domains can be added to an account. https://projectpending.com/
I don't believe one of those cheap ones will work, as they're only for one domain. A wildcard won't work either, since it covers only one domain with many subdomains. I want many domains, each with many subdomains.