Whether you call such a list a registry or not is fairly trivial, I think. Their web server uses such a mapping, and for some reason usernames with suffixes on that list confused something. Call it a registry, or a mapping, or a list, the meaning of the statement is the same, and I don't think it prevents most people from understanding what's going on.
Name registries still exist in GNS, and they are still useful, but anyone can be a registry for anybody. So in a cultural context where a particular registry is mainstream, names are almost unique. The difference is that registries become contestable: you can avoid them, and we can insist that they be libre, open source, and forkable.
Look at this section in the document:
> 3.3 Relative Names for Transitivity of Delegations
> Users can delegate control over a subdomain to another user’s zone by indicating this in a new record
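For a sense of how that transitive delegation plays out, here is a toy sketch. The zone names and records are made up for illustration and are not the actual GNS record format; the point is just that resolution follows delegation records from zone to zone.

```python
# Toy illustration of transitive delegation (not the real GNS wire format).
# Each zone maps labels either to an address or to another zone it delegates to.

zones = {
    "alice-zone": {
        "www":   ("A", "192.0.2.10"),
        "bob":   ("DELEGATE", "bob-zone"),    # Alice delegates the "bob" label to Bob's zone
    },
    "bob-zone": {
        "www":   ("A", "198.51.100.7"),
        "carol": ("DELEGATE", "carol-zone"),  # Bob in turn delegates "carol"
    },
    "carol-zone": {
        "www": ("A", "203.0.113.3"),
    },
}

def resolve(name: str, start_zone: str):
    """Resolve a relative name like 'www.carol.bob' starting from a zone."""
    labels = name.split(".")
    zone = start_zone
    while labels:
        label = labels.pop()            # rightmost label first
        rtype, value = zones[zone][label]
        if rtype == "DELEGATE":
            zone = value                # follow the delegation and keep resolving
        else:
            return value
    return zone

print(resolve("www.bob", "alice-zone"))        # 198.51.100.7
print(resolve("www.carol.bob", "alice-zone"))  # 203.0.113.3 (delegations chain transitively)
```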
> Then when the centralized registry decides it wants everyone to start using their real names, we're back to where we started.
Not really. Directory lookup should use a standardized API so that when one directory service goes rogue, clients can just switch to another they trust.
I do think it's essential for this to be standardized, and not only for this reason: clients/hosts shouldn't be saddled with the responsibility of indexing the entire social web themselves.
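Something like a pluggable lookup interface would be enough. This is only a sketch, with invented method names and shapes, but it shows the shape of the thing: clients talk to an interface, not to one blessed directory, so switching providers is configuration rather than new code.

```python
# Sketch of a pluggable directory-lookup API (names and shapes invented for illustration).
from abc import ABC, abstractmethod

class DirectoryService(ABC):
    @abstractmethod
    def lookup(self, name: str) -> str:
        """Return whatever the directory maps this name to (address, key, ...)."""

class LocalDirectory(DirectoryService):
    """The simplest provider: the user's own list, no third party involved."""
    def __init__(self, mapping: dict[str, str]):
        self.mapping = mapping
    def lookup(self, name: str) -> str:
        return self.mapping[name]

# A network-backed provider (riseup, duckduckgo, ...) would implement the same
# interface over the standardized wire protocol.

def resolve(name: str, directories: list[DirectoryService]) -> str:
    """Ask each trusted directory in turn; a rogue one can simply be dropped from the list."""
    for d in directories:
        try:
            return d.lookup(name)
        except KeyError:
            continue
    raise LookupError(name)
```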
OK, but how is a domain name essentially different from 'any form of text string'? There needs to be some central registry, unless you are OK with strings so long that nobody will effectively double-check that they are correct.
The idea of "removing" a name seems technically unreasonable too. Lists may legitimately be pattern-based (like "ads.*.net" for instance, where ads.targetdomain.net technically matches too).
Yeah, that makes sense. Basically namespacing on some unique identifier, such as a username/org name. That would alleviate a lot of package squatting concerns.
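As a rough illustration (npm-style scopes are one real-world precedent for this), the registry then only has to guarantee uniqueness of the owner prefix, and squatting a bare package name globally stops being possible. The naming convention below is just an example, not a specific registry's format.

```python
# Toy parser for namespaced package names of the form "@owner/package".
def parse_scoped(name: str) -> tuple[str, str]:
    owner, _, pkg = name.partition("/")
    if not owner.startswith("@") or not pkg:
        raise ValueError(f"expected '@owner/package', got {name!r}")
    return owner[1:], pkg

print(parse_scoped("@alice/http-client"))      # ('alice', 'http-client')
print(parse_scoped("@acme-corp/http-client"))  # same package name, no collision
```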
> If you can share those tags (and why not?), now you have decentralized "authorities" for names.
Until an indexing service becomes a de facto standard, once again making it centralized. This is what I meant about federation (i.e. sharing your name lists) gravitating towards centralization.
> Squatting on previously-unused-in-the-service IRL identities doesn't matter at all.
It does when I can register thousands and thousands of names. That's why registration needs to be too difficult for squatters while not being too difficult for ordinary users.
The public suffix list is an abomination --- a useful, pragmatic, largely successful abomination, but an abomination nevertheless. The PSL centralizes and makes static a database that should be dynamic and distributed. It's a throwback to the bad old pre-DNS internet where everyone would copy around /etc/hosts files and rely on ad hoc human updating to keep host->address mapping up to date.
The information in the public suffix list belongs in DNS.
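Something like the lookup below is what you'd want to be able to do. To be clear, the `_psl` TXT record is purely hypothetical, invented here to illustrate the point; today the same question can only be answered by consulting the static file everyone copies around (https://publicsuffix.org/list/public_suffix_list.dat).

```python
# Hypothetical sketch: if suffix status lived in DNS, a resolver could ask the
# zone itself instead of shipping a static copy of the PSL.
import dns.resolver  # dnspython

def is_public_suffix(name: str) -> bool:
    try:
        answers = dns.resolver.resolve(f"_psl.{name}", "TXT")  # record name is invented
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(b"public-suffix=1" in rr.strings[0] for rr in answers)
```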
IANA (not w3c) just maintains a directory of who uses them to avoid collisions.
As should be obvious from the collection, there’s nobody in charge here. You can just ask for whatever prefix you want. I guess in that way, I agree with you: it is nearly anarchy.
I personally think that’s fine. Why does it matter if the names aren’t consistent? The different protocols using well-known don’t have to talk to each other. Any individual program is likely to just care about one for whatever protocol it’s speaking. All that matters is they don’t overlap.
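A few of the registered suffixes make the point; each protocol just picks its own name under the shared /.well-known/ prefix and never has to care about the others (example.org is a placeholder host here):

```python
# A handful of registered well-known suffixes; IANA's registry only has to keep
# them from colliding, nothing else coordinates between the protocols.
WELL_KNOWN = {
    "ACME (LetsEncrypt)": "acme-challenge/<token>",
    "security.txt":       "security.txt",
    "WebFinger":          "webfinger?resource=acct:alice@example.org",
    "OpenID Connect":     "openid-configuration",
}

for protocol, suffix in WELL_KNOWN.items():
    print(f"{protocol:20} https://example.org/.well-known/{suffix}")
```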
It's similar to what ideas 2.1 and 2.2 are in the blog post - a local directory that is maintained and authenticated centrally and then distributed to browsers that perform a central lookup.
The downsides are that it is too centralized - it isn't difficult to imagine that a government agency would want to sinkhole silkroad.tor from the default registry.
With an alternate registry, you have to strike a balance: you need to know enough about the directory provider to trust them, but not so much can be known about them that they are exposed to legal recourse.
i.e. I'd trust a registry from riseup or duckduckgo, but that same registry is likely to be the target of legal and hacking attempts. Likewise, any provider sufficiently protected from those threats likely isn't well known enough to be trustworthy.
One of the benefits of the existing names is that they also authenticate the site (assuming you check correctly, usually out of band via a trusted source like a directory or search engine). That part can be replaced with certificates and an issuance model identical to what LetsEncrypt does.
In terms of hosting the directory, that almost has to be decentralized over a p2p network, similar to namecoin. Namecoin also addresses distributing names and typosquatting, and it could be adapted to auction names.
> A petname is a name that can be freely chosen by the user. This results in non-unique name-value mappings as www.bob.gnu to one user might be www.friend.gnu for someone else.
If names aren't fixed, how do you (for example) link from one website to another? Or share a name with another user?
One of the advantages of the public DNS system, with unique, canonical names, is that the same name should mean the same thing to everyone.
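To make the question concrete, here is my reading of how the quoted scheme is supposed to work, as a toy sketch: the stable thing you exchange or link to is the underlying identifier (in GNS, a zone's public key), and each user binds their own label to it. The key and the gns:// form below are invented for illustration.

```python
# Two users bind different petnames to the same underlying zone key.
BOB_ZONE_KEY = "000G006K2TJNMD9VTCYRX7BRVV..."  # truncated, illustrative only

alice_petnames = {"bob":    BOB_ZONE_KEY}   # Alice calls this zone "bob"
carol_petnames = {"friend": BOB_ZONE_KEY}   # Carol calls the same zone "friend"

def make_link(petnames: dict[str, str], label: str) -> str:
    """A shareable link points at the key, not at the sender's private label."""
    return f"gns://{petnames[label]}/www"

# Both users end up linking to the same place despite using different names.
assert make_link(alice_petnames, "bob") == make_link(carol_petnames, "friend")
```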
Name canonicalization gets you part of the way there. But unless you want to go full-on namespacing, you must realize you are fighting a pointless battle.
You cannot reasonably expect to save users from themselves in every way.