"import" specification

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: "import" specification

Post by biolizard89 »

Examples of how a spec could behave:

Code: Select all

d/host: {"#import": "dd/target"}
dd/target: "1.2.3.4"
yields: "1.2.3.4"

d/host: {"#import": "dd/target"}
dd/target: {"ip": "1.2.3.4"}
yields: {"ip": "1.2.3.4"}
// Target can introduce other fields than "ip"

d/host: {"ip": {"#import": "dd/target"} }
dd/target: "1.2.3.4"
yields: {"ip": "1.2.3.4"}
// Target cannot introduce other fields than "ip"

d/host: {"ip": {"#import": "dd/target"} }
dd/target: ["1.2.3.4"]
yields: {"ip": ["1.2.3.4"]}
// Target controls all IPs

d/host: {"ip": [ {"#import": "dd/target"} ] }
dd/target: "1.2.3.4"
yields: {"ip": ["1.2.3.4"]}
// Target controls only 1 IP

d/host: {"ip": [ {"#import": ["dd/target", ["ipx", 0] ] } ] }
dd/target: {"ipx": ["1.2.3.4"] }
yields: {"ip": ["1.2.3.4"]}
// Only chooses part of the target

d/host: {"#import": [ ["dd/target1", [] ], ["dd/target2", ["path", 0] ] ]}
dd/target1: {"ip": "1.2.3.4"}
dd/target2: {"path": [ {"ip6": "::1"} ] }
yields: {"ip": "1.2.3.4", "ip6": "::1"}
// Both targets can introduce other fields; solely for space purposes rather than control delegation.  Earlier imports have precedence in the case of conflict.

d/host: {"#import": [ ["dd/target1", [] ], ["dd/target2", ["path", 0] ] ]}
dd/target1: "HELLO "
dd/target2: {"path": [ "WORLD" ] }
yields: "HELLO WORLD"
// String concatenation

d/host: {"#import": [ ["dd/target1", [] ], ["dd/target2", ["path", 0] ] ]}
dd/target1: ["HELLO"]
dd/target2: {"path": [ ["WORLD"] ] }
yields: ["HELLO", "WORLD"]
// Array concatenation
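To make the semantics of the examples above concrete, here is a rough Python sketch of a resolver that implements them. Nothing here is part of any agreed spec: `lookup` is an assumed helper mapping a name like "dd/target" to its parsed JSON value, and the merge rules (earlier import wins, arrays and strings concatenate) are exactly the ones illustrated above.

```python
def navigate(value, path):
    """Follow a path like ["ipx", 0] into a JSON value."""
    for step in path:
        value = value[step]
    return value

def merge(a, b):
    """Combine two imported values; earlier imports take precedence."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(b)
        out.update(a)          # earlier import wins on conflict
        return out
    if isinstance(a, list) and isinstance(b, list):
        return a + b           # array concatenation
    if isinstance(a, str) and isinstance(b, str):
        return a + b           # string concatenation
    raise TypeError("cannot merge %r and %r" % (type(a), type(b)))

def resolve(value, lookup):
    """Recursively expand {"#import": ...} directives."""
    if isinstance(value, dict) and set(value) == {"#import"}:
        spec = value["#import"]
        if isinstance(spec, str):                # bare name: import whole value
            return resolve(lookup(spec), lookup)
        if spec and isinstance(spec[0], str):    # single [name, path] pair
            name, path = spec
            return navigate(resolve(lookup(name), lookup), path)
        result = None                            # list of [name, path] pairs
        for name, path in spec:
            part = navigate(resolve(lookup(name), lookup), path)
            result = part if result is None else merge(result, part)
        return result
    if isinstance(value, dict):
        return {k: resolve(v, lookup) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v, lookup) for v in value]
    return value
```

For instance, with `lookup("dd/target")` returning `{"ipx": ["1.2.3.4"]}`, resolving `{"ip": [{"#import": ["dd/target", ["ipx", 0]]}]}` yields `{"ip": ["1.2.3.4"]}` as in the sixth example.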
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

hla
Posts: 46
Joined: Mon Nov 10, 2014 12:01 am
os: linux
Contact:

Re: "import" specification

Post by hla »

I don't think import can or should be implemented in a namespace-agnostic fashion. Different namespaces are liable to have different merge and precedence rules.

For example, if you define import as performing a JSON-level merger of objects, this breaks some features of d/, such as SRV records.

Code: Select all

dd/a
{
  "service": [["http","tcp",10,10,80,"www1"],["xmpp-client","tcp",10,10,5222,"xmpp1"]]
}

d/example
{
  "import": [["dd/a"]],
  "service": [["http","tcp",10,10,80,"www2"],["https","tcp",10,10,443,"www2"]]
}
How should an agnostic import mechanism transform this? Either it assumes that "service" occludes fully, or it appends the arrays. Both of these result in the wrong behaviour, because SRV records must be occluded by DNS owner name. The correct set of SRV records at d/example is: _http._tcp IN SRV 10 10 80 www2; _https._tcp IN SRV 10 10 443 www2; _xmpp-client._tcp IN SRV 10 10 5222 xmpp1.
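A merge that respects SRV semantics has to key on the owner name, i.e. the (service, protocol) pair. A minimal Python sketch of such a semantics-aware merge, assuming the record layout [service, protocol, priority, weight, port, target] from the example above (merge_srv is a hypothetical helper, not part of any spec):

```python
def merge_srv(importer, imported):
    """Importer's SRV records occlude imported ones per (service, protocol) owner name."""
    seen = {(svc, proto) for svc, proto, *_ in importer}
    return importer + [r for r in imported if (r[0], r[1]) not in seen]
```

Applied to the example, d/example's _http._tcp record occludes dd/a's, while dd/a's _xmpp-client._tcp record survives — the behaviour that neither full occlusion nor plain array appending can produce.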

Jeremy proposed to change the subdomain selector field of the import directive to be more generic, but for the reason above I think pursuing generic import is a fool's errand. Even if it were pursued, creating a fully agnostic language for expressing mergers could easily become a monstrosity like XSLT. The important consideration here, I think, is whether the effort saved by agnostic import significantly exceeds the effort required to analyze the problem and develop it. Since the Namecoin project is currently advocating only two namespaces, d/ and id/, and most of the interest in import relates to d/ and not id/, I don't think it's worth the trouble even if it is possible. And even if you did develop such an import system, you'd essentially be betting that it doesn't conflict with the requirements of any future namespace, which I find troubling.

There may also be security issues caused by applying generic transformations to unknown structures, especially if those structures require custom merge or occlusion rules. As one example, my proposed syntax for TLSA records is similar to the above; a generic import mechanism could produce an incorrect set of TLSA records, one which nominates certificates that should not be nominated. Software which uses TLSA records should not make these errors (because it would be aware of the custom merge and occlusion rules, again detracting from the generality of any import mechanism), but import may sometimes be processed by an intermediate entity which is not the TLSA consumer (and which could be on the same machine, a different machine, etc.). Software consuming data provided by that intermediate entity may assume that the records so provided have been merged in a fashion which is aware of the semantics of a particular structure, which may be an unsafe assumption. Regardless of any reservations one might have about the trustworthiness of such an arrangement, I think it is a clear hazard, and I suspect other hazards would be posed as well.

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: "import" specification

Post by biolizard89 »

@hla I think the specific concerns you bring up with the "service" field can be fixed by redefining the "service" field. As far as I know, the "service" spec is experimental and no one is actively using it, so it's not a problem to change it. Something like this:

Code: Select all

"service": {
    "tcp": {
        "http": [10,10,80,"www2"]
    }
}
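Under that restructuring, a purely generic recursive map merge (importer wins on leaf conflicts) would deliver the occlusion-by-owner-name behaviour for free, since the owner name becomes the map key. A hypothetical sketch of such a merge:

```python
def map_merge(importer, imported):
    """Generic recursive JSON-map merge; the importer wins on conflicts."""
    out = dict(imported)
    for k, v in importer.items():
        if k in out and isinstance(v, dict) and isinstance(out[k], dict):
            out[k] = map_merge(v, out[k])   # recurse into nested maps
        else:
            out[k] = v                      # importer occludes the leaf
    return out
```

With hla's SRV example rewritten into this nested form, the importer's "http" entry occludes the imported one, "https" is added, and the imported "xmpp-client" entry survives — exactly the record set hla identifies as correct.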

hla
Posts: 46
Joined: Mon Nov 10, 2014 12:01 am
os: linux
Contact:

Re: "import" specification

Post by hla »

The following is a transcript of a productive discussion regarding import, its genericity, and NMControl's handling of it. It should probably be considered recommended reading for anyone working on NMControl.

Code: Select all

2015-05-14 17:57:45	@Jeremy_Rand	hey hl
2015-05-14 17:57:55	@Jeremy_Rand	quick question about DANE if you've got a sec
2015-05-14 17:58:18	@Jeremy_Rand	Let's say I have a public key
2015-05-14 17:59:11	@Jeremy_Rand	and I want to specify that the public key must show up somewhere in the cert chain, but I don't want to specify whether it's the root cert, an intermediate cert, or the actual website cert
2015-05-14 17:59:22	@Jeremy_Rand	in other words, the same semantics as what HPKP does
2015-05-14 17:59:34	@Jeremy_Rand	what do I put for the certificate usage field?
2015-05-14 18:00:03	@Jeremy_Rand	Does "Private CA" do that?  Or do I have to list 2 identical public keys, one with Private CA and the other with End-Entity Cert?
2015-05-14 18:02:35	hl	"""0 -- Certificate usage 0 is used to specify a CA certificate, or
2015-05-14 18:02:35	hl	      the public key of such a certificate, that MUST be found in any of
2015-05-14 18:02:35	hl	      the PKIX certification paths for the end entity certificate given
2015-05-14 18:02:35	hl	      by the server in TLS.  This certificate usage is sometimes
2015-05-14 18:02:35	hl	      referred to as "CA constraint" because it limits which CA can be
2015-05-14 18:02:35	hl	      used to issue certificates for a given service on a host.  The
2015-05-14 18:02:35	hl	      presented certificate MUST pass PKIX certification path
2015-05-14 18:02:35	hl	      validation, and a CA certificate that matches the TLSA record MUST
2015-05-14 18:02:35	hl	      be included as part of a valid certification path.
2015-05-14 18:02:49	hl	It sounds to me like Usage 0 allows matching via intermediates.
2015-05-14 18:03:37	hl	I think what it's saying is: valid := (validUnderConventionalCABasedValidationRules && pathContainsMatchingCertificateAtAnyPoint(...))
2015-05-14 18:04:22	@Jeremy_Rand	hl: but the server's cert isn't a CA cert.  So I'm unsure of whether a match in the server cert is sufficient for 0.
2015-05-14 18:04:33	hl	Yes, I don't think it would match end certificates.
2015-05-14 18:04:37	hl	You'd need usage 1 for that.
2015-05-14 18:05:12	@Jeremy_Rand	But let's say I want to match either.  Do I need to list 2 different TLSA records with the same pubkey but different cert usage field to achieve that?
2015-05-14 18:06:26	hl	I'm scanning the RFC to see if it explains the semantic meaning of multiple TLSA records for the same port.
2015-05-14 18:07:00	@Jeremy_Rand	surely it allows multiple certs per port, right?
2015-05-14 18:07:09	@Jeremy_Rand	load balancers sometimes require that
2015-05-14 18:08:45	hl	"""A DNS query can return multiple certificate associations, such as in
2015-05-14 18:08:45	hl	   the case of a server that is changing from one certificate to another
2015-05-14 18:08:45	hl	   (described in more detail in Appendix A.4).
2015-05-14 18:08:51	hl	I think this implies it's OR.
2015-05-14 18:09:20	@Jeremy_Rand	yeah, that's the only thing that would make sense
2015-05-14 18:09:41	@Jeremy_Rand	so, I should have 2 TLSA records per pubkey in my usecase?
2015-05-14 18:09:50	hl	I guess.
2015-05-14 18:10:24	@Jeremy_Rand	ok.  So the problem is that this means that to simulate HPKP with TLSA, you need a total of 4 TLSA records.  That's a lot to shove into the blockchain
2015-05-14 18:10:33	@Jeremy_Rand	Because HPKP requires at least 2 pubkeys
2015-05-14 18:10:41	@Jeremy_Rand	and they can match either the CA or the server cert
2015-05-14 18:10:57	hl	Umm...
2015-05-14 18:11:12	hl	But surely if you're setting up HPKP you know if the cert you're specifying is end-user or intermediate.
2015-05-14 18:11:30	hl	I mean you _have_ to know, if you have the cert and not just a hash handy you can just parse the certificate and see.
2015-05-14 18:12:14	@Jeremy_Rand	hl: yes.  But Firefox can't enforce that constraint.  So to guarantee the same behavior between DANE implementations and HPKP implementations, you have to require that the TLSA records support both
2015-05-14 18:12:40	@Jeremy_Rand	otherwise you get situations where something loads correctly in Firefox but is rejected by a DANE implementation
2015-05-14 18:12:52	@Jeremy_Rand	which would be hard to debug for domain owners
2015-05-14 18:13:00	hl	What are you implementing here?
2015-05-14 18:13:36	@Jeremy_Rand	hl: I'm trying to generate HPKP key pins using the TLSA records in Namecoin, in such a way that it's consistent behavior with a DANE-compliant implementation
2015-05-14 18:14:32	hl	Oh, right.
2015-05-14 18:14:47	hl	Well...
2015-05-14 18:15:42	hl	I don't think mapping either usage to HPKP is a problem in itself. They're all adequately constraining.
2015-05-14 18:16:16	hl	Ultimately it's just another 'site operator beware' issue.
2015-05-14 18:19:14	@Jeremy_Rand	hl: well, if the domain owner uses both usages, then the Namecoin client can be certain that it's consistent between DANE and HPKP.  If the domain owner doesn't use both, then the Namecoin client can warn him.
2015-05-14 18:19:39	hl	I guess. But it should still issue HPKP for single DANE records.
2015-05-14 18:19:39	@Jeremy_Rand	I dislike having situations where something will work in some cases and not in others
2015-05-14 18:19:53	hl	Jeremy_Rand: recall that I made a validator for d/
2015-05-14 18:20:51	@Jeremy_Rand	hl: But, if it issues HPKP for a single record, then Firefox's behavior will permit a superset of the correct situations to validate correctly
2015-05-14 18:21:06	hl	Jeremy_Rand: As opposed to doing no validation at all?
2015-05-14 18:21:41	@Jeremy_Rand	hl: I was planning on blacklisting all HTTPS certs for .bit sites that don't have both cert usages in the blockchain
2015-05-14 18:21:57	@Jeremy_Rand	that way we accept a subset of DANE rather than a superset
2015-05-14 18:22:01	@Jeremy_Rand	which seems safer
2015-05-14 18:22:45	hl	Hold on, though.
2015-05-14 18:23:19	hl	If an operator nominates an end certificate in DANE, is there any meaningful risk to allowing that certificate to appear as an intermediate or root certificate? In the first place, don't they have different X509 bits set, so they can't be used in that rule?
2015-05-14 18:23:20	hl	role.
2015-05-14 18:23:51	@Jeremy_Rand	hl: it's by public key hash, not cert hash
2015-05-14 18:24:15	hl	ah.
2015-05-14 18:24:53	hl	But if an operator constrains to an end certificate, we might assume they're in control of the corresponding key, and thus not going to do anything absurd with it.
2015-05-14 18:26:16	@Jeremy_Rand	hl: while that is correct from a security standpoint, I guess I just don't want people to see that HPKP implementations like Firefox load their site correctly, and then have DANE implementations reject the same configuration
2015-05-14 18:26:41	@Jeremy_Rand	particularly because some people don't read the documentation
2015-05-14 18:26:45	hl	Also, if an intermediate/root public key constraint is used, how does that meaningfully bind the holder of the root/intermediate private key? It just means they can issue whatever certificate they want with their (say, illicitly obtained) root/intermediate CA.
2015-05-14 18:27:26	hl	"I'll accept any site authenticated by this root/intermediate, but only if it's a root/intermediate! No using that pubkey as an end key!"
2015-05-14 18:27:44	hl	...yes and it's a root/intermediate CA key, so you can issue whatever you want...
2015-05-14 18:28:01	@Jeremy_Rand	yes, it doesn't make much sense from a security standpoint
2015-05-14 18:28:11	@Jeremy_Rand	I suppose I just don't like surprises in behavior
2015-05-14 18:28:24	hl	-->I think the risk of just issuing HPKP for DANE0/DANE1 is fine.
2015-05-14 18:28:31	hl	We can put a warning in whatever validator we offer.
2015-05-14 18:28:46	@Jeremy_Rand	ok, I guess you've convinced me
2015-05-14 18:29:01	hl	ok, so, forum post
2015-05-14 18:29:10	@Jeremy_Rand	yes
2015-05-14 18:29:16	hl	As for the SRV syntax you propose, I really think it should be homogenized:
2015-05-14 18:29:43	hl	{"map":{"_tcp":{"map":{"_http":{"srv":[[10,10,80,"www1.@"]]}}}}}
2015-05-14 18:29:55	hl	Yes, this is unpleasant, but it's consistent and thus makes for the simplest import algorithm.
2015-05-14 18:31:06	@Jeremy_Rand	hl: is there a reason for that syntax compared to the one I suggested?  In terms of handling imports on the JSON level it seems equally robust either way
2015-05-14 18:32:33	hl	It's consistent with the existing spec and doesn't introduce new rules...
2015-05-14 18:32:36	hl	Come to think of it.
2015-05-14 18:33:24	hl	I think you're working on the assumption of a generic import system that basically merges any JSON maps and occludes anything else, but there should be situations where that doesn't make sense.
2015-05-14 18:33:42	hl	For example, the WHOIS schema:
2015-05-14 18:34:10	hl	"info": { "name": "John Smith" }, "import": [["dd/extra"]]
2015-05-14 18:34:13	hl	dd/extra:
2015-05-14 18:34:27	hl	"info": { "email": "janedoe@example.com" }
2015-05-14 18:34:33	hl	A generic map-merge rule results in:
2015-05-14 18:34:46	hl	"info": {"name":"John Smith","email":"janedoe@example.com"}
2015-05-14 18:35:36	@Jeremy_Rand	hl: I actually wouldn't merge into items that already have content
2015-05-14 18:35:49	@Jeremy_Rand	so your example wouldn't affect info at all by my logic
2015-05-14 18:35:51	hl	Errr, that doesn't work either, though, does it?
2015-05-14 18:36:09	@Jeremy_Rand	I would require an import inside info
2015-05-14 18:36:12	@Jeremy_Rand	if you want to do that
2015-05-14 18:36:29	hl	{ "map": { "a": {"ip":"1.2.3.4"} }, "import": [["dd/extra"]] }
2015-05-14 18:36:48	hl	dd/extra: { "map": { "b": {"ip":"1.2.3.4"} } }
2015-05-14 18:36:49	@Jeremy_Rand	"info": {"name": "John Smith", "email": {"import": http://en.wikipedia.org/wiki/Special:Search?go=Go&search="dd/extra"}}
2015-05-14 18:37:06	hl	^
2015-05-14 18:37:53	@Jeremy_Rand	lolwat, did my IRC client just insert a WP link?
2015-05-14 18:37:58	hl	...Yes.
2015-05-14 18:38:08	@Jeremy_Rand	I have no idea why it did that
2015-05-14 18:38:17	@Jeremy_Rand	that is creepy
2015-05-14 18:38:26	hl	What IRC client?
2015-05-14 18:38:31	@Jeremy_Rand	Konversation
2015-05-14 18:38:57	@Jeremy_Rand	wow, that is the funniest bug I've seen in a long time
2015-05-14 18:39:01	@Jeremy_Rand	no idea how I triggered it
2015-05-14 18:40:00	@Jeremy_Rand	anyways
2015-05-14 18:40:20	@Jeremy_Rand	ignoring the URL that Konversation decided we needed to see
2015-05-14 18:40:31	@Jeremy_Rand	what I typed is roughly how I would envision it being handled
2015-05-14 18:40:49	hl	Yes, and what about my example?
2015-05-14 18:40:53	@Jeremy_Rand	hang on, let me try something
2015-05-14 18:40:56	hl	          hl | { "map": { "a": {"ip":"1.2.3.4"} }, "import": [["dd/extra"]] }
2015-05-14 18:40:56	hl	          hl | dd/extra: { "map": { "b": {"ip":"1.2.3.4"} } }
2015-05-14 18:40:57	@Jeremy_Rand	http://en.wikipedia.org/wiki/Special:Search?go=Go&search=test
2015-05-14 18:41:04	@Jeremy_Rand	ha, I figured it out
2015-05-14 18:41:15	@Jeremy_Rand	Putting something inside double brackets triggers it
2015-05-14 18:41:18	hl	Aah.
2015-05-14 18:42:46	@Jeremy_Rand	hmm
2015-05-14 18:43:41	@Jeremy_Rand	ok, so I guess it makes sense to have imports apply the way your example has.  What does doing so break?
2015-05-14 18:43:59	hl	The previous example with WHOIS.
2015-05-14 18:44:11	hl	I'm saying there is no semantically appropriate universal import merger algorithm.
2015-05-14 18:45:24	@Jeremy_Rand	hmm
2015-05-14 18:45:26	 *	Jeremy_Rand thinks
2015-05-14 18:45:41	@Jeremy_Rand	that's probably why I wanted "import" to always appear by itself
2015-05-14 18:45:54	hl	I think that really limits its utility.
2015-05-14 18:46:01	hl	Oh, wait, do you mean delegate?
2015-05-14 18:46:15	hl	But even then if you have multiple imports, they're applied in order.
2015-05-14 18:47:16	@Jeremy_Rand	So, all of my examples had "import" as the only field in its dict.  So you would have to explicitly define what fields you allowed to be imported, by setting them equal to {"import": ...}
2015-05-14 18:47:26	@Jeremy_Rand	That seems to be the safest way
2015-05-14 18:47:44	@Jeremy_Rand	but it also means you need to insert import in multiple places for some use cases
2015-05-14 18:47:52	hl	But... then you have to know what fields you're importing. So if you update the importee to add new directives, you have to update the importer.
2015-05-14 18:47:57	hl	That seriously limits its utility.
2015-05-14 18:48:10	@Jeremy_Rand	So, there are two different use cases here
2015-05-14 18:48:19	hl	If I'm delegating to a CDN, say cloudflare or w/e, cloudflare needs to be able to maintain its directives without my intervention.
2015-05-14 18:48:24	@Jeremy_Rand	one is for security, the other is for storage expansion
2015-05-14 18:49:53	@Jeremy_Rand	for security purposes, it is useful to be able to limit which parts of your JSON tree can be affected by the other name
2015-05-14 18:50:22	hl	Perhaps, but not everyone is going to need or want that.
2015-05-14 18:50:33	hl	And this expressly prevents general delegation, doesn't it?
2015-05-14 18:50:47	@Jeremy_Rand	like, I don't want a hot wallet to be able to insert TLS keys into my name, I only want the TLS keys to be affected by the cold wallet, the hot wallet is specifically for IP addresses and similar config
2015-05-14 18:51:52	hl	So there might be a case for letting import restrict what is imported. hmm.
2015-05-14 18:52:10	@Jeremy_Rand	yay for unforeseen complexity
2015-05-14 18:52:32	hl	This is my point, import is complicated and trying to generalize it will make it more complicated, not less.
2015-05-14 18:53:02	@Jeremy_Rand	I honestly think that the security delegation (an untrusted name can't mess with a trusted name) is the most important use case of import
2015-05-14 18:53:35	@Jeremy_Rand	expanding the value size limit can be done by multiple pushdatas if we really want
2015-05-14 18:53:39	hl	I guess.
2015-05-14 18:53:45	hl	But your proposal prevents that, doesn't it?
2015-05-14 18:53:51	hl	I mean.
2015-05-14 18:54:02	@Jeremy_Rand	hmm
2015-05-14 18:54:08	hl	I think most of the time, if people are delegating records to another entity, they're going to be delegating key management too.
2015-05-14 18:54:28	hl	Of TLSA, that is.
2015-05-14 18:54:41	@Jeremy_Rand	well, I certainly wouldn't be doing so when I'm keeping TLSA in a cold wallet and the IP in a hot wallet
2015-05-14 18:55:05	hl	Oh, but non-restricted "import" is secure for that case, isn't it?
2015-05-14 18:55:12	hl	Because the TLSA occludes. 
2015-05-14 18:55:19	hl	Although an attacker could still specify TLSA for new ports.
2015-05-14 18:55:47	@Jeremy_Rand	yeah, I don't want the attacker adding TLSA for new ports.  Or adding a .onion pubkey, for example
2015-05-14 18:56:23	hl	"import": [["dd/extra","",{"allow/deny":["tlsa"]}]]
2015-05-14 18:57:36	@Jeremy_Rand	I'm hesitant to do a blacklist because new spec features could break security, e.g. when fingerprint became tls
2015-05-14 18:57:40	@Jeremy_Rand	A whitelist seems safer
2015-05-14 18:57:47	hl	Yeah.
2015-05-14 18:58:03	@Jeremy_Rand	But yeah, a whitelist seems ok
2015-05-14 18:58:10	@Jeremy_Rand	so, what about this
2015-05-14 19:00:16	@Jeremy_Rand	"import": [ [ "dd/extra", [ ["map","untrusted"], ["ip"], ["ip6"] ] ] ]
2015-05-14 19:00:49	@Jeremy_Rand	this lets dd/extra control the ip and ip6 fields of the main domain, and anything in the untrusted subdomain
2015-05-14 19:01:12	hl	Actually, I'm not sure there are enough use cases to justify a general whitelist? I think we just need one optional setting on import: nonkey. If set, TLSA,DS,onion and any other 'cryptographic' directives are forbidden.
2015-05-14 19:01:29	hl	*nokey.
2015-05-14 19:02:50	@Jeremy_Rand	hl: by that logic, is onion still considered a key if TLSA is also set and enabled for .onion?
2015-05-14 19:02:57	@Jeremy_Rand	or would onion be like an IP?
2015-05-14 19:03:09	@Jeremy_Rand	it's not clear which behavior makes sense
2015-05-14 19:03:17	hl	Well... we can discuss that, but let's just say that it would be forbidden since not everything supports TLSA.
2015-05-14 19:03:43	hl	Essentially, "nokey" would prohibit anything which expresses a cryptographic binding or identity.
2015-05-14 19:04:03	@Jeremy_Rand	But there are cases where I want to delegate my onion but not my TLSA
2015-05-14 19:04:36	@Jeremy_Rand	I guess those cases are rare
2015-05-14 19:04:41	hl	But aren't you assuming that everything supports TLSA then?
2015-05-14 19:04:48	hl	I don't think that's wise.
2015-05-14 19:05:09	@Jeremy_Rand	I'm assuming that my .onion HTTP server is only responding with TLS enabled
2015-05-14 19:05:23	@Jeremy_Rand	meaning it will fail unless the TLSA is validated
2015-05-14 19:05:40	hl	Wait, what?
2015-05-14 19:06:00	@Jeremy_Rand	well, assuming that no CA issued a cert for my .onion
2015-05-14 19:06:07	@Jeremy_Rand	which I guess isn't always true
2015-05-14 19:06:11	@Jeremy_Rand	but usually is
2015-05-14 19:06:32	hl	Ah.
2015-05-14 19:06:55	@Jeremy_Rand	I think a whitelist is a bit more flexible
2015-05-14 19:07:09	@Jeremy_Rand	and also simpler to implement in NMControl
2015-05-14 19:07:17	hl	Okay, how about a whitelist, but not naming directives, but general categories? Like "key", "onion" and "other".
2015-05-14 19:08:06	@Jeremy_Rand	maybe "auth", "loc", and "hybrid"?
2015-05-14 19:08:22	@Jeremy_Rand	eh, I guess key is better than auth
2015-05-14 19:08:23	hl	hybrid?
2015-05-14 19:08:31	@Jeremy_Rand	onion is hybrid
2015-05-14 19:08:36	@Jeremy_Rand	because it's both a locator and a key
2015-05-14 19:08:43	hl	Not all non-key non-onion data is locational, so I'd just call it 'other'.
2015-05-14 19:08:53	@Jeremy_Rand	I guess
2015-05-14 19:09:09	@Jeremy_Rand	I'm just skeptical that all use cases fit into those 3 categories neatly
2015-05-14 19:09:14	hl	...Though if we ever add other categories, the meaning of other could change.
2015-05-14 19:09:23	@Jeremy_Rand	yeah
2015-05-14 19:09:26	hl	Alright, let's return to a directive-based whitelist/blacklist for now.
2015-05-14 19:09:55	@Jeremy_Rand	I'd prefer just whitelist over whitelist+blacklist
2015-05-14 19:10:07	@Jeremy_Rand	unless there's a use case you're thinking of?
2015-05-14 19:10:36	hl	That's too inflexible for delegation to other parties, I think. There are cases for blacklists. The usual reflexive opposition to blacklists doesn't really apply, since there's a very limited number of directives in existence.
2015-05-14 19:10:43	hl	At any rate, it may as well be left to the operator.
2015-05-14 19:10:59	@Jeremy_Rand	Hmm
2015-05-14 19:11:25	@Jeremy_Rand	Things do get renamed, e.g. fingerprint to tls
2015-05-14 19:11:43	@Jeremy_Rand	I realize it's rare
2015-05-14 19:12:08	hl	I suppose. But I think people who care enough to blacklist directives can keep on top of these things, and if we really need to we can alias them for the purpose of blacklisting.
2015-05-14 19:12:46	@Jeremy_Rand	Yes, but that requires the merge algorithm to keep track of aliases, which is against the idea of namespace neutral
2015-05-14 19:13:03	hl	Yes, but I've just demonstrated that namespace neutrality can't work, haven't I?
2015-05-14 19:13:17	@Jeremy_Rand	I'm not entirely convinced of that
2015-05-14 19:13:39	@Jeremy_Rand	You've demonstrated that it's tricky
2015-05-14 19:13:43	hl	But it can't. The semantics of JSON are arbitrary. But import has to be done in a semantics-aware manner.
2015-05-14 19:14:10	hl	Any import mechanism you specify is going to restrict the design of any future namespaces.
2015-05-14 19:14:26	@Jeremy_Rand	I think we can require Namecoin namespaces to use JSON semantics that are consistent with whatever import method we use, if they want imports to work in their namespace
2015-05-14 19:14:58	@Jeremy_Rand	i.e. import doesn't have to work with arbitrary JSON
2015-05-14 19:15:07	@Jeremy_Rand	just arbitrary semantics, with JSON structure that we choose
2015-05-14 19:15:11	hl	Yes, but to even do that you're assuming that the very presence of "import" means that an import should occur, and not some random string "import".
2015-05-14 19:15:55	@Jeremy_Rand	we could rename import to something that doesn't appear in common usage
2015-05-14 19:16:09	@Jeremy_Rand	e.g. stick a symbol in there
2015-05-14 19:16:11	hl	I mean what about this in d/: {"map": {"import": "1.2.3.4"}}  <- is this a subdomain import pointing to 1.2.3.4, or a request to import from Namecoin name "1.2.3.4"?
2015-05-14 19:16:27	hl	But this is still potentially insecure if an attacker is in a position to control the data inserted.
2015-05-14 19:17:08	@Jeremy_Rand	how so?
2015-05-14 19:17:48	hl	Well, maybe in some other namespace, someone maintains a name via data received automatically from an untrusted party, under the assumption that that data is 'inert'
2015-05-14 19:18:14	@Jeremy_Rand	hmm
2015-05-14 19:18:38	hl	Even if you make the import string "importPrettyPleaseIAmAMonkey", you're saying that anyone in any namespace should have to search arbitrarily deep in any JSON object of any kind and regardless of its intended semantic meaning to ensure that "importPrettyPleaseIAmAMonkey" is not one of the keys?
2015-05-14 19:19:36	hl	And if it is, what are they supposed to do? Just refuse to encode that JSON subobject as part of the name? But it could be a perfectly legitimate piece of data which needs to be included under some namespace.
2015-05-14 19:20:11	@Jeremy_Rand	I guess that's a good point
2015-05-14 19:21:48	@Jeremy_Rand	hmm, I wish we had had this discussion a year ago when Daniel and I were setting up NMControl... right now imports are handled independent of namespace
2015-05-14 19:22:13	@Jeremy_Rand	not necessarily too late to change that
2015-05-14 19:23:09	@Jeremy_Rand	that said, it is somewhat nice for new namespaces to not have to code the relevant semantics into NMControl just to get imports
2015-05-14 19:24:20	hl	Hold on.
2015-05-14 19:24:33	hl	What use cases are you envisioning for NMControl that its ability to handle arbitrary namespaces is important?
2015-05-14 19:24:36	hl	It serves DNS, and...?
2015-05-14 19:25:23	@Jeremy_Rand	hl: well, NMControl is intended to be a swiss army knife of Namecoin namespace functionality.  Including d/ and id/, and whatever other namespaces get created later
2015-05-14 19:25:33	hl	Right.
2015-05-14 19:25:34	@Jeremy_Rand	right now most of its functionality is d/
2015-05-14 19:25:39	hl	But how does NMControl provide access to id/?
2015-05-14 19:25:49	@Jeremy_Rand	but that's just because of limited development time
2015-05-14 19:26:16	@Jeremy_Rand	hl: well, Daniel was suggesting having NMControl run a GPG keyserver for id/
2015-05-14 19:26:49	@Jeremy_Rand	which would be pretty cool
2015-05-14 19:27:19	hl	That's an interesting idea, but I have to wonder if just a command line tool to import from id/ into GPG wouldn't work. Although a keyserver would work with any GUI frontends for GPG...
2015-05-14 19:27:50	hl	But even then, you're not importing by id, are you? You're just looking for key fingerprints, or searching by email.
2015-05-14 19:28:38	hl	So I don't see how Namecoin significantly boosts the security of that? Although it does provide a solution to things like pgp.mit.edu and all the ancient keys it has on it.
2015-05-14 19:31:45	@Jeremy_Rand	hmm.  I'm not sure how the keyserver protocol works, is it possible to search by id/ name and have the server give back keys whose email addresses aren't equal to the id/ name?  Will GPG accept that?
2015-05-14 19:32:20	hl	I'm guessing it will.
2015-05-14 19:32:30	hl	But that seems complicated...
2015-05-14 19:32:31	hl	I mean.
2015-05-14 19:32:46	@Jeremy_Rand	other fun thing you could do
2015-05-14 19:33:07	@Jeremy_Rand	is sign your id/ name with a DomainKeys sig for the email address it's for
2015-05-14 19:33:28	@Jeremy_Rand	and place that sig in a name, indexed by email address
2015-05-14 19:33:37	hl	You can search keyservers by keyID, name, e. mail, right? So GPG is going to expect to be able to do that. But in order for a Namecoin keyserver to be 'secure', it would have to only service requests for queries of the form "id/...", which GPG probably isn't in the habit of issuing. It'll probably work if you tell it to search for that as a name, but for all I know you can't stop it from sending queries
2015-05-14 19:33:37	hl	for e.g. keyIDs in other circumstances.
2015-05-14 19:34:15	@Jeremy_Rand	yeah, I don't know enough about how GPG handles those things
2015-05-14 19:34:29	hl	Basically, it's something for further investigation.
2015-05-14 19:34:34	hl	But I think the utility is small.
2015-05-14 19:34:34	@Jeremy_Rand	yep
2015-05-14 19:34:59	hl	If the utility for id/ is small, I suspect the utility for NMControl supporting yet other namespaces is even smaller. Unless someone can come up with a new namespace that proves me wrong.
2015-05-14 19:35:33	@Jeremy_Rand	it is true that the number of namespaces in wide use still stands at 2 after 4 years
2015-05-14 19:35:49	@Jeremy_Rand	unless you count u/, which needs to die
2015-05-14 19:35:50	hl	Yep.
2015-05-14 19:36:35	@Jeremy_Rand	ok, so can you post a brief summary of this discussion in the forum thread and see what domob and phelix think?
2015-05-14 19:37:03	@Jeremy_Rand	I don't have a huge problem with making it namespace specific, if it's a better design
2015-05-14 19:38:24	hl	Maybe I should just post the log? We covered quite a lot.
2015-05-14 19:38:34	@Jeremy_Rand	sure, that's fine
2015-05-14 19:38:43	@Jeremy_Rand	I hope Daniel and Phelix like reading
2015-05-14 19:38:47	@Jeremy_Rand	:-)

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: "import" specification

Post by phelix »

IMHO import should be generic and should be used if possible. If it does not do the job namespaces can define their own "importn" or something, but I think it should not really be necessary.

Also, for NMControl and other Python tools, I strongly advocate separating "import" and similar operations into a dedicated module, like this:

https://github.com/phelixnmc/nmcapi/blo ... process.py
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am
Contact:

Re: "import" specification

Post by domob »

phelix wrote:IMHO import should be generic and should be used if possible. If it does not do the job, namespaces can define their own "importn" or something, but I think that should not really be necessary.
I'm with phelix here (more details expressed on Github).
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: "import" specification

Post by biolizard89 »

phelix wrote:IMHO import should be generic and should be used if possible. If it does not do the job, namespaces can define their own "importn" or something, but I think that should not really be necessary.

Also, for NMControl and other Python tools, I strongly advocate separating "import" and similar operations into a separate module, like this:

https://github.com/phelixnmc/nmcapi/blo ... process.py
Could you provide an actual argument against what Hugo said, rather than just an IMHO? I'm not saying that Hugo is definitely right, but he is approaching this far more analytically than other people appear to be. For example: what do you do if someone has a subdomain named "import"?
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: "import" specification

Post by phelix »

biolizard89 wrote:
phelix wrote:IMHO import should be generic and should be used if possible. If it does not do the job, namespaces can define their own "importn" or something, but I think that should not really be necessary.

Also, for NMControl and other Python tools, I strongly advocate separating "import" and similar operations into a separate module, like this:

https://github.com/phelixnmc/nmcapi/blo ... process.py
Could you provide an actual argument against what Hugo said, rather than just an IMHO? I'm not saying that Hugo is definitely right, but he is approaching this far more analytically than other people appear to be.
It complicates things. It is difficult to update and maintain. It is not necessary.
biolizard89 wrote:For example: what do you do if someone has a subdomain named "import"?
Restructure, or replace "import" with something using reserved characters (an escape/control character, which might actually be a good idea)? But this seems to be a slightly different issue.
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: "import" specification

Post by biolizard89 »

So, I see 4 ways to approach the "import subdomain" issue.
  1. Have entirely separate import logic per namespace, e.g. implying the traversal of the "map" field. This appears to be what Hugo is suggesting.
  2. Have a namespace-specific set of contexts where "import" can appear.
  3. Have a namespace-specific name for the "import" field. For d/ this could be something that isn't a valid domain label.
  4. Make "import" global regardless of namespace, with a name that doesn't collide with anything.
These are listed in order of decreasing implementation complexity.

Option 2 seems to be uniformly simpler than Option 1, and I don't see any problem with it.
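To make Option 2 concrete, here is a minimal hypothetical sketch of a per-namespace whitelist of contexts where "import" would be interpreted as an operation. The namespace names, paths, and wildcard convention are invented purely for illustration:

```python
# Hypothetical sketch of Option 2: each namespace declares the JSON paths
# (contexts) where "import" is an operation; elsewhere it is plain data.
# The whitelist entries below are made up for illustration.
IMPORT_CONTEXTS = {
    "d/": [(), ("ip",), ("map", "*")],  # "*" matches any key or index
}

def import_allowed(namespace, path):
    """True if an "import" key at this path is treated as an operation."""
    for ctx in IMPORT_CONTEXTS.get(namespace, []):
        if len(ctx) == len(path) and all(
                c == "*" or c == p for c, p in zip(ctx, path)):
            return True
    return False

print(import_allowed("d/", ("map", "www")))  # → True
print(import_allowed("id/", ("gpg",)))       # → False
```

The per-namespace table is the only namespace-specific part; the traversal and merge logic stays generic.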

Option 3 will impose the restriction that no field in a namespace spec can accept arbitrary keys for dicts. In my opinion it doesn't make sense for a namespace to accept arbitrary data in any form, since base64 is better suited for that.

Option 4 will impose a restriction on the character set of all namespaces. To be honest, I don't see a problem here either. There are a few specific applications that Namecoin targets, and very few of them, if any, need the full character set. For example, the following rules could be used:
  • Don't reserve anything that has a meaning in a DNS name or a URL.
  • Don't reserve anything that has a meaning in common identity formats, e.g. real names, email addresses, etc.
  • Don't reserve anything that has a meaning in base64 encoding.
A more thorough list could (and should) be put together. From these rules, find a specific prefix to reserve for JSON operations, such as importing, decrypting, expiring, etc. Any namespace that actually needs that prefix is clearly a weird use case, and can use base64 to encode it.
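As a rough sketch of how Option 4's reservation might be enforced at validation time: one reserved prefix, checked recursively, with anything non-operational rejected. The "@" prefix and the operation set here are placeholder assumptions, not a decided character set:

```python
# Hypothetical sketch of Option 4: reserve one prefix (here "@", an
# assumption, not a decided value) for JSON operations across all
# namespaces, and reject user keys that collide with it.
RESERVED_PREFIX = "@"
OPERATIONS = {"@import"}  # could grow: decrypting, expiring, etc.

def check_keys(value, path=()):
    """Raise if a dict key uses the reserved prefix without being an op."""
    if isinstance(value, dict):
        for key, sub in value.items():
            if key.startswith(RESERVED_PREFIX) and key not in OPERATIONS:
                raise ValueError(
                    "reserved key %r at %r; base64-encode it instead"
                    % (key, path))
            check_keys(sub, path + (key,))
    elif isinstance(value, list):
        for i, sub in enumerate(value):
            check_keys(sub, path + (i,))

check_keys({"map": {"www": {"ip": "1.2.3.4"}}})  # fine: no reserved keys
check_keys({"map": {"@import": "dd/target"}})    # fine: a known operation
# check_keys({"map": {"@weird": 1}})             # would raise ValueError
```

The weird-use-case escape hatch is exactly as described above: a name that genuinely needs a key starting with the reserved prefix base64-encodes it.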

Thoughts on this? Phelix, Hugo, Daniel?
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

domob
Posts: 1129
Joined: Mon Jun 24, 2013 11:27 am
Contact:

Re: "import" specification

Post by domob »

Sounds good, and along the lines of my own thoughts (although you have taken more time to work it out nicely :)). I'm for 2 or 4, and I think either should be fine. We can even define a kind of "escape syntax" that is used to translate keys after processing the generic fields. E.g., "@import" (or whatever) does the importing, while "@@import" gets translated to the "@import" key in the dict. This way we get full flexibility for use cases that may need those keys, without too much complication.
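That escape idea could be sketched roughly like this, assuming "@" as the reserved prefix purely for illustration:

```python
# Hypothetical sketch of the escape syntax described above: "@import"
# triggers the operation, while "@@import" is an escaped literal key
# "@import". The "@" prefix is an assumption for illustration only.
def unescape_keys(value):
    """After the generic fields are processed, strip one escape level."""
    if isinstance(value, dict):
        out = {}
        for key, sub in value.items():
            if key.startswith("@@"):
                key = key[1:]  # "@@import" -> literal "@import"
            out[key] = unescape_keys(sub)
        return out
    if isinstance(value, list):
        return [unescape_keys(v) for v in value]
    return value

print(unescape_keys({"@@import": "just data"}))  # → {'@import': 'just data'}
```

Run after operation processing, this gives names that need literal "@..." keys a way to express them without colliding with the operations.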
BTC: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS
Use your Namecoin identity as OpenID: https://nameid.org/

Post Reply