phelix wrote: It seems both of these are somewhat out of context, as they are talking about tricking users into a MITM. Also, with these implementations we are not actually writing the juicy parts ourselves. Still, it is good to keep this in mind.
I don't see where it says anything about tricking users. Some of the intercepting proxies they refer to are part of antivirus packages, which presumably are installed with the users' knowledge. The point they are making is that this kind of proxy is difficult to code securely and introduces risks for the user; they're not primarily arguing that transparency would fix it. Yes, a proper implementation would reuse critical code from existing projects that have received extensive code review in the context of end-user browsing, but I am not aware of any such existing projects. (Convergence was the closest, and it doesn't work anymore.)
Also, I don't trust Python's TLS implementation. According to https://lwn.net/Articles/582065/ , they haven't even been able to get CA-based checking enabled by default yet, which doesn't speak well of how much they prioritize security. I don't have any empirical data on how good Python's TLS is, but I wouldn't want to rely on it without a thorough audit of all the code. Firefox and Chromium have *way* more reviewers looking at their TLS code than Python does.
It would be great to get Ryan's input here too.
The article is outdated. They have done it starting with Python 2.7.9 (with some improvements in 2.7.10): https://www.python.org/dev/peps/pep-0476/
Um. Did you actually read the link you posted? As of 2014 (when your link was written), the HTTPS implementation wasn't doing CA checking by default. Web browsers have been doing CA checking since... the 90's? That confirms my point that Python is years (or, in this case, decades) behind web browsers on TLS security. I believe, but have not checked, that the ssl module in 2.7.9 doesn't do CA checks by default either (you need to create a context if you want CA checks).
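To make the distinction concrete, here is a minimal sketch against the 2.7.9 ssl module (host and port are placeholders):

Code:
import socket
import ssl

# Low-level ssl.wrap_socket(): verify_mode defaults to CERT_NONE, so any
# certificate (including a MITM's) is accepted silently.
sock = socket.create_connection(("example.com", 443))
unverified = ssl.wrap_socket(sock)

# CA and hostname checks have to be requested via a context.
# create_default_context() (new in 2.7.9) loads the system CA store and
# sets CERT_REQUIRED plus check_hostname=True.
ctx = ssl.create_default_context()
sock2 = socket.create_connection(("example.com", 443))
verified = ctx.wrap_socket(sock2, server_hostname="example.com")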
(For the record: all solutions to Namecoin TLS are going to suck in some way.
I agree it is not pretty
I'm glad we're in agreement on this.
In cases where an intercepting proxy is the *only* way to do it, or the other ways are even worse (e.g. leaking cookies), I accept that it's a necessary evil. I am not convinced that that's the case.)
Then bring it on. It would be nice to have something. As long as we don't have anything else, it is OK to use an intercepting proxy. The alternative is plain old cleartext HTTP.
Hugo, Ryan, Rene, and I have all been working on this to some extent. Doing this "well enough" is not easy. Doing this "right" is probably impossible with current browsers. I'm hoping to have something usable within a month or two, at least for a subset of browsers (starting with Chrome on Windows, because I got tired of dealing with Firefox's stupidity... I'll come back to Firefox later).
Maybe they did a better implementation since they started from scratch?
I don't understand what you're arguing; that makes no sense to me.
phelix wrote: * With extra implementation and user effort, it should be possible to pass details about the actual website certificate through to the user.
edit: note: it would be good to route only .bit requests through the proxy (e.g. via a PAC file)
Well, I certainly hope that you're only intercepting .bit requests. That was one of the first things I implemented in both Convergence and mitmproxy. It would also be wise to put a name constraint for .bit on the CA certificate, so that (assuming that name constraints are honored) a compromise of the intercepting proxy doesn't expose non-.bit traffic.
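For reference, the PAC approach is tiny; a sketch, assuming the proxy listens on 127.0.0.1:8080 (address and port are placeholders):

Code:
// Route only .bit hosts through the local intercepting proxy;
// everything else connects directly and is never intercepted.
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".bit")) {
        return "PROXY 127.0.0.1:8080";
    }
    return "DIRECT";
}

And a sketch of generating such a constrained CA certificate with the Python cryptography package (assumes a recent version where the backend argument is optional; the common name and validity period are illustrative):

Code:
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"bit proxy CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed root
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # Critical name constraint: this CA may only issue for .bit names, so
    # (where clients honor constraints) a compromise of the proxy's key
    # can't be used to forge certificates for non-.bit sites.
    # NOTE: RFC 5280 expresses this constraint as "bit" (matching bit and
    # its subdomains); some tooling uses the leading-dot form instead, so
    # verify against the verifiers you target.
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName(u".bit")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(private_key=key, algorithm=hashes.SHA256())
)

As noted above, this only helps where name constraints are honored, so treat it as defense in depth rather than a guarantee.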