Security Implications of MitM-based Local Proxies

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Security Implications of MitM-based Local Proxies

Post by phelix »

As this keeps popping up in other threads I would like to discuss it separately here.
phelix wrote:For TLS support it is not necessary for the browser to support anything but a (SOCKS) proxy.
I am assuming you are advocating MitM-based support for custom validation. This is a highly undesirable hack and certainly should not be an end aspiration. Jeremy Rand and I are both opposed to the use of MitM-based methods if at all possible.
I would prefer to avoid intercepting proxies where feasible. I am not certain that we can completely avoid it for all use cases, but we should definitely be trying to implement TLS without intercepting proxies.
What is so bad about it? It is relatively easy to implement, including Namecoin TLS support.

Also, the Lenovo/Superfish incident was mentioned, but our setup would differ on several points:
* Only local connections are used
* A separate root cert for every user
* No easy-to-break password
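One way to enforce the first point is to bind the proxy socket only to the loopback interface, so remote machines cannot reach it at all. A minimal Python sketch (not tied to any particular proxy implementation; port 0 just lets the OS pick a free port for the demo):

```python
import socket

# Sketch: a listener bound only to the loopback interface, so
# connections from other machines are rejected at the socket level.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # loopback only; port 0 = OS-assigned
srv.listen(5)

host, port = srv.getsockname()
print(host)  # only reachable from the local machine
```

An actual proxy would accept connections in a loop here; the point is just that "local only" is enforced by the bind address, not by filtering after the fact.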

Is it not the case that somebody who could tamper with a local MitM proxy could also tamper with the system setup in general?
Last edited by biolizard89 on Fri Sep 25, 2015 6:32 pm, edited 1 time in total.
Reason: Partially fixed quote markup.
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: Security Implications of MitM-based Local Proxies

Post by biolizard89 »

Ryan will probably like to chime in with a long rant about how horrible intercepting proxies are... but the brief summary is that intercepting proxies offload the entirety of the TLS protocol implementation to the proxy. TLS is a tricky protocol to get right (LibreSSL is trying to improve this, but they're not there yet). The only significant cases of actually getting it right consistently are the major web browsers. If there is a bug in the TLS implementation of the proxy, it could be used to allow bad certificates to validate successfully.

SuperFish is an extreme example, and presumably we would be smarter than SuperFish. However, a lot of products have used intercepting proxies, and many of them have been found to have problems. This is why many security experts (including an expert from CloudFlare and an expert from EFF) have said that intercepting proxies should not be used, full stop.

Of course, we're in a different situation from SuperFish. SuperFish's functionality could easily have been implemented in a browser extension (or even a Greasemonkey script). In contrast, changing how TLS validation works is something that is not as easily doable in a browser extension. This is why, for example, Convergence uses an intercepting proxy -- at the time, the only other way (hooking the HTTP response event) would have leaked cookies.

However, things are more complicated now. There are ways of doing this in Firefox that don't involve intercepting proxies. Firefox will deprecate those APIs in 2016 or 2017, at which point we'll have to adapt. There are a lot of other methods that have been suggested to work with other browsers, which have different tradeoffs compared to intercepting proxies (e.g. some of them trust that no ICANN CA will issue a cert for .bit, but they let the browser do TLS as normal).

I am not fully opposed to intercepting proxies, since they do offer a solution that has fairly wide compatibility. But they are a last resort in my opinion. And there are other methods, particularly in Firefox, which I think make sense to pursue.

FWIW, someone needs to make a giant list of all the TLS solutions that have been proposed. It would make a good introduction to the topic for novices. I might do this sometime.
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: Security Implications of MitM-based Local Proxies

Post by phelix »

biolizard89 wrote:Ryan will probably like to chime in with a long rant about how horrible intercepting proxies are... but the brief summary is that intercepting proxies offload the entirety of the TLS protocol implementation to the proxy. TLS is a tricky protocol to get right (LibreSSL is trying to improve this, but they're not there yet). The only significant cases of actually getting it right consistently are the major web browsers. If there is a bug in the TLS implementation of the proxy, it could be used to allow bad certificates to validate successfully.
I would trust, e.g., the latest Python TLS implementation as much as Firefox's any day. This is just a gut feeling, though.
SuperFish is an extreme example, and presumably we would be smarter than SuperFish. However, a lot of products have used intercepting proxies, and many of them have been found to have problems. This is why many security experts (including an expert from CloudFlare and an expert from EFF) have said that intercepting proxies should not be used, full stop.
Hmm, one may need to differentiate based on what the proxies are actually doing.
Of course, we're in a different situation from SuperFish. SuperFish's functionality could easily have been implemented in a browser extension (or even a Greasemonkey script). In contrast, changing how TLS validation works is something that is not as easily doable in a browser extension. This is why, for example, Convergence uses an intercepting proxy -- at the time, the only other way (hooking the HTTP response event) would have leaked cookies.

However, things are more complicated now. There are ways of doing this in Firefox that don't involve intercepting proxies. Firefox will deprecate those APIs in 2016 or 2017, at which point we'll have to adapt. There are a lot of other methods that have been suggested to work with other browsers, which have different tradeoffs compared to intercepting proxies (e.g. some of them trust that no ICANN CA will issue a cert for .bit, but they let the browser do TLS as normal).

I am not fully opposed to intercepting proxies, since they do offer a solution that has fairly wide compatibility. But they are a last resort in my opinion. And there are other methods, particularly in Firefox, which I think make sense to pursue.
The disadvantages still seem somewhat vague to me. It may not be the best solution, but there are some clear advantages:

pro
* easy implementation
* independent from browser changes
* works on all browsers

con
* extra implementation and user effort needed for the user to see details of the actual website certificate
* ?

FWIW, someone needs to make a giant list of all the TLS solutions that have been proposed. It would make a good introduction to the topic for novices. I might do this sometime.
+1
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: Security Implications of MitM-based Local Proxies

Post by phelix »

For the record: previous discussion here: https://forum.namecoin.info/viewtopic.php?f=30&t=2221
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: Security Implications of MitM-based Local Proxies

Post by biolizard89 »

Some background reading.

From the EFF: https://www.eff.org/deeplinks/2015/02/d ... -encrypted
But the most important lesson is for software vendors, who should learn that attempting to intercept their customers’ encrypted HTTPS traffic will only put their customers’ security at risk. Certificate validation is a very complicated and tricky process which has taken decades of careful engineering work by browser developers. Taking certificate validation outside of the browser and attempting to design any piece of cryptographic software from scratch without painstaking security audits is a recipe for disaster.
From CloudFlare: https://blog.filippo.io/komodia-superfi ... is-broken/
First, don't make intercepting proxies. They are impossible to write correctly, and by their very nature lower the security of the whole Internet.
Also, I don't trust Python's TLS implementation. According to https://lwn.net/Articles/582065/ they haven't even been able to get CA-based checking enabled by default yet, which doesn't speak well of how much they prioritize security. I don't have any empirical data on how good Python's TLS is, but I wouldn't want to rely on Python's TLS without a thorough audit done of all the code. Firefox and Chromium have *way* more reviewers looking at their TLS code than Python does.

It would be great to get Ryan's input here too.

(For the record: all solutions to Namecoin TLS are going to suck in some way. In cases where an intercepting proxy is the *only* way to do it, or other ways are even worse, e.g. leaking cookies, then I accept that it's a necessary evil. I am not convinced that that's the case.)

EDIT: Some Googling shows that Python removed SSLv2 in 2014. Firefox did so in 2005. Python is 9 years behind Firefox on properly implementing TLS. See https://bugs.python.org/issue20207 and https://blog.hboeck.de/archives/184-Fir ... pport.html .
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: Security Implications of MitM-based Local Proxies

Post by phelix »

biolizard89 wrote:Some background reading.

From the EFF: https://www.eff.org/deeplinks/2015/02/d ... -encrypted
But the most important lesson is for software vendors, who should learn that attempting to intercept their customers’ encrypted HTTPS traffic will only put their customers’ security at risk. Certificate validation is a very complicated and tricky process which has taken decades of careful engineering work by browser developers. Taking certificate validation outside of the browser and attempting to design any piece of cryptographic software from scratch without painstaking security audits is a recipe for disaster.
From CloudFlare: https://blog.filippo.io/komodia-superfi ... is-broken/
First, don't make intercepting proxies. They are impossible to write correctly, and by their very nature lower the security of the whole Internet.
It seems both of these are somewhat out of context, as they are talking about tricking users into a MitM. Also, with these implementations we are not actually writing the juicy parts ourselves. Still, it is good to keep this in mind.
Also, I don't trust Python's TLS implementation. According to https://lwn.net/Articles/582065/ they haven't even been able to get CA-based checking enabled by default yet, which doesn't speak well of how much they prioritize security. I don't have any empirical data on how good Python's TLS is, but I wouldn't want to rely on Python's TLS without a thorough audit done of all the code. Firefox and Chromium have *way* more reviewers looking at their TLS code than Python does.

It would be great to get Ryan's input here too.
The article is outdated. They have enabled it starting with 2.7.9 (with some improvements in 2.7.10): https://www.python.org/dev/peps/pep-0476/
(For the record: all solutions to Namecoin TLS are going to suck in some way.
I agree it is not pretty :)
In cases where an intercepting proxy is the *only* way to do it, or other ways are even worse, e.g. leaking cookies, then I accept that it's a necessary evil. I am not convinced that that's the case.)
Then bring it on. It would be nice to have something. As long as we don't have something else, it is OK to use an intercepting proxy. The alternative is to go plain old cleartext HTTP.
EDIT: Some Googling shows that Python removed SSLv2 in 2014. Firefox did so in 2005. Python is 9 years behind Firefox on properly implementing TLS. See https://bugs.python.org/issue20207 and https://blog.hboeck.de/archives/184-Fir ... pport.html .
Maybe they did a better implementation since they started from scratch? :mrgreen:


pro
* easy implementation
* independent from browser changes
* works on all browsers
* good error pages (compared to DNS-only methods)
* better than going naked

con
* additional elements that might break
* library TLS implementation might be less secure than browser implementation

* extra implementation and user effort needed for the user to see details of the actual website certificate (it should be possible to pass these through)

edit: note: it would be good to route only .bit requests through the proxy (e.g. via a PAC file)
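For reference, such a PAC file is only a few lines of JavaScript; a small Python sketch that writes one out (the proxy address 127.0.0.1:8080 is an assumption, adjust to wherever the proxy actually listens):

```python
# Sketch: generate a PAC file that sends only .bit hosts through the
# local intercepting proxy and lets everything else go direct.
# The proxy address below is an assumption, not a fixed convention.
PAC = """function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".bit"))
        return "PROXY 127.0.0.1:8080";
    return "DIRECT";
}
"""

def write_pac(path):
    """Write the PAC file so a browser can be pointed at it."""
    with open(path, "w") as f:
        f.write(PAC)
```

The browser is then configured to use this file as its automatic proxy configuration, so non-.bit traffic never touches the proxy at all.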
nx.bit - some namecoin stats
nf.bit - shortcut to this forum

biolizard89
Posts: 2001
Joined: Tue Jun 05, 2012 6:25 am
os: linux

Re: Security Implications of MitM-based Local Proxies

Post by biolizard89 »

phelix wrote:It seems these both are somewhat out of context as they are talking about tricking users into a mitm. Also with these implementations we are not actually writing the juicy parts ourselves. Still, it is good to keep this in mind.
I don't see where it says anything about tricking users. Some of the intercepting proxies they refer to are part of antivirus packages, which presumably are installed with the users' knowledge. The point they are making is that it is difficult to code securely and introduces risks for the user; they're not primarily arguing that transparency would fix it. Yes, a proper implementation would reuse critical code from existing projects that have received extensive code review in the context of end-user browsing. I am not aware of any such existing projects. (Convergence was the closest, and it doesn't work anymore.)
phelix wrote:
Also, I don't trust Python's TLS implementation. According to https://lwn.net/Articles/582065/ they haven't even been able to get CA-based checking enabled by default yet, which doesn't speak well of how much they prioritize security. I don't have any empirical data on how good Python's TLS is, but I wouldn't want to rely on Python's TLS without a thorough audit done of all the code. Firefox and Chromium have *way* more reviewers looking at their TLS code than Python does.

It would be great to get Ryan's input here too.
The article is outdated. They have done it starting with 2.7.9 (and some improvements in 2.7.10) https://www.python.org/dev/peps/pep-0476/
Um. Did you actually read the link you posted? As of 2014 (when your link was written), the HTTPS implementation wasn't doing CA checking by default. Web browsers have been doing CA checking since... the 90's? That confirms my point that Python is years (or, in this case, decades) behind web browsers on TLS security. I believe, but have not checked, that the ssl module in 2.7.9 doesn't do CA checks by default either (you need to create a context if you want CA checks).
phelix wrote:
(For the record: all solutions to Namecoin TLS are going to suck in some way.
I agree it is not pretty :)
I'm glad we're in agreement on this.
phelix wrote:
In cases where an intercepting proxy is the *only* way to do it, or other ways are even worse, e.g. leaking cookies, then I accept that it's a necessary evil. I am not convinced that that's the case.)
Then bring it on. It would be nice to have something. As long as we don't have something else, it is OK to use an intercepting proxy. The alternative is to go plain old cleartext HTTP.
Hugo, Ryan, Rene, and I have all been working on this to some extent. Doing this "well enough" is not easy. Doing this "right" is probably impossible with current browsers. I'm hoping to have something usable within a month or two, at least for a subset of browsers (starting with Chrome on Windows, because I got tired of dealing with Firefox's stupidity... I'll come back to Firefox later).
phelix wrote:
EDIT: Some Googling shows that Python removed SSLv2 in 2014. Firefox did so in 2005. Python is 9 years behind Firefox on properly implementing TLS. See https://bugs.python.org/issue20207 and https://blog.hboeck.de/archives/184-Fir ... pport.html .
Maybe they did a better implementation since they started from scratch? :mrgreen:
I don't understand what you're arguing; that makes no sense to me.
phelix wrote:* extra implementation and user effort necessary for user to see details about actual website certificate should be possible to pass through

edit: note: it would be good to only route .bit requests through the proxy (e.g. via pac file)
Well, I certainly hope that you're only intercepting .bit requests. That was one of the first things I implemented in both Convergence and mitmproxy. It would also be wise to put a name constraint for .bit on the CA certificate, so that (assuming that name constraints are honored) a compromise of the intercepting proxy doesn't expose non-.bit traffic.
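For reference, such a constraint could be expressed in an openssl.cnf extension section roughly like the following (the section name and key-usage details are assumptions, not a vetted configuration):

```
# Hypothetical extension section for the proxy's generated root CA.
[ v3_bit_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
# Restrict the CA to .bit names; this only helps in clients that
# honor the (critical) name constraints extension.
nameConstraints = critical, permitted;DNS:.bit
```

A root certificate generated with these extensions (e.g. via `openssl req -x509 -config openssl.cnf -extensions v3_bit_ca ...`) would then, in clients that honor name constraints, be unable to vouch for non-.bit names even if its key were stolen.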
Jeremy Rand, Lead Namecoin Application Engineer
NameID: id/jeremy
DyName: Dynamic DNS update client for .bit domains.

Donations: BTC 1EcUWRa9H6ZuWPkF3BDj6k4k1vCgv41ab8 ; NMC NFqbaS7ReiQ9MBmsowwcDSmp4iDznjmEh5

phelix
Posts: 1634
Joined: Thu Aug 18, 2011 6:59 am

Re: Security Implications of MitM-based Local Proxies

Post by phelix »

biolizard89 wrote:
phelix wrote:
Also, I don't trust Python's TLS implementation. According to https://lwn.net/Articles/582065/ they haven't even been able to get CA-based checking enabled by default yet, which doesn't speak well of how much they prioritize security. I don't have any empirical data on how good Python's TLS is, but I wouldn't want to rely on Python's TLS without a thorough audit done of all the code. Firefox and Chromium have *way* more reviewers looking at their TLS code than Python does.

It would be great to get Ryan's input here too.
The article is outdated. They have done it starting with 2.7.9 (and some improvements in 2.7.10) https://www.python.org/dev/peps/pep-0476/
Um. Did you actually read the link you posted? As of 2014 (when your link was written), the HTTPS implementation wasn't doing CA checking by default.
Um. Is that not the nature of a PEP?
Web browsers have been doing CA checking since... the 90's? That confirms my point that Python is years (or, in this case, decades) behind web browsers on TLS security. I believe, but have not checked, that the ssl module in 2.7.9 doesn't do CA checks by default either (you need to create a context if you want CA checks).
I'm quite certain it did in 2.7.9, and I will show you that it does in 2.7.10:

Code:

>>> import urllib2
>>> urllib2.urlopen("https://dot-bit.org")
[...]
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
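The defaults can also be inspected directly; since PEP 476, `ssl.create_default_context()` requires certificate verification and hostname checking out of the box:

```python
import ssl

# The stdlib default context (PEP 476) verifies the peer certificate
# against the system trust store and checks the hostname by default.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```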
phelix wrote:
(For the record: all solutions to Namecoin TLS are going to suck in some way.
I agree it is not pretty :)
I'm glad we're in agreement on this.
phelix wrote:
In cases where an intercepting proxy is the *only* way to do it, or other ways are even worse, e.g. leaking cookies, then I accept that it's a necessary evil. I am not convinced that that's the case.)
Then bring it on. It would be nice to have something. As long as we don't have something else, it is OK to use an intercepting proxy. The alternative is to go plain old cleartext HTTP.
Hugo, Ryan, Rene, and I have all been working on this to some extent. Doing this "well enough" is not easy. Doing this "right" is probably impossible with current browsers. I'm hoping to have something usable within a month or two, at least for a subset of browsers (starting with Chrome on Windows, because I got tired of dealing with Firefox's stupidity... I'll come back to Firefox later).
We have been waiting for the second-best solution for half a year or so... maybe it's time to think about the third-best solution.
phelix wrote:
EDIT: Some Googling shows that Python removed SSLv2 in 2014. Firefox did so in 2005. Python is 9 years behind Firefox on properly implementing TLS. See https://bugs.python.org/issue20207 and https://blog.hboeck.de/archives/184-Fir ... pport.html .
Maybe they did a better implementation since they started from scratch? :mrgreen:
I don't understand what you're arguing; that makes no sense to me.
In evaluating the quality of an implementation, age is not the only parameter. Often newer implementations can benefit from previous mistakes and arrive at conceptually superior solutions.
phelix wrote:* extra implementation and user effort necessary for user to see details about actual website certificate should be possible to pass through

edit: note: it would be good to only route .bit requests through the proxy (e.g. via pac file)
Well, I certainly hope that you're only intercepting .bit requests. That was one of the first things I implemented in both Convergence and mitmproxy. It would also be wise to put a name constraint for .bit on the CA certificate, so that (assuming that name constraints are honored) a compromise of the intercepting proxy doesn't expose non-.bit traffic.
That is a good hint, thanks.
nx.bit - some namecoin stats
nf.bit - shortcut to this forum
