I work in a "high-security" facility, and we have all manner of policies and rules to follow to be considered worthy of a simple Internet connection at our workstations. Our facility is just one building on a much larger campus, and all network traffic to the outside world goes through a single gateway. As part of their security apparatus, they have set up an HTTPS proxy at this gateway. The proxy allows them to break open every HTTPS connection with a deliberate man-in-the-middle (MITM) "attack" for the purpose of analyzing it to make sure it's "OK" by their definition. On top of this, they also subscribe to a content analysis/filter service that categorizes sites and pigeon-holes them under one or more labels; certain labeled categories are then denied wholesale.
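You can actually see this kind of interception for yourself: if the certificate a site presents was issued by your gateway's local CA instead of a public CA, your connection is being broken open. Here is a minimal Python sketch of the check (the host name is just an example):

```python
import socket
import ssl

host = "www.example.com"  # hypothetical target; any public HTTPS site works

ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as ssock:
        cert = ssock.getpeercert()
        # 'issuer' is a tuple of RDN tuples; flatten it into a dict.
        issuer = dict(rdn[0] for rdn in cert["issuer"])
        # Behind an intercepting proxy, this prints the corporate CA's
        # name rather than a public CA like DigiCert or Let's Encrypt.
        print("Certificate issued by:", issuer.get("organizationName"))
```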
Both of these mechanisms introduce some rather severe usability issues. In several previous posts on this blog, I have brought up the subject of usability. In my experience, usability is about the last thing that security folks consider. It really ought to be part of the availability leg of the security triad (confidentiality, integrity, availability). If something is a legitimate need but becomes too difficult or impossible to use, it has effectively become unavailable. Then users are tempted to seek workarounds or "other means" of getting to their resources. Security FAIL.
Let's look at some specific issues.
The MITM HTTPS proxy wreaks all kinds of havoc. It causes a noticeable slowdown in downloads, thanks to the extra time it takes to establish a second HTTPS connection with whomever you actually intended to reach. Remember, this one system is doing this for every connection on the entire campus, which comprises several tens of thousands of systems. I sometimes download software from svn or git servers; Wireshark is a good example. This is a fairly lengthy operation, much larger than a typical single web page, and it often breaks down in mid-download for no apparent reason. If I download it from outside the facility (e.g., at home), it works fine every time.
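If you want to put a number on the overhead, a rough sketch like the one below, run once from inside the facility and once from home, shows where the time goes. When the proxy is in the path, the handshake measurement includes its own upstream TLS setup to the real server:

```python
import socket
import ssl
import time

host = "www.example.com"  # hypothetical host

ctx = ssl.create_default_context()
t0 = time.perf_counter()
sock = socket.create_connection((host, 443))       # TCP three-way handshake
t1 = time.perf_counter()
ssock = ctx.wrap_socket(sock, server_hostname=host)  # TLS handshake
t2 = time.perf_counter()
ssock.close()

print(f"TCP connect:   {(t1 - t0) * 1000:.1f} ms")
print(f"TLS handshake: {(t2 - t1) * 1000:.1f} ms")
```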
The content analysis/filter frequently blocks access to legitimate resources for the wrong reasons. The facility subscribes to a third-party filter service. They just receive a regular filter definition update and install it; job done, as far as they're concerned. Unfortunately, the filter's granularity is quite coarse, and lots of sites get lumped into a general category of "bad" with no further distinction. A great example is a site being declared a "blogging site." That is way too broad. We are doing software engineering and solving a lot of technical problems involving Linux, C++, etc. Many times, a web search will turn up someone discussing the exact, specific issue that we are having, but... the discussion is on a "blog" site (<gasp>). So you end up emailing the URL to yourself at home and looking it up from there later. Progress is delayed. Part of our application uses OpenGL to do all kinds of fancy graphics work. A lot of discussion of those issues is found on game development forums, but they're all blocked because they are lumped into the "gaming site" category. Shame on you, programmer, for looking at gaming sites instead of working!
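To make the granularity problem concrete, here is a toy model of how this kind of label-based filtering behaves; all the domains, labels, and policy below are invented:

```python
# One label condemns every site in the category, regardless of content.
site_categories = {
    "dev-blog.example": {"blogging"},
    "gamedev-forum.example": {"gaming"},
}
denied_categories = {"blogging", "gaming"}

def is_denied(domain):
    """Deny if the site carries any denied label; content never enters into it."""
    return bool(site_categories.get(domain, set()) & denied_categories)

print(is_denied("dev-blog.example"))  # True: the label decides, not the content
```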
I have also encountered sites that are blocked for seemingly no logical reason. A good example is http://libssh.org. When I was denied access to it one day, I decided to challenge the block and request an exception. I explained what the site was, why I wanted access, and how it in no way qualified for the "known spammers" category. After escalating it to the "tier 2" guy, I was told that yes, the site itself was OK, but unfortunately the IP of its hosting service fell within a range of addresses that hosts known spammers, so the entire range was blocked wholesale. And since they just subscribe to this third-party list, and the list included the range, they could not or would not make any change for it. So once again, I had to download it at home later.
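The mechanics here are simple: one CIDR entry in the subscribed list takes out every address in the range, guilty or not. A sketch with made-up addresses (the range is the RFC 5737 documentation block, not any real hosting provider):

```python
import ipaddress

blocked_range = ipaddress.ip_network("203.0.113.0/24")  # hypothetical list entry
site_ip = ipaddress.ip_address("203.0.113.42")          # stand-in for the site's host

if site_ip in blocked_range:
    # One innocent site goes down with all of its network neighbors.
    print(f"Blocked, along with {blocked_range.num_addresses - 1} neighbors")
```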
So how could this situation be improved? If I were in charge of the whole thing, I would appoint a few folks to be the local point of good judgement and discretion. They would be tasked with evaluating all requests for exceptions and making local tweaks to the content filter definitions. A subscription-based service is fine as a starting point, but if you just accept it as-is, you will end up hurting usability for your particular, local users. And that reduces availability, which can sometimes hurt security more than it helps.
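Mechanically, this doesn't have to be complicated. A minimal sketch of the idea, with hypothetical file names: the vendor feed stays the source of truth, and a locally reviewed exception list is subtracted before anything is enforced:

```python
def load_list(path):
    """One domain per line; blank lines and '#' comment lines ignored."""
    with open(path) as f:
        entries = (line.strip() for line in f)
        return {e for e in entries if e and not e.startswith("#")}

vendor_blocklist = load_list("vendor_blocklist.txt")   # the subscription feed
local_exceptions = load_list("local_exceptions.txt")   # maintained by local reviewers

effective_blocklist = vendor_blocklist - local_exceptions

def is_blocked(domain):
    return domain in effective_blocklist
```

The point is that the exception list lives with people close enough to the users to exercise judgement, instead of being locked inside the vendor's definitions.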
This post maps to CompTIA SY0-301 exam objectives 1.1 and 2.8.