I think it's amazing the degree to which Obama has managed to inspire a new kind of patriotism among American citizens. For too long, American patriotism has been about how the US is cool just for being there. But really, what has the country done lately that Americans can be proud of? In the last few years a lot of people have been waking up to this, and attitudes are starting to change... slowly... and I think Obama will be the catalyst that causes a new attitude of doing something about it to spread like wildfire.
People who criticize artists for speaking out about issues should love this video. Personally, I love the idea that there are people willing to use their celebrity to educate people and influence important issues. But for those who don't like being told what celebrities think you should do, here is what they pledge to do themselves, along with a challenge to find your own.
For the other geeks out there, this seems like a great start. Are there other service projects geeks can get involved in?
Tuesday, January 20, 2009
Sunday, January 18, 2009
Throwing Hack Weight
The song stuck in my head today is Tournament of Hearts by The Weakerthans (you can sample it at Amazon). Aside from it being a solid, upbeat alt-rock track, I just love the idea of using curling as the metaphor in a love song. It's pretty uniquely Canadian.
Saturday, January 17, 2009
Dumping Old Documentation
I'm one of those people who hasn't been able to completely give up paper. Particularly when I'm dealing with reference material, I like having a book I can flip through, shove bookmarks in, or hold over my head while I lie back on the couch and read. It's hard to do any of those things with a laptop and a PDF. As a result, I've got a shelf behind me that's a veritable rainbow of O'Reilly titles and other technical references. But here's my problem... what to do with outdated editions?
I think I've got three or four copies of DNS and BIND, of varying editions, multiple versions of the bat book, and some extra C and Perl references. I'd like to try to find a way to reuse these before just dumping them in the weekly recycling. Surely someone out there might find these older editions useful for something.
Friday, January 16, 2009
Slouching Toward The Cloud
In wandering the Ether this afternoon I rediscovered a friend's blog and his latest post, Cloud computing is a sea change. Mark Mayo has a lot to say about how cloud computing is going to change the career path of systems administrators in every corner of the industry, and he references Hosting Apocalypse by Trevor Orsztynowicz, another excellent article on the subject.
There's definitely a change coming. Cloud computing has the potential to put us all out of work, or at least severely hamper our employability, if we don't keep on top of the changes and keep our skills up to date... but that's been true of every shift in the industry since it came into being. Every time a new technology or shift in business practices comes along, individual sysadmins either adapt or shrink the pool of employers they can work for. The difference with cloud computing is that it promises a lot of change all at once, where in the past we've mostly dealt with at most two or three new things to think about in a year.
There are, however, some potential drags on cloud computing's takeover that I think will slow the changes Mark and Trevor warn of.
A few times now we've been warned that the desktop computer is doomed, and that it's all going to be replaced by thin clients like Sun Microsystems' Sun Ray or, more recently, by mostly-web-based Application Service Providers like Google Apps. Despite years of availability of thin clients, and cheap, easy access to more recent offerings like Google's, this hasn't happened. Individual users, and even some small organizations, may have embraced web apps as free alternatives to expensive packages like Microsoft Office, but I'm not aware of any significant corporation that has gone down this road. The main reason? I think it has to do with control of data. Most companies just don't want to hand all their data over to someone else. In many cases that simple reluctance becomes a statutory hurdle, particularly when you're talking about customer data and there's a national border between the user and the data store. I think the same reasoning will come into play with even stronger force when companies start considering putting their entire data centre in the hands of another company. The difference between co-location and cloud computing in who has access to your data is significant.
Additionally, I think the geography problem will keep cloud computing out of some sectors entirely. As I noted in the comments on Mark's article, the current architecture of choice for cloud computing products is the monolithic data centre. Having only a handful of large data centres around the world will keep the cloud from consuming CDNs like Akamai, and will keep it out of sectors where wide topological or geographic distribution is required and a large number of small data centres are used, such as the root or TLD DNS infrastructures.
Mark correctly points out that the geography problem will be solved in some ways as cloud computing becomes more ubiquitous and the big players grow even larger, and in others as the cloud providers become the CDNs. But until a cloud provider can guarantee my data won't leave my local legal jurisdiction I'd never recommend the service to whoever my employer happens to be... and even once they can I'd still recommend the lawyers have a good hard look at the liability of handing over all the company's data to another party.
Mark's core point remains valid, however: change is coming. Whether it's fast and sudden or slow and gradual, sysadmins had better be prepared to learn to deal with cloud computing APIs and be comfortable working with virtual machines and networks, or they'll be left behind.
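To make that concrete, here is a minimal sketch of what "dealing with cloud computing APIs" looks like in practice. It assumes the boto3 library, AWS credentials already configured, and a placeholder AMI ID; the point is simply that provisioning a server becomes a function call rather than a trip to the machine room.

```python
# Minimal sketch (assumes boto3, configured AWS credentials, and a
# placeholder AMI ID): provisioning and inspecting virtual machines
# through an API instead of racking hardware.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch a single small VM. The AMI ID below is a placeholder; substitute
# a real image ID for your region.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)

# List every instance currently known to the account in this region.
for instance in ec2.instances.all():
    print(instance.id, instance.state["Name"], instance.instance_type)
```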
Labels: cloud computing, internet, systems administration, technology
Tuesday, January 13, 2009
Not Your Best Interview, Mr. Tovar
I've been thinking for some time about starting to regularly post thoughts about random things somewhere visible — blogging, if you will. This afternoon I was thinking about it somewhat more earnestly, trying to choose between several topics for a first post, when a friend pointed out this interview. Seeing as the subject (the DNS) is what I spend most of my days doing, it seemed like an excellent place to start.
First let me say that I disagree with Tom Tovar's basic position on open source DNS software. As a general class of software, there is absolutely nothing wrong with using it for mission-critical applications — not even for critical infrastructure. The various arguments about "security through obscurity" vs. open analysis and vetting have been made so many times that I won't bother with the details here. Suffice it to say that, all other things being equal, I'd be far more inclined to trust the security of my DNS to open source software than to closed, proprietary commercial software. Not that his position is that big a surprise. After all, he is the CEO of a company that sells some extremely expensive DNS software.
Mr. Tovar's early statements about DNS security, Kaminsky, and the current concerns of the DNS industry as a whole are hard to argue with. As one gets further into the interview, however, some serious problems with what he has to say crop up, including one glaring error in a basic statement of fact. I'll address the flawed points in the order they appear as I work my way through the article.
I'll approach one particular point here as a pair of answers, since there's tightly related information in both.
GCN: The fix that was issued for this vulnerability has been acknowledged as a stopgap. What are its strengths and weaknesses?
TOVAR: Even to say that it is pretty good is a scary proposition. There was a Russian security researcher who published an article within 24 hours of the release of the [User Datagram Protocol] source port randomization patch that was able to crack the fix in under 10 hours using two laptops. The strength of the patch is that it adds a probabilistic hurdle to an attack. The downside is it is a probabilistic defense and therefore a patient hacker with two laptops or a determined hacker with a data center can eventually overcome that defense. The danger of referring to it as a fix is that it allows administrators and owners of major networks to have a false sense of security.
GCN: Are there other problems in DNS that are as serious as this vulnerability?
TOVAR: I think there are a lot of others that are just as bad or worse. One of the challenges is that there is no notification mechanism in most DNS solutions, no gauntlet that the attacker has to run so that the administrator can see that some malicious code or individual is trying to access the server in an inappropriate way. If UDP source port randomization were overcome and the network owner or operator were running an open-source server, there would be no way to know that was happening. This has been a wake-up call for any network that is relying on open source for this function.
Mr. Tovar's last point in answering the first question is right on the money: the patches released for dozens of DNS implementations this summer do not constitute "a fix." Collectively they are an improvement in the use of the protocol that makes a security problem tougher for the bad guys to crack, but does not make that problem go away.
As for the rest of what he has to say here, on the surface it seems perfectly reasonable, if you make a couple of assumptions.
First, you have to assume that commercial DNS software, or at least his commercial DNS software, has some special sauce which prevents (or at least identifies) Kaminsky-style attacks against a DNS server, which cannot be available to operators of open source DNS software. It may be reasonable to take the position that it is beneficial for DNS software to notice and identify when it is under attack; however, it is not reasonable to suggest that this is the only way to detect attacks against the software. When the details of Kaminsky's exploit were eventually leaked, almost immediately tools began to spring up to help operators detect poisoning attempts, such as the Snort rule discussed in this Sourcefire Vulnerability Research Team white paper [PDF].
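For illustration, here is a minimal sketch of the kind of heuristic those detection tools rely on, assuming the scapy library and enough privilege to sniff the resolver's traffic. It simply flags a burst of DNS responses for the same query name inside a short window, which is the visible signature of an off-path attacker racing the legitimate answer; the threshold and window are illustrative numbers of my own, not values from the Sourcefire paper.

```python
# Sketch of a poisoning-attempt detector (assumes scapy; thresholds are
# illustrative, not taken from the Sourcefire rule): a flood of DNS
# responses for one query name in a short window suggests an off-path
# attacker racing the real answer.
import time
from collections import defaultdict, deque

from scapy.all import DNS, sniff

WINDOW = 2.0      # seconds to look back over
THRESHOLD = 50    # responses for a single name inside WINDOW

recent = defaultdict(deque)   # query name -> arrival times of responses

def watch(pkt):
    # Only consider DNS responses that carry a question section.
    if not pkt.haslayer(DNS) or pkt[DNS].qr != 1 or pkt[DNS].qdcount == 0:
        return
    qname = pkt[DNS].qd.qname.decode(errors="replace")
    now = time.time()
    times = recent[qname]
    times.append(now)
    while times and now - times[0] > WINDOW:
        times.popleft()
    if len(times) > THRESHOLD:
        print(f"possible poisoning attempt: {len(times)} responses for "
              f"{qname} in the last {WINDOW:.0f}s")
        times.clear()

# Watch DNS traffic to and from the resolver (requires root privileges).
sniff(filter="udp port 53", prn=watch, store=False)
```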
The second assumption you must make to consider this point reasonable is that a network operator is going to fail to notice a concerted effort to attack one or more of her DNS servers. Overcoming source port randomization, the "fix" under discussion, requires such a concerted effort — so much bandwidth consumed for so long a period — that it is generally considered unlikely that a competent network operator is going to fail to notice a cache poisoning attempt in progress when it is directed at a patched server. It's a bit like suggesting that it takes 10 hours of someone banging on your front door with a sledgehammer to break through it, and that it's unlikely you'll notice this going on.
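To put rough numbers on that concerted effort (these are back-of-envelope figures of my own, not taken from the interview): before the patch, a blind attacker only had to guess a 16-bit transaction ID; with source port randomization, the search space grows by roughly the number of usable ephemeral ports.

```python
# Back-of-envelope illustration (my own rough numbers, not from the
# interview): how much harder source port randomization makes a blind
# Kaminsky-style spoofing attack.
TXID_SPACE = 2 ** 16      # 16-bit DNS transaction ID
PORT_SPACE = 64_000       # roughly the usable ephemeral port range
SPOOFED_PPS = 50_000      # assumed spoofed responses per second

def expected_seconds(search_space: int, pps: int) -> float:
    """Expected time to stumble on the right (txid, port) pair by brute force."""
    return search_space / pps

before = expected_seconds(TXID_SPACE, SPOOFED_PPS)
after = expected_seconds(TXID_SPACE * PORT_SPACE, SPOOFED_PPS)

print(f"txid only:       ~{before:.1f} seconds")        # roughly a second
print(f"txid + src port: ~{after / 3600:.0f} hours")    # a sustained, very noisy flood
```

Sustaining tens of thousands of spoofed responses per second for the better part of a day is exactly the sledgehammer-on-the-front-door scenario described above.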
That visibility is the only reason informed DNS operators will let anyone come anywhere near calling the patch a "fix" of some kind. In fact, the only currently available fix for this vulnerability in the protocol is to secure the protocol itself, which is where DNSSEC comes in.
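As a quick illustration of what that looks like in practice (a sketch assuming the dnspython library and a validating resolver; the resolver address and zone below are just convenient examples): a resolver that has validated the answer under DNSSEC sets the AD (authenticated data) flag, a guarantee no amount of port randomization can provide.

```python
# Sketch (assumes dnspython and a validating resolver): query with the DO
# bit set and check whether the resolver reports the answer as
# authenticated (AD flag), i.e. validated under DNSSEC. The resolver
# address and zone below are just example values.
import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"   # any DNSSEC-validating resolver will do
NAME = "ietf.org"      # a signed zone, used here as an example

query = dns.message.make_query(NAME, "A", want_dnssec=True)
query.flags |= dns.flags.AD   # ask the resolver to report validation status

response = dns.query.udp(query, RESOLVER, timeout=5)

if response.flags & dns.flags.AD:
    print(f"{NAME}: answer validated by the resolver (AD flag set)")
else:
    print(f"{NAME}: answer not validated (unsigned zone or non-validating resolver)")
```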
GCN: Is open-source DNS software inherently less secure than proprietary software?
TOVAR: The challenge of an open-source solution is that you cannot put anything other than a probabilistic defense mechanism in open source. If you put deterministic protections in, you are going to make them widely available because it is open source, so you essentially give the hacker a road map on how to obviate or avoid those layers of protection. The whole point of open source is that it is open and its code is widely available. It offers the promise of ease of deployment, but it is like having a complex lock system on your house and then handing out the keys.
There actually is a widely available deterministic defense mechanism, implemented in open source as well as commercial software. It's known as DNSSEC, and it comes up later in the interview. Putting that aside, though, I'd be curious to know what other deterministic defense mechanisms Tovar is implying are available in commercial software but not in open source software. Obviously he doesn't say; he can't. His own point addresses why non-revealable deterministic countermeasures are insufficient for securing DNS software: should they be revealed (and let's face it, corporate espionage happens), they are immediately invalidated.
GCN: BIND, which is the most widely used DNS server, is open source. How safe are the latest versions of it?
TOVAR: For a lot of environments, it is perfectly suitable. But in any mission-critical network in the government sector, any financial institution, anything that has the specter of identity theft or impact on national security, I think using open source is just folly.
He is, of course, welcome to this opinion. I do not share this opinion, and I would bet real money that neither do a majority of the other operators who manage critical DNS infrastructure. Refuting the basis for this opinion is what this post is all about, so I'll move on.
GCN: Why is BIND so widely used if it should not be used in critical areas?
TOVAR: The Internet is still relatively young. We don’t think poorly of open source. In fact, many of our engineers wrote BIND versions 8 and 9, so we do believe there is a role for that software. But the proliferation of DNS has occurred in the background as the Internet has exploded. DNS commonly is thought of as just a translation protocol for browsing behavior, and that belies the complexity of the networks that DNS supports. Everything from e-mail to [voice over IP] to anything IP hits the DNS multiple times. Security applications access it, anti-spam applications access it, firewalls access it. When administrators are building out networks it is very easy to think of DNS as a background technology that you just throw in and then go on to think about the applications.
The implication of "the Internet is still relatively young" being... that as the Internet ages and we gain more experience, we should think more poorly of open source?
The rest of the statement is about how people deploy DNS, not about how the software is designed or works. Nothing here explains why he feels open source DNS software can't be used in mission-critical situations. All he explains is how he thinks it can be deployed in sloppy ways. Commercial software can be deployed just as sloppily as open source software. Nothing inherent in the software prevents or aids the lazy administrator in being lazy.
GCN: Why is DNSsec not more widely deployed?
TOVAR: Not all DNS implementations support it today, and getting vendors to upgrade their systems is critical. Just signing one side of the equation, the authoritative side, is a good step, but it’s only half the battle. You need to have both sides of the DNS traffic signed. And there is no centralized authority for .com. It is a widely distributed database and you have to have every DNS server speaking the same version of DNSsec. So the obstacle to DNSsec deployment is fairly huge. It is going to take government intervention and wide-scale industry acknowledgment that this is needed.
And here, at the end, is where I think Mr. Tovar shows his ignorance of DNS operations in general, and of the operation of critical infrastructure in particular. There is, in fact, a central authority for the .COM zone. That authority is Verisign; they manage and publish the .COM zone from one central database. Mr. Tovar is, perhaps, thinking of the .COM WHOIS database, which holds the details of who holds each registered domain and is indeed distributed among the .COM registrars.
This sort of fundamental misunderstanding of the DNS infrastructure is a good indicator of just how much weight should be given to Mr. Tovar's opinions on how that infrastructure is managed.
UPDATE: Florian Weimer raises an interesting question, one that I'm embarrassed to say didn't occur to me when I was originally writing this. Even though the question itself is facetious, the point is valid. The data sheets [PDF] for the main product Mr. Tovar's company markets say it only runs on open source operating systems. How can he claim, on the one hand, that his product is suitable for mission-critical applications but open source isn't, when on the other hand his software only runs in open source environments? It's quite baffling.
Labels: BIND, DNS, internet, interview, Nominum, open source, technology, Tom Tovar