The recent breach of payment card data from Target stores in the US piqued my curiosity about exactly how credit and debit card security works. Many of us use these cards nearly every day, if not several times a day, to pay for goods and services both in person and online. Yet how many people understand the security of these instruments that grant access to our financial resources?
There are two basic types of payment cards: credit and debit. Both serve as your identification and means of authentication for authorizing a transaction between you and a merchant. The cards conform to the ISO/IEC 7813 standard. For identification, they carry your name and account number, both printed on the card and encoded on the magnetic stripe. Several mechanisms provide authentication; which ones get used depends on the type of card and how you are using it. You can use a card in person at a point-of-sale (POS) terminal, or you can make an online/telephone/mail-order (i.e. "card not present") transaction.
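Incidentally, the account number itself carries a built-in integrity check: its final digit is a Luhn check digit (defined alongside the card numbering scheme in ISO/IEC 7812), which lets a terminal or web form catch most typos before a transaction is even attempted. A minimal sketch in Python, using a well-known test card number:

```python
# Validate a card number's Luhn check digit: from the right, double
# every second digit (subtracting 9 if the result exceeds 9) and
# require the total to be divisible by 10.
def luhn_valid(card_number: str) -> bool:
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # classic Visa test number -> True
```

Note this is an error-detection code, not a security feature; it stops typos, not fraud.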
If you are physically at the merchant's checkout register, the card is "what you have". If it is a debit card, then the PIN is "what you know" and you have two-factor authentication. If it is a credit card, the merchant should verify your identity against a government-issued photo ID (e.g. driver's license) and also compare the signature on your credit card to the one on your receipt or the digital signing pad. I think of this as a crude quasi-biometric that weakly qualifies as "what you are". (That's probably another post to rip that idea apart.) So in practice, credit cards are effectively one-factor authentication.
Interestingly, most credit cards have a PIN too, but it is only used for ATM cash advances. Since most people use a debit card at the ATM, the credit card PIN goes largely unused. You'd think it would be required at POS terminals for two-factor authentication, yes? Credit cards have been around far longer than debit cards; I suspect the PIN came later, and now it's too expensive to mandate an update to the entire infrastructure (think of the IPv6 upgrade). Also, federal law limits your fraud liability to $50, and the big companies (Visa, etc.) waive even that down to $0. I'm guessing that with their massive resources, it's cheaper to take the chance and maintain good customer relations, i.e. a calculated risk.
If you are making a "card not present" purchase, the merchant should require you to provide the CVV2 code (the 3- or 4-digit code printed on the card). This code can be used in the transaction with the payment processor to verify that the card number given is legitimate. It stands in for the CVV1 code, which is encoded only on the magnetic stripe and therefore unavailable in this case. Since all the data provided is on the card, this is one-factor authentication. Some merchants still do not require the CVV2 code; I can only assume they're passing the buck to the financial institution if the transaction turns out to be fraudulent.
No matter which type of card and transaction you're using, the steps are fundamentally the same. You provide the card (or its data) to the merchant. They communicate this data, along with the proposed amount of the transaction, to their payment processor for authorization. Most card-issuing institutions do not handle this directly but farm it out to third parties who have the ability to authorize the transaction between the merchant and your account. Many institutions, processors and merchants all have to work together seamlessly to provide you with a consistent purchasing mechanism. With regard to security, the Payment Card Industry (PCI) Security Standards Council was formed in 2006 to provide security requirements and best practices for all these participants. It publishes the Data Security Standard (version 3.0 as of Nov 2013), which provides a framework of security requirements, testing procedures and implementation guidance for all aspects of payment card processing.
We don't have any technical details from the Target breach (and we probably won't), but we can learn a few things from their public relations handling. They have a web page dedicated to the issue, including an FAQ. According to the FAQ, they found some type of malware on their point-of-sale systems. I take that to mean the actual swipe terminals or the interface between them and the payment processor. There was some initial confusion about "CVV information" being acquired. This is a terminology nightmare; read something like this Wikipedia entry and then tell me what they're talking about. I believe what was acquired was the CVV1 code, which is on the magnetic stripe. The CVV2 code is not on the stripe, only physically printed on the card, so there is no way to capture it electronically (unless you want to talk about cameras planted under the POS terminal). The CVV1 code cannot be used for fraudulent online purchases, but it can be used to reconstruct a clone of your card for fraudulent in-person purchases.
There was also concern over captured PIN data from debit cards. Target later confirmed that the encrypted PIN data was acquired as well. Your PIN gets encrypted right in the keypad terminal and has to be transmitted to the payment processor. PCI DSS requirement 3.2.3 says that PIN data (encrypted or not) should never be stored; in fact, there is an entire separate document all about PIN Security Requirements. However, the data has to reside in some memory buffer long enough to get transmitted from the POS terminal to the processor, and that's probably where it got captured. The data is encrypted with Triple-DES, which was the recommended "strong encryption" before AES was published. The Target FAQ says that the decryption key is only possessed by the payment processor and has never been on any Target system. Since Triple-DES is a symmetric cipher, that implies the key was loaded into the keypad hardware and shared only with the processor, never held by Target's own systems. So this is as well protected as anything encrypted with Triple-DES. Unless the NSA was behind this, I doubt the thieves can do much with that data. Nor do they need to, because you can opt to use a debit card but pay as a credit transaction, which does not require PIN entry.
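For the curious, the PIN is not encrypted as a bare number. Keypads typically assemble an ISO 9564 format-0 PIN block, the PIN XORed with part of the account number, and it is those 8 bytes that get Triple-DES-encrypted inside the keypad's secure hardware. Here is a sketch of just the block construction in Python; the example PIN and PAN are made up, and the encryption step is omitted:

```python
# Build an ISO 9564 format-0 ("ISO-0") clear PIN block -- the 8-byte
# value that the keypad would then Triple-DES-encrypt before transmission.
def iso0_pin_block(pin: str, pan: str) -> str:
    # PIN field: control nibble 0, PIN length, PIN digits, padded with F
    pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")
    # PAN field: 4 zero nibbles + rightmost 12 digits of the PAN,
    # excluding the trailing Luhn check digit
    pan_field = "0000" + pan[:-1][-12:]
    # XOR the two 64-bit fields nibble by nibble
    xored = int(pin_field, 16) ^ int(pan_field, 16)
    return f"{xored:016X}"

# Made-up example values, for illustration only
block = iso0_pin_block("1234", "4321987654321098")
print(block)  # this clear block is what gets 3DES-encrypted in hardware
```

Binding the PAN into the block means identical PINs on different cards encrypt to different ciphertexts, which frustrates simple pattern analysis of captured traffic.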
And now we're back to my earlier question about why credit cards do not require PIN use. How many more large-scale incidents like this will it take before we bite the bullet and introduce credit card PINs? Perhaps we should adopt the European-style "chip and PIN" smart-cards? These enhance security because the PIN is only used as an input to the chip on the card, unlocking the keys that then protect the interaction between card and terminal. That's a much smaller window of opportunity for PIN capture.
Definitely a story to keep following as it develops, given its large scale and high visibility. This is another great example of the trade-off between security and usability. Do we keep things simple as they are and avoid a potentially high-cost upgrade of the infrastructure? That will mean more breaches, more bad public relations, etc. Or do we pay a roughly one-time cost and go for a more secure solution? Also consider that some due care should be expected on the part of the customer. We should learn to keep an eye on our accounts; most institutions will offer to send email or text message alerts if anything suspicious happens. Since this is all about money, the answer comes down to money as well. Whichever path is deemed to have the lesser overall cost will win.
This post maps to CompTIA SY0-301 exam objectives 2.1, 2.5 and 5.2.
Sunday, December 29, 2013
Monday, December 23, 2013
Certificate Revocation Checks
Another usability issue that I have to deal with on a daily basis is the clash between enforced security policies and unavailable security services. Where I work, we use smart-cards that contain identity and email certificates. We are required to use them to access email in Microsoft Outlook. They serve as our two-factor authentication for accessing the account (you must have the card and know the PIN). This is a full-blown Outlook installation, so we have a central server that stores everyone's public keys so we can encrypt messages to one another and/or digitally sign them. The public keys are used by others to verify our digital signatures and/or encrypt messages to us.
The cards are issued by another centralized authority so if you look at your certificates, you can see the chain of signatures going up to the root certificate authority (CA). The root CA certificates and all the intermediate certificates are installed and updated on everyone's system as part of a common enterprise-managed infrastructure (Windows 7 Enterprise).
Certificates can become invalid through expiration or be revoked for a number of reasons (terminated employees, key compromise, etc). This applies both to individual certificates and to any certificate up the chain, potentially including the root CA. So any time you receive a signed email, the signature must be verified by making sure it traces back to a root CA and that nothing in the chain has been revoked. So where do we keep track of the certificate revocation list (CRL)? That's on yet another centralized repository somewhere, and we use a program called Axway Desktop Validator that queries this repository to see if any certificates in the chain being verified have been revoked, which would invalidate everything from the point of revocation on down.
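For readers who want to see what such a check looks like outside of Outlook, roughly the same chain-plus-revocation verification can be done with standard OpenSSL commands (the file names below are placeholders, not our actual infrastructure):

```shell
# Inspect a CRL: issuer, last/next update, and the revoked serial numbers
openssl crl -in crl.pem -noout -text

# Verify a certificate against the CA chain AND check it against the CRL;
# a revocation anywhere in the chain fails the check
openssl verify -crl_check -CAfile ca-chain.pem -CRLfile crl.pem user-cert.pem
```

Desktop Validator is doing essentially this on every signed message, plus fetching and caching the CRL over the network, which is where the latency comes from.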
Our managed system policy enforces a number of behaviors that cannot be changed by normal, non-administrative users. The use and configuration of Axway is one of these. It integrates with Outlook so you rarely, if ever, see any visible signs of its existence. However, we certainly feel its effects, because it rarely works well. The Axway program has a plethora of configuration options, many of them pertaining to timeouts, how long to keep revocation data, and what to do in various circumstances and error cases. We seem to hit these cases frequently.
Whether the program and the service are merely slow to respond or completely down, the local effect is that Outlook hangs until the operation completes or times out, which can be on the order of 30 seconds. This falls into what I call the "piss-off" zone: long enough to disrupt your mental workflow and initiate frustration, but too short for you to go do something else and come back later. And often, the emails that use digital signatures are "official" messages from various authorities (facilities, security, IT, administration) that you really should read, so there's no avoiding it.
This whole certificate business is all about protecting integrity and providing non-repudiation, but these policy-versus-implementation clashes result in reduced availability. In my experience, availability always gets the short end of the stick compared to every other security aspect. I suspect it is because most people think of availability as simply a system being online or not, regardless of performance and the cumulative impact it can have. The downstream consequences of poor performance compound across every user and every message.
Our technical support just shrugs their shoulders and says they can't do anything about the performance. They've deployed it as required. The security folks just shrug their shoulders and say "it's policy, you have to use it", whether it actually works well or not. So it becomes a classic case of the two entities that could do something about it just pointing the finger at each other and doing nothing. Neither one cares because they have each met their specific requirement.
A true improvement in the overall security of an organization would have someone appointed with both the responsibility and authority to look at these combined effects and do something about it. Perhaps the "Chief Usability Officer (CUO)". I'd like that job... :)
This post maps to CompTIA SY0-301 exam objectives 6.3 and 6.4.
Friday, December 13, 2013
Password Complexity vs Usability
Continuing the topic of security vs usability from my previous post, let's talk specifically about password complexity. Why is a more complex password better than a simple one? It is better because that makes it more difficult for an imposter to crack and thus falsely authenticate as you.
I have been on the Internet since about 1987 and have both watched and participated in its massive growth ever since commercial entities and individuals were allowed on in the early 90's. Almost every business and organization you might deal with today has an online presence involving a web site, and you can usually register an account there, which involves creating a password for authentication. In my experience, password policies vary quite widely from site to site. Some sites still don't seem to care and let you enter short, weak passwords (e.g. 1234). On the other end of the spectrum, some sites require a minimum length and a mix of character types, and even provide strength-indicator feedback as you type. Some sites are even adding two-factor authentication in the form of a fob/card or, more likely, an SMS text to your cell phone. Sadly, some sites don't seem to care until they get hacked and have to deal with the PR and legal ramifications of PII being publicly leaked.
One mitigating factor is how valuable you consider your account on a given site to be. If you have not registered any PII or payment information, then perhaps little to no damage can occur and a complex password is not necessarily warranted. But IMHO, that's just bad practice and leads to a less careful attitude.
As more and more people "got online", the number of bad apples got proportionally larger and they too have grown in skill and continue to develop their password cracking tools and techniques. Back in the day, it was just simple brute-force exhaustive search of the password space. Then they added dictionaries of common words, rainbow tables, etc. Powerful, parallel computing hardware is now readily available for a low cost to speed up whatever techniques are used. Even worse, when someone's database gets hacked and lots of passwords are captured, that just gets added to the front of the dictionary list to hit the low-hanging fruit first. So we're in an "arms race" with these folks and as long as we're using passwords, the only tactic is to stay ahead of them with length and complexity to make it more costly to crack.
And that brings us back to the security vs usability trade-off. The longer and more complicated your password is, the harder it is to remember and enter it quickly and correctly. A couple years ago, one of my online accounts got hacked (still don't know which one) and some (fortunately out of date) credit card information was acquired. They tried to use it for a purchase and the credit card company flagged the transaction as suspicious due to the outdated address information. This blocked the card and I had to get a new number, etc. I had registered that card on a number of sites for automated, periodic billing, but had neglected to keep a careful list. So I had to rediscover them the hard way as I got notification from everyone of failed payments over the next month's billing cycle.
This episode taught me two important lessons. First, document where you have left your credit card information. Second, use better passwords. I was guilty of using the same simple password on a LOT of sites. My first thought was to change them all to something better, but then I quickly realized that sharing the same one on every site was a serious vulnerability no matter how good it was. And nothing I could remember would ever be a really strong password. So I was caught between security and usability and knew it.
Then an idea formed. What if I accepted a small usability inconvenience that would let me maximize the complexity and strength of all my passwords? I decided to use a secure password database program. My original choice was Bruce Schneier's Password Safe. Later, I migrated to KeePass because I wanted cross-platform support for Linux, Windows, Android, iOS, etc. Regardless of the specific application, I only need to create and remember one good password to unlock the database. Then I can store every other password (and associated metadata) securely. This lets me generate maximum-length, random passwords using all acceptable characters, which is the best you can possibly do. Each site gets its own database record and a different password from every other site, so compromise of one site does not give the attacker any advantage on any other site. The usability drawback, however, is that I don't actually know what any of my passwords are. I must have the password database available to copy/paste the passwords into the web browser.
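A sketch of what that generator step looks like, using Python's standard `secrets` module (the 20-character default and the character set here are stand-ins for whatever a particular site accepts):

```python
import secrets
import string

# Generate a password drawn uniformly at random from the allowed
# characters -- the same idea as the KeePass/Password Safe generators.
def generate_password(length: int = 20,
                      alphabet: str = string.ascii_letters
                                      + string.digits
                                      + string.punctuation) -> str:
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Each call produces an independent random password, so every site can get its own and a breach of one reveals nothing about the others.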
And that leads to a second usability issue. Touch-screen devices like an iPad or an Android phone have virtual keyboards, which are not necessarily QWERTY layouts. It takes some doing to type the database's master password on such a device because these keyboards typically have separate pages for letters and numbers, and special characters are sometimes split across two different sections. So entry is not as efficient as on a real keyboard for something long and complex.
Even worse is any interface presented on a TV screen, like adding a DVR or streaming video player (e.g. Roku) to your WiFi network. The interface is typically a slow cursor (left/right/up/down) moved to each character using the remote control. I once had a 63-character random string for my WiFi password. After failing to enter it correctly 4 or 5 times in a row, I compromised between strength and my sanity with the remote control and changed it to something easier to enter. I would really like to see such devices add a web interface so you could configure them from your computer instead of on the TV screen, even if that requires an initial wired Ethernet connection.
I also tend to compromise slightly on passwords that must be entered frequently from the touch devices. It really is a pain in the butt to use the password safe on them because of the password entry and then the copy/paste mechanism.
I have been using this system for over 2 years now and it works quite well. At last count, I have 171 records in the database. I've gotten used to the database, and it's really not that bad once you become proficient with it. And I rest easier with regard to the security of my accounts.
That takes care of due diligence on my end, but I have encountered some examples of laxity on the other end. Some sites have no password policy (other than entering "something") or only accept a limited set of special characters (why?). Some have made odd choices or have technical issues. One brokerage site that I use does not allow passwords at all. They still use a numeric PIN that must be between 6 and 12 digits in length. That is an incredibly small character space. Of course, I use all 12 digits and select them randomly, but that is still only 10^12 possibilities. The only mitigating factor is that there is no user name: they use your account number as the ID, so a random hacker with no other information is left to guess two random numbers and see if they happen to match anyone's account. An example of a technical weakness is Vanguard, which a lot of employers use for their 401(k) benefits program. The Vanguard web site password policy is:
Your password must have 6–20 characters, with at least 2 letters and 2 numbers. Don't use spaces.

That's pretty good, but there is another interface to some of your data. I use the Quicken software to manage my personal finances, and Vanguard supports downloading transactions and account balances into Quicken. To do this, you give Quicken your web site username and password (which it also stores securely). For the longest time, I could not get the Quicken interface to work. Contact with both Quicken and Vanguard technical support ended with them pointing the finger at each other and shrugging it off. So I did a little experimenting on my own and discovered that the Quicken interface would work if I did not use any special characters in my password. It would have been nice if Quicken had documented that... but this points out another potential security weakness: if your site supports multiple interfaces, they should all share the same password policy. Otherwise, you have to compromise down to the common denominator (i.e. the weakest one).
This post maps to CompTIA SY0-301 exam objective 5.3.
Sunday, December 8, 2013
SSH key agents - security *and* convenience
Have you seen those commercials where folks imagine what it would be like to use nuts or bolts instead of nuts and bolts? I too prefer "and" most of the time. All too often in the world of computer security, we have to make a trade-off between security and usability, a direct analogy to the "or" case. We can make it secure but hard to use, or we can make it easy to use but too weak. So typically a compromise is selected somewhere in the middle, usually towards the security end of the scale, of course. So this ends up being "secure or usable".
A great example that everyone can understand is password policies and their required complexity.
Suppose we start at one end of the spectrum and have no policy. Most people will then use something very easy to remember (e.g. 1234 or blank). Unfortunately, that has almost no security strength; anyone could crack it with near-trivial effort. But it sure is easy to use! Going to the other extreme, we could require the password to be a 20- or 30-character random string. That's extremely strong, but most people could never remember it or type it correctly, especially on virtual keyboards where it's usually a pain to get to all the special characters. Good luck getting anyone to use your service like that. So we end up compromising in the middle: the password can be 8-12 characters, with two upper-case, two lower-case, two numbers, two specials, etc. Now people can make up something memorable, but it's still strong enough to resist a quick break-in. That's not as strong as it could be and not as usable as it could be. Those are two competing factors in this case, and I'm not sure we'll ever be satisfied with the balance.
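To put rough numbers on that compromise, here is a quick back-of-the-envelope comparison of the raw search spaces (assuming a pure brute-force attacker; dictionary attacks make the weak end even worse than these figures suggest):

```python
import math

# Size of the search space an attacker must exhaust for a few
# representative password policies.
def search_space(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

for label, space in [
    ("4-digit PIN",            search_space(10, 4)),
    ("8 lower-case letters",   search_space(26, 8)),
    ("12 chars, mixed policy", search_space(26 + 26 + 10 + 32, 12)),
    ("20 random printable",    search_space(94, 20)),
]:
    print(f"{label:24s} {space:.2e}  (~{math.log2(space):.0f} bits)")
```

The jump from the middle-ground policy to a 20-character random string is dozens of bits, which is exactly the strength a password database lets you buy without the memorization cost.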
Remember the security triangle with the three sides of confidentiality, integrity and availability?
Part of availability is authorized users being able to access resources when they need to.
If you choose a password complexity that creates a severe enough usability problem, then users will have difficulty on a daily basis getting into the systems they need to. Is this really any different than some hacker making the system unavailable via denial-of-service attacks? Not from the user's perspective it isn't. They can't get into the <expletive> system again. And then a slow burn of anger against IT and management begins. Let's not go there.
But every so often, we find a use-case where we can have our cake and eat it too. The Secure Shell (SSH) protocol supports a variety of authentication options. It supports and defaults to password authentication where you end up fighting all the stuff I just mentioned. But it also supports public-key authentication. We generate a public/private key pair, install the public key on a remote system that we have access to and then we can authenticate without using our remote account password. Awesome. But wait... you have to give your local ssh commands access to your private key and that is protected by a pass-phrase. A pass-phrase can be anything and, so far, I have never encountered any means of enforcing a policy on that. But suppose we follow good password/pass-phrase practice and give it a strong one. Aren't we back to square one now?
Not really. There are two mitigating factors that help in this case.
First, as I mentioned, there is no complexity policy associated with the private key pass-phrase. Why not? Because by it's very design and definition, the private key is private. It's for your eyes only. You don't get one from your system administrator or download it from the "cloud". You create it on a local system and protect it from access by others like any other file on the system. Yes, your local admin could lay hands on the file, but you have to trust him/her anyway. And the pass-phrase should prevent them from actually using it. The private key never actually leaves the local system. It is never transmitted over the network in the clear or otherwise. It gets used locally to encrypt portions of the SSH session negotiation so the other end can verify things using your public key, but the key itself stays put. So unless someone can hack your local system, they can't even try to use the file. Contrast this with passwords which are a direct input of the remote authentication process, thus allowing a hacker to try different values in an attempt to get in. They don't need your "help" with passwords.
So although you're on your own with regards to the complexity of the pass-phrase, the risk of anyone being able to make use of the private key is significantly smaller than passwords. And it's called a pass-phrase rather than a password because it can be literally any string. For example, you could type a short sentence that involves some upper-case letters, numbers and special characters that creates length, but is very easy to remember. For example, "My lucky number is 314159!". Sure it uses some English dictionary words, but look at the length. 26 characters long without even trying hard and trivial to remember and type easily. Try that with a password when spaces are not allowed and you start trying to come up with clever mnemonics and such.
The second factor is a very handy program associated with SSH called the "key agent".
Once invoked, the key agent process runs in the background as your user. It's not a daemon or system-level service. This helps prevent other users from abusing your agent (insider threat). So what does it do? After logging into your local system and invoking an SSH command (ssh, sftp, scp) for the first time and giving your pass-phrase to unlock the private key, the agent caches the fact that you authorized private key access. Every successive time you need it for another ssh command, the agent is contacted first and if you have already unlocked the private key, it tells the command that you did so and it can proceed without you re-entering the pass-phrase every time. Nice!
In my work environment, I typically stay logged into my workstation 24/7 and just lock the screen-saver. I only log in to the desktop from scratch when there's been a power outage or I have to update the kernel, etc. Under good conditions, I may be logged in to a single session for months at a time. And the first time in, I gave my private key pass-phrase and started the key agent. So every single ssh command after that just happens smoothly without interruptions. This is a tremendous efficiency boost to my workflow because I use ssh command several times a day, every single day. So I now have the security of SSH with public-key authentication *AND* the convenience of not typing the pass-phrase 10 times a day. WIN. :)
It gets better. Suppose you have to do a multi-hop SSH session through an intermediate host system. For example, your local system is A and you want to get to system C. But there is no direct path. You have to go through system B. Without a key-agent, you would ssh from A to B (and give the phrase). Then ssh from B to C (and give the phrase again). By learning two things, we can make this just as simple and smooth as our simple, single-system scenario. First, the ssh command has an option -t to force a pseudo-tty allocation. Normally, ssh expects to connect to an interactive login shell on the other side which has a tty. By using -t, we can just skip on through the middle system(s) without stopping there first and running the second ssh command manually.
For further convenience, there is a great package called keychain that makes using the key agent very easy. The first time you run it, it will create some tiny (~2 line) shell script that are designed to be sourced from your .bashrc when you start a new shell. If it discovers that ssh-agent is not yet running, it starts it and prompts for your private key pass-phrase to cache the authorization. If the agent is running and you've already authorized, then it just sets a couple of environment variables so the ssh programs can find the proper agent. Then you don't have to do anything no matter how many different shells you open within your one desktop session. After the first pass-phrase entry, every other ssh command just works.
This post maps to CompTIA SY0-301 exam objectives 1.4, 2.8 and 5.3.
A great example that everyone can understand is password policies and their required complexity.
Suppose we start at one end of the spectrum and have no policy. Most people will then use something very easy to remember (e.g. 1234 or blank). Unfortunately, that has almost no security strength. Anyone could crack that with near-trivial effort. But it sure is easy to use! Going to the other extreme, we could require the password to be a 20- or 30-character random string. That's extremely strong, but most people could never remember it or type it correctly, especially on virtual keyboards where it's usually a pain to get to all the special characters. Good luck getting anyone to use your service like that. So we end up compromising in the middle. The password can be 8-12 characters, with two upper-case, two lower-case, two numbers, two specials, etc. Now people can make up something memorable, but it's still strong enough to resist a quick break-in. That's not as strong as it could be and not as usable as it could be. Those are two competing factors in this case. I'm not sure we'll ever be satisfied with it.
Remember the security triangle with the three sides of confidentiality, integrity and availability?
Part of availability is authorized users being able to access resources when they need to.
If you choose a password complexity that creates a severe enough usability problem, then users will have difficulty on a daily basis getting into the systems they need to. Is this really any different than some hacker making the system unavailable via denial-of-service attacks? Not from the user's perspective it isn't. They can't get into the <expletive> system again. And then a slow burn of anger against IT and management begins. Let's not go there.
But every so often, we find a use-case where we can have our cake and eat it too. The Secure Shell (SSH) protocol supports a variety of authentication options. It supports and defaults to password authentication where you end up fighting all the stuff I just mentioned. But it also supports public-key authentication. We generate a public/private key pair, install the public key on a remote system that we have access to and then we can authenticate without using our remote account password. Awesome. But wait... you have to give your local ssh commands access to your private key and that is protected by a pass-phrase. A pass-phrase can be anything and, so far, I have never encountered any means of enforcing a policy on that. But suppose we follow good password/pass-phrase practice and give it a strong one. Aren't we back to square one now?
Not really. There are two mitigating factors that help in this case.
First, as I mentioned, there is no complexity policy associated with the private key pass-phrase. Why not? Because by its very design and definition, the private key is private. It's for your eyes only. You don't get one from your system administrator or download it from the "cloud". You create it on a local system and protect it from access by others like any other file on the system. Yes, your local admin could lay hands on the file, but you have to trust him/her anyway, and the pass-phrase should prevent them from actually using it. The private key never leaves the local system. It is never transmitted over the network, in the clear or otherwise. It gets used locally to sign portions of the SSH session negotiation so the other end can verify the signature with your public key, but the key itself stays put. So unless someone can hack your local system, they can't even try to use the file. Contrast this with passwords, which are a direct input to the remote authentication process, allowing a hacker to try different values in an attempt to get in. They don't need your "help" with passwords.
So although you're on your own with regard to the complexity of the pass-phrase, the risk of anyone being able to make use of the private key is significantly smaller than with passwords. And it's called a pass-phrase rather than a password because it can be literally any string. For example, you could type a short sentence involving some upper-case letters, numbers and special characters that creates length but is very easy to remember, like "My lucky number is 314159!". Sure, it uses some English dictionary words, but look at the length: 26 characters without even trying hard, yet trivial to remember and type. Try that with a password when spaces are not allowed and you start having to come up with clever mnemonics and such.
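A quick way to convince yourself of the length claim:

```shell
# Count the characters in the example pass-phrase.
# printf avoids the trailing newline that echo would add to the count.
printf '%s' 'My lucky number is 314159!' | wc -c    # -> 26
```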
The second factor is a very handy program associated with SSH called the "key agent".
Once invoked, the key agent process runs in the background as your user. It's not a daemon or system-level service, which helps prevent other users from abusing your agent (insider threat). So what does it do? After you log into your local system, invoke an SSH command (ssh, sftp, scp) for the first time and give your pass-phrase, the agent holds the unlocked private key in memory. Every subsequent time an ssh command needs the key, it contacts the agent first, which performs the key operation on the command's behalf, so everything proceeds without you re-entering the pass-phrase every time. Nice!
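You can watch the moving parts from the command line. This sketch starts a throwaway agent and loads a demo key generated with an empty pass-phrase so the example is self-contained; a real key would prompt for its pass-phrase at the ssh-add step, and the file name here is just an example:

```shell
# Start a throwaway agent; ssh-agent prints shell code that exports
# SSH_AUTH_SOCK and SSH_AGENT_PID, which eval makes live in this shell.
eval "$(ssh-agent -s)" > /dev/null

# Generate a demo key pair with an empty pass-phrase (illustration only).
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/demo_key

ssh-add /tmp/demo_key   # hand the unlocked private key to the agent
ssh-add -l              # list the key fingerprints the agent now holds
```

From this point on, any ssh/scp/sftp run from this shell asks the agent instead of asking you.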
In my work environment, I typically stay logged into my workstation 24/7 and just lock the screen-saver. I only log in to the desktop from scratch when there's been a power outage or I have to update the kernel, etc. Under good conditions, I may be logged in to a single session for months at a time. And the first time in, I gave my private key pass-phrase and started the key agent. So every single ssh command after that just happens smoothly without interruptions. This is a tremendous efficiency boost to my workflow because I use ssh commands several times a day, every single day. So I now have the security of SSH with public-key authentication *AND* the convenience of not typing the pass-phrase 10 times a day. WIN. :)
It gets better. Suppose you have to do a multi-hop SSH session through an intermediate host system. For example, your local system is A and you want to get to system C. But there is no direct path. You have to go through system B. Without a key-agent, you would ssh from A to B (and give the phrase). Then ssh from B to C (and give the phrase again). By learning two things, we can make this just as simple and smooth as our simple, single-system scenario. First, the ssh command has an option -t to force a pseudo-tty allocation. Normally, ssh expects to connect to an interactive login shell on the other side which has a tty. By using -t, we can just skip on through the middle system(s) without stopping there first and running the second ssh command manually.
ssh -t user@systemB ssh user@systemC
That takes care of the annoying "stop" at system B, but you'll still have to authenticate twice. Why? Once for system B and then again for the second session to system C. However, you can enable agent forwarding on the client side with the -A option (or the ForwardAgent keyword in ~/.ssh/config), provided the sshd on system B permits it (the AllowAgentForwarding keyword in its /etc/ssh/sshd_config, which is enabled by default). This lets the ssh client running on system B reach back to the agent on your local system for the second authentication. Once you do that, the command above just drops you out directly on system C with no prompting.
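If you make this hop regularly, the forwarding can live in your client config instead of on the command line. A sketch of a per-host entry (the host name is a placeholder); scoping it to the one hop host is safer than a blanket `ForwardAgent yes`, since any root user on a forwarded-to host can borrow your agent while you're connected:

```shell
# ~/.ssh/config on your local system (system A)
Host systemB
    ForwardAgent yes
```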
For further convenience, there is a great package called keychain that makes using the key agent very easy. The first time you run it, it creates some tiny (~2 line) shell scripts that are designed to be sourced from your .bashrc when you start a new shell. If it discovers that ssh-agent is not yet running, it starts it and prompts for your private key pass-phrase to cache the authorization. If the agent is running and you've already authorized, then it just sets a couple of environment variables so the ssh programs can find the proper agent. Then you don't have to do anything, no matter how many different shells you open within your one desktop session. After the first pass-phrase entry, every other ssh command just works.
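In practice that boils down to something like this .bashrc fragment (the key file name is an example, and the --eval/--quiet flags are keychain options as I recall them; check your version's man page):

```shell
# ~/.bashrc: keychain emits shell code exporting SSH_AUTH_SOCK and
# SSH_AGENT_PID for an existing agent (starting one if needed);
# eval-ing its output wires this new shell up to that agent.
eval "$(keychain --eval --quiet id_rsa)"
```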
This post maps to CompTIA SY0-301 exam objectives 1.4, 2.8 and 5.3.
Saturday, December 7, 2013
Elliptic Curve Cryptography
For the last eight years or so, I keep seeing the occasional article or reference to elliptic curve cryptography (ECC). Having only glossed over article headlines and summaries before, I understood it to be some new algorithm for use with public-key cryptography, i.e. an alternative to the RSA algorithm. My only experience with public-key cryptography where I worked directly with the keys and was cognizant of the algorithm being used is with Secure Shell (SSH). Where I work, we have "smart cards" that hold certificates for authentication, message signing and encryption. I know the signing and encryption keys are part of a public-key infrastructure, but I couldn't tell you what algorithm is used or anything else about the keys. It's all encapsulated on the card and the software that reads it.
So what's the difference between RSA and elliptic curves? Is it somehow better, stronger, faster, etc?
The whole idea of public-key cryptography is based on some type of mathematical problem that is easy to solve in one direction, but practically infeasible to reverse engineer and solve in the opposite direction. We're talking about problems whose best known solutions would take inordinate amounts of time even if you had inconceivable amounts of computing power at your disposal. In other words, not practical for cracking open individual emails and files on a daily basis.
RSA does this by using a pair of large prime numbers. If you know the two primes, it is really simple to crank them through the math and sign/verify or encrypt/decrypt data. But the public key only contains their product; good luck trying to reverse engineer the primes (i.e. the private key) from that. Prime factorization of large numbers has no known polynomial-time solution. However, with advances in computing horsepower, researchers have factored a 768-bit RSA modulus using a massive, distributed effort. So we begin the escalation of attack/counter-measure and keep increasing the bit-size of all our keys to stay ahead of the hardware. But that causes some inconvenience, like distributing new, larger keys and the extra time required to run the math with them.
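To make the "easy forward, hard backward" idea concrete, here is a toy sketch using the classic textbook numbers (p=61, q=53) — laughably small and for illustration only, since real keys use primes hundreds of digits long:

```shell
# Toy RSA with tiny textbook parameters -- illustration only.
p=61; q=53; n=$((p * q))   # n = 3233 is the public modulus (p and q stay secret)
e=17; d=2753               # e: public exponent; d: the matching private exponent

# Modular exponentiation by repeated squaring (base^exp mod m)
modexp() {
  local base=$1 exp=$2 mod=$3 result=1
  base=$((base % mod))
  while [ "$exp" -gt 0 ]; do
    [ $((exp % 2)) -eq 1 ] && result=$((result * base % mod))
    exp=$((exp / 2))
    base=$((base * base % mod))
  done
  echo "$result"
}

m=65                            # the "message"
c=$(modexp "$m" "$e" "$n")      # encrypt with the public key -> 2790
echo "cipher: $c"
echo "plain:  $(modexp "$c" "$d" "$n")"   # decrypt with the private key -> 65
```

With numbers this small, anyone can factor 3233 back into 61 x 53 and recompute d; with 1024-bit-plus primes, that factoring step is the wall attackers run into.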
Elliptic curve cryptography is built on the same concept, but uses a different type of math problem. The math is hairy and I won't pretend to understand all the details, but the gist is to take the plane curve defined by an equation of the form y^2 = x^3 + ax + b and define a group operation that exploits an interesting property of such a curve: a non-vertical line through two points of the curve generally intersects it at exactly one more point (yes, there are some boundary cases). "Adding" points this way over and over is cheap, but reversing it — recovering how many times a base point was added to itself, given only the result — is the elliptic curve discrete logarithm problem, and that count and resulting point ultimately equate to your private and public keys. And like RSA, it is infeasible to reverse engineer the private key if you only have the public key and some signed/encrypted data.
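For the curious, here is what the line-through-the-curve picture becomes as actual formulas — the standard affine addition rule for two distinct points $P=(x_1,y_1)$ and $Q=(x_2,y_2)$ with $x_1 \neq x_2$ (point doubling and vertical lines are handled by separate cases):

```latex
\lambda = \frac{y_2 - y_1}{x_2 - x_1}, \qquad
x_3 = \lambda^2 - x_1 - x_2, \qquad
y_3 = \lambda(x_1 - x_3) - y_1
```

Here $(x_3, y_3)$ is defined to be $P + Q$. Each addition is a handful of field operations, which is why the forward direction is fast even when the reverse is intractable.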
So is this better than RSA? It depends on what your criteria are. So far the research shows that an ECC key provides the same security as an RSA key of significantly larger size. For example, a 224-bit ECC key is considered comparable to a 2048-bit RSA key. So ECC keys take up less space and less computation time.
Should we all jump on the ECC bandwagon then? There are some caveats. There are obviously a lot of different formulas for elliptic curves and the group operations. There are some known sets of parameters for these that are weak in cryptographic terms. So you have to choose wisely. NIST has published a set of recommended parameter sets for use with ECC. Of course, recent revelations about the NSA influencing the choice of certain aspects of these parameters and algorithms to make surveillance easier might give one pause to consider whether these recommendations are really that good.
Another possible issue is patents. Some companies and organizations have patented certain techniques for implementing ECC systems. This might keep someone from implementing or using ECC based on the fear of inadvertently using one of these and getting sued for it.
Then you also have what I think of as the inertia of an entrenched system to overcome. RSA has been around much longer and is in very wide use. To suddenly switch to ECC requires updating software and keys and possibly retraining some people. With no compelling need to do this, many will not bother. This is the same reason IPv6 is not in more use than it is. IPv4 works well enough for 99% of us. Same goes for RSA.
So can I play with ECC keys? It will have to be with SSH. Looks like OpenSSH added support for it in version 5.7 and I have version 6.2 available in Fedora 19. The ssh-keygen man page says they added a new key type of "ECDSA".
ssh-keygen -b 256 -t ecdsa
That generated a new key pair for me in $HOME/.ssh/ called id_ecdsa and id_ecdsa.pub.
-rw-------. 1 mjones mjones 736 Jul 12 06:01 id_dsa
-rw-------. 1 mjones mjones 605 Jul 12 06:01 id_dsa.pub
-rw-------. 1 mjones mjones 314 Dec 6 16:55 id_ecdsa
-rw-------. 1 mjones mjones 190 Dec 6 16:55 id_ecdsa.pub
They are certainly much smaller than my existing 1024-bit DSA keys. I've been using those for a long time, circa 1998. Can't even remember why I used DSA instead of RSA now. :)
Installing the public key in the authorized_keys2 file on another system, we can test it like this:
ssh -v -i ~/.ssh/id_ecdsa user@host
and verify from the debug output that it did indeed use the new ECC key instead of the default DSA key:
debug1: identity file /home/mjones/.ssh/id_ecdsa type 3
Unfortunately, many of the systems I work with do not have a sufficient version of SSH to support ECDSA keys. So I will maintain both key types for a while and try the ECDSA where possible.
This post maps to CompTIA SY0-301 exam objectives 1.4, 6.1 and 6.2.
Friday, November 29, 2013
Performance Impact of SSH on Large File Transfers
I think by now, we all know that Secure Shell (SSH) should be preferred to TELNET and FTP for a lot of reasons, but especially for the security of encrypting the session and not sending authentication in the clear. This is great for traffic over the Internet or between any two network domains where you don't know who or what may be lurking out there between point A and point B.
But within your own LAN where you can personally vouch for the integrity of every node, the overhead of encryption is stealing a little bit of your session bandwidth. This adds extra time to large file transfers (e.g. moving DVD ISO files around). Suppose your local facility rules dictate that you may not use plain old FTP and must use SFTP. In this particular use-case, we don't care about security, but are forced to use the same tools regardless. So how can we maximize their efficiency? Exactly how much is this overhead and should we be concerned? If we are concerned, can we do anything about it? Let's find out with an experiment.
When you establish an ssh/sftp/scp session between two hosts, a negotiation takes place to establish common parameters including the algorithms used for bulk data encryption and hash-based message authentication codes (HMAC). Which ones gets selected comes down to a combination of which versions of SSH are on both sides of the connection (based on which algorithms they both support) and then the order of preference. So the final choices are the most preferred algorithms that both client and server support. However, you can forcibly change this selection using configuration options.
Let's use OpenSSH for experimenting since that's all I have convenient access to (other than PuTTY on Windows). I'm using a virtual Fedora 19 (running in VirtualBox on a Windows 7 host with a Core i7-860). F19 has this version of SSH:
FYI, the 'none' at the end is the compression algorithm which is not used by default.
If you ask for it with the -C option, you get:
Let's first measure the transfer time of a reasonably large file. I just picked a "large" (259 MB) file I had sitting around. I send it from Fedora to Ubuntu five times in a row and saw the same performance each time, about 12 seconds.
Quick side-bar... the same test using compression took 20 seconds. The file being transferred is already compressed with gzip, so more compression doesn't save much, if anything, but sure takes up more time. Definitely not helping.
So how do we change the selection of these algorithms? And what other choices are available?
The ssh man page shows that the -c option can be used to select the encryption algorithm and the -m option selects the HMAC algorithm. The available choices are documented in the ssh_config man page. You can use the command line options to change it for just the current session or you can use the config file options in $HOME/.ssh/config to permanently save your alternate choices.
The scp and sftp file transfer programs do not support the -m option to change this. We will see if they will still honor the MAC keyword in the config file or not.
So in good scientific method, let us only modify one variable at a time and see what happens.
We will start with the encryption algorithm. See the ssh_config page 'Ciphers' keyword for the choices. Be careful not to confuse it with the 'Cipher' (singular) keyword as that lists the choices for the deprecated SSH Protocol version 1.
Here are all the cipher choices that were compatible with both systems and the resulting transfer times in seconds.
Does HMAC matter? Let's find out. It turns out that you can modify the choice via the config file using the 'MACs' keyword and scp/sftp both honor it. We'll leave the encryption choice alone (back to the default aes128-ctr) for now.
In both algorithm choices, we see as expected that larger key/hash bit sizes for the same algorithm increase the computation time. As stated before, my use case does not require any security, so I would use 'arcfour'. If you follow the literature, however, arcfour (RC4) is considered to be somewhat weakened. The blowfish is almost as fast and considered strong. In fact, it was a finalist in the NIST AES selection process, but the Rijndael algorithm won the competition and was redubbed "AES".
This back-of-the-envelope experiment shows that in an environment where FTP is not allowed but security is not an issue you can gain considerable bandwidth back by selecting a different encryption algorithm for scp/sftp file transfers.
This post maps to CompTIA SY0-301 exam objectives 1.4 and 6.2.
But within your own LAN where you can personally vouch for the integrity of every node, the overhead of encryption is stealing a little bit of your session bandwidth. This adds extra time to large file transfers (e.g. moving DVD ISO files around). Suppose your local facility rules dictate that you may not use plain old FTP and must use SFTP. In this particular use-case, we don't care about security, but are forced to use the same tools regardless. So how can we maximize their efficiency? Exactly how much is this overhead and should we be concerned? If we are concerned, can we do anything about it? Let's find out with an experiment.
When you establish an ssh/sftp/scp session between two hosts, a negotiation takes place to establish common parameters including the algorithms used for bulk data encryption and hash-based message authentication codes (HMAC). Which ones gets selected comes down to a combination of which versions of SSH are on both sides of the connection (based on which algorithms they both support) and then the order of preference. So the final choices are the most preferred algorithms that both client and server support. However, you can forcibly change this selection using configuration options.
Let's use OpenSSH for experimenting since that's all I have convenient access to (other than PuTTY on Windows). I'm using a virtual Fedora 19 (running in VirtualBox on a Windows 7 host with a Core i7-860). F19 has this version of SSH:
OpenSSH_6.2p2, OpenSSL 1.0.1e-fips 11 Feb 2013I have another system on the LAN that is running native Ubuntu 13.04 with SSH version:
OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012When I establish a session using default settings, the negotiation selects these algorithms. You can see this by using the -v option to get some debug output. 'kex' is short for "key exchange".
debug1: kex: server->client aes128-ctr hmac-md5 noneSo by default, it prefers 128-bit AES for bulk encryption and MD5 for HMAC.
debug1: kex: client->server aes128-ctr hmac-md5 none
FYI, the 'none' at the end is the compression algorithm which is not used by default.
If you ask for it with the -C option, you get:
debug1: kex: server->client aes128-ctr hmac-md5 zlib@openssh.comCompression would reduce the amount of data being sent, but at the expense of more time on the sending and receiving ends to compress and uncompress it. This really only buys you something if you have a very slow (low bandwidth) connection. You generally don't want this on your local LAN transfers.
Let's first measure the transfer time of a reasonably large file. I just picked a "large" (259 MB) file I had sitting around. I send it from Fedora to Ubuntu five times in a row and saw the same performance each time, about 12 seconds.
Quick side-bar... the same test using compression took 20 seconds. The file being transferred is already compressed with gzip, so more compression doesn't save much, if anything, but sure takes up more time. Definitely not helping.
So how do we change the selection of these algorithms? And what other choices are available?
The ssh man page shows that the -c option can be used to select the encryption algorithm and the -m option selects the HMAC algorithm. The available choices are documented in the ssh_config man page. You can use the command line options to change it for just the current session or you can use the config file options in $HOME/.ssh/config to permanently save your alternate choices.
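For example, a per-host entry in $HOME/.ssh/config makes the choice stick for every session to that machine. The host alias 'ubuntu-box' below is made up for illustration; 'Ciphers' and 'MACs' are the real ssh_config keywords:

```
Host ubuntu-box
    # Preference lists; the first algorithm the server also supports wins.
    Ciphers arcfour,blowfish-cbc,aes128-ctr
    MACs hmac-md5,umac-64@openssh.com
```

The one-off command-line equivalent for a single session would be something like 'ssh -c arcfour -m hmac-md5 ubuntu-box'.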
The scp and sftp file transfer programs do not support the -m option to change this. We will see whether they still honor the 'MACs' keyword in the config file.
So in good scientific method, let us only modify one variable at a time and see what happens.
We will start with the encryption algorithm. See the 'Ciphers' keyword in the ssh_config man page for the choices. Be careful not to confuse it with the 'Cipher' (singular) keyword, which lists the choices for the deprecated SSH protocol version 1.
Here are all the cipher choices that were compatible with both systems and the resulting transfer times in seconds.
aes128-ctr    12.0
aes192-ctr    13.3
aes256-ctr    14.5
arcfour256     7.2
arcfour128     7.0
aes128-cbc     7.6
3des-cbc      19.5
blowfish-cbc   7.8
cast128-cbc   12.0
aes192-cbc     8.3
aes256-cbc     8.5
arcfour        6.9
Wow, a clear winner with 'arcfour', at roughly 43% less than the default transfer time. I'll take it. :)
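To put a number on the win, a quick sketch using the measurements above (times in seconds):

```python
# Measured scp times per cipher, in seconds (from the table above).
times = {
    "aes128-ctr": 12.0, "aes192-ctr": 13.3, "aes256-ctr": 14.5,
    "arcfour256": 7.2, "arcfour128": 7.0, "aes128-cbc": 7.6,
    "3des-cbc": 19.5, "blowfish-cbc": 7.8, "cast128-cbc": 12.0,
    "aes192-cbc": 8.3, "aes256-cbc": 8.5, "arcfour": 6.9,
}
default = times["aes128-ctr"]          # the negotiated default
best = min(times, key=times.get)       # fastest measured cipher
saving = (default - times[best]) / default
print(best, f"{saving:.1%}")           # fraction of the default time saved
```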
Does HMAC matter? Let's find out. It turns out that you can modify the choice via the config file using the 'MACs' keyword and scp/sftp both honor it. We'll leave the encryption choice alone (back to the default aes128-ctr) for now.
hmac-md5            12.0
hmac-sha1           12.5
umac-64@openssh.com 12.0
hmac-sha2-256       14.0
hmac-sha2-512       14.5
hmac-ripemd160      13.8
hmac-sha1-96        12.9
hmac-md5-96         12.0
The HMAC choice clearly has a much smaller effect on the transfer time. The default (hmac-md5) matches the best of the other times, so we can just leave it alone.
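A crude way to quantify "much smaller effect" is to compare the spread (worst time minus best time) of the two experiments, using the numbers above:

```python
# Measured times in seconds, copied from the two tables above.
cipher_times = [12.0, 13.3, 14.5, 7.2, 7.0, 7.6, 19.5, 7.8, 12.0, 8.3, 8.5, 6.9]
mac_times = [12.0, 12.5, 12.0, 14.0, 14.5, 13.8, 12.9, 12.0]
# Spread = slowest minus fastest; rounded to one decimal place.
print(round(max(cipher_times) - min(cipher_times), 1))  # cipher spread
print(round(max(mac_times) - min(mac_times), 1))        # MAC spread
```

The cipher choice swings the transfer time by 12.6 seconds, the MAC choice by only 2.5, so the cipher is where the tuning effort pays off.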
In both experiments, we see as expected that larger key/hash bit sizes for the same algorithm increase the computation time. As stated before, my use case does not require any security, so I would use 'arcfour'. If you follow the literature, however, arcfour (RC4) is considered to be somewhat weakened. Blowfish is almost as fast and considered strong. Its successor, Twofish, was a finalist in the NIST AES selection process, but the Rijndael algorithm won the competition and was redubbed "AES".
This back-of-the-envelope experiment shows that in an environment where FTP is not allowed but security is not a concern, you can gain considerable bandwidth back by selecting a different encryption algorithm for scp/sftp file transfers.
This post maps to CompTIA SY0-301 exam objectives 1.4 and 6.2.
Time to get started
The notion of using blog posts was inspired by this post that I found while searching for security-related webinars that would earn CEUs but had low or no cost. Turns out there really aren't many (any?). I figure it can't be too difficult to write a handful of posts. They just need to be relevant to the Security+ exam objectives. I'll be using the SY0-301 list.
But first a quick rant about the cost of all these certifications and their maintenance.
I am not an "IT guy". My actual job role has always been software developer/analyst/engineer/architect, but during my career (almost 23 years now), out of both necessity and personal interest, I have learned many of what we now collectively refer to as "IT skills". I've always been on small-ish teams and we've rarely had the luxury of someone dedicated to taking care of our IT needs. So I volunteered a lot of such effort over the years and learned all kinds of things. Computer security has always been one of my interest areas.
More recently, our team moved to a new facility that had significantly higher security standards than our previous home. We were short on staff at the time and in order for me to be permitted to keep helping out with the IT tasks, I would have to meet the same criteria as our formal IT guys, i.e. certifications. So I self-studied the CompTIA Security+ and passed the exam. And I have to maintain the certification in order to retain my administrative privileges.
Philosophically, I completely support the notion of certified individuals doing something to maintain their knowledge and skills and present some evidence of having done so. (Sometimes while driving, I think folks ought to have to retake their driver's license exam every so often...) In the case of all these IT and security certifications, however, I find a significant financial barrier. If your primary job role is one of these areas and your employer will pay for the time and expense of training and taking the exams, then that's great for you. But if you fall into my case and you're just doing it out of self-interest or "on the side" as it were, then a lot of these certifications and their maintenance are likely WAY out of your budget.
The CompTIA certifications seem to be some of the least expensive options.
There are usually some good self-study books available for less than $50 and the exam fees are $200 - $300. Not so bad.
But take a look at some of the other stuff, like the SANS, Cisco, EC Council, etc.
Sticker shock! The exam fees are $500+ and you really need to buy either their training material or take one of their training courses, which will run from many hundreds to a couple thousand dollars.
Wow. It would be easy to get cynical and say they're all just taking us for what they can. But I can also see that these certs are not desired by enough people for any kind of Wal-Mart style volume discounts to start happening.
Maybe that will change over time. We'll see. Until then, the unsupported enthusiasts and non-IT people like me will just need to look for the affordable options where we can.
Saturday, November 23, 2013
Why this blog?
I am starting this blog to earn CEU credits towards the renewal of my CompTIA Security+ CE certification. Turns out you can claim credits by authoring topic-relevant blog posts of sufficient length. You can claim up to 16 CEUs, one per post during your 3 year renewal cycle.