I just read an article that illustrates how basic planning and proper implementation of procedures could have saved us taxpayers $200,000.
A computer technician at the Alaska Department of Revenue, while doing routine maintenance work, accidentally deleted applicant information for an oil-funded account (one of Alaska residents’ biggest perks) and then mistakenly reformatted the backup drive as well.
There was still hope, until the department discovered that its third line of defense had failed: the backup tapes were unreadable.
“Nobody panicked, but we instantly went into planning for the worst-case scenario,” said Permanent Fund Dividend Division Director Amy Skow. The computer foul-up last July would end up costing the department more than $200,000.
Now, you may ask: “How could this have been avoided?”
The answers are simple: “separation of privileges” and “regular backup validation.”
The article mentions that the data on the drive belonged to “an account worth $38 billion.” For data that important and that valuable, why was there only one backup tape? And if there was only one backup tape, why wasn’t it validated?
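Validating a backup doesn’t have to be complicated. Here’s a minimal sketch in Python (the function names and directory layout are my own, purely illustrative) that verifies a backup by comparing a SHA-256 checksum of every source file against its copy:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_backup(source_dir: str, backup_dir: str) -> list:
    """Return relative paths of files missing from or corrupt in the backup."""
    source, backup = Path(source_dir), Path(backup_dir)
    failures = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        copy = backup / src.relative_to(source)
        if not copy.is_file() or sha256(src) != sha256(copy):
            failures.append(str(src.relative_to(source)))
    return failures
```

Run something like this after every backup job; an empty result means every file in the backup matches its source bit for bit.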
“Separation of privileges” is a security concept you often see demonstrated in movies when a government is about to launch a rocket or a nuke: either two people hold two separate keys, or one person has a key and another a secret code. It is valuable because it ensures that no single person can launch the nuke alone.
In this case the technician (most likely an ex-technician by now) should have had file system permissions to either the data drive or the backup drive, but not both.
I thought the U.S. government invented these concepts. Why doesn’t it follow them?
I finally decided that it was time to move on to a job in which all I do is information security. Yesterday I accepted an offer with another department on campus and will be starting the new role on April 16th.
Because MSU uses union titles that don’t describe the specific role, it is difficult to say for certain what my real title would be. Having closely read the job posting, compared it to other postings on job boards, and drawn on what I know of InfoSec careers, I would describe it as “Information Security Analyst.”
It will be refreshing to get away from certain aspects of my current job, specifically support. I’m done crawling under people’s desks, thank you.
The best part of this move is that I will be able to devote all of my attention and research to InfoSec.
The whole “job search” process was interesting this time around. I interviewed at MANY different places, for all sorts of security related positions. I turned down a number of offers and had some exciting and interesting opportunities.
The two positions I narrowed it down to are the one I accepted (obviously) and a permanent position as an “Information Assurance Analyst” doing contract work for the Department of Defense. I would have had secret and IT-1 security clearance. It sounded exciting, but I decided that MSU was the best fit.
MSU is a 13-minute commute as opposed to an hour for the DoD position, and I can pursue a college degree here at MSU.
In case you haven’t been following security and privacy related news, last week Texas Attorney General Greg Abbott ruled that exposing SSNs in public documents violates state and federal laws.
To me, this is common sense and good news for the common good of everyone in Texas. Why would you want anyone printing your social security number in a public document? It makes no sense and is outright dangerous.
The Texas House of Representatives last week passed emergency legislation that would absolve county clerks of civil or criminal liability for exposing SSNs in public documents “in the ordinary course of business.” […] The ruling would require that clerks check each document for SSNs and remove them before making the documents public. Daunted by the task and fearful of running afoul of the law, county clerks asked state legislators to come to their aid.
This sounds like a group of people so set in their ways and so fearful of change that they are unwilling (or too lazy?) to change operating procedure to comply with the law and serve the good of the general public.
I’m appalled that the Texas county clerks would ask legislators to exempt them from this law and I am even more disgusted with the fact that the legislators are considering it.
Apparently even the privacy concerns are bigger in Texas.
The United States Computer Emergency Readiness Team (US-CERT) has issued an alert warning of a worm that exploits a vulnerability in the Sun Solaris telnet daemon. The flaw could be exploited to gain unauthorized access to a host using the service. Sun Microsystems has made available a patch and a workaround for the flaw, as well as an inoculation script to disable the telnet daemon and repair changes the worm has made.
The Internet Storm Center published word of this far earlier than most other major outlets.
I would add to this that simply using telnet is a vulnerability, and the patch (which has been available for years) is called SSH.
Stop using telnet! It floors me how often I go to configure a hardware firewall to find that telnet is left open or is the only remote shell available. Stop it!
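If you want to audit your own network for this, a few lines of Python will do. This is a rough sketch (the host list and default port are assumptions) that flags any host still accepting connections on the cleartext telnet port:

```python
import socket

def telnet_open(host: str, port: int = 23, timeout: float = 2.0) -> bool:
    """True if the host completes a TCP handshake on the telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(hosts, port: int = 23):
    """Return the hosts still answering on the cleartext telnet port."""
    return [host for host in hosts if telnet_open(host, port)]
```

Anything this returns belongs on your short list for an SSH migration.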
1. Salami Attack
What’s it mean?
A salami attack is a series of minor data-security attacks that together result in a larger attack. For example, fraud in a bank where an employee steals a small amount of funds from each of several accounts can be considered a salami attack. (source: wikipedia)
Why is it so hilarious?
Think Superman III or Office Space. Now say “salami attack” aloud and try not to snicker. See, I told you it was funny.
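For the curious, the classic flavor of the attack (skimming the sub-cent rounding fractions off interest payments) is easy to sketch. A toy Python illustration, with made-up balances and no resemblance to any real banking code:

```python
from decimal import Decimal, ROUND_DOWN

def apply_interest(balances, rate):
    """Credit interest, truncating each account to whole cents and
    sweeping the shaved sub-cent fractions into one hidden pile.
    Each slice is microscopic; together they add up."""
    cent = Decimal("0.01")
    skimmed = Decimal("0")
    credited = []
    for balance in balances:
        interest = balance * rate
        kept = interest.quantize(cent, rounding=ROUND_DOWN)
        skimmed += interest - kept
        credited.append(balance + kept)
    return credited, skimmed
```

No single account holder ever notices a fraction of a cent missing, which is exactly the point.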
2. Cyberwoozle
What’s it mean?
This refers to the practice of siphoning data from users’ PCs as they surf the ‘net. (source: itsecurity.com)
Why is it so hilarious?
As best as I can remember, a woozle is a weasel-like creature that was friends with the heffalumps and an archenemy of Winnie the Pooh in the ’80s cartoon series. But this one would be upgraded with mechanized parts. Hence the ‘cyber’ prefix.
3. Smurf Attack
What’s it mean?
The Smurf attack works by spoofing the target address and sending a ping to the broadcast address for a remote network, which results in a large amount of ping replies being sent to the target. (source: sans.org)
Why is it so hilarious?
Call me a child of the ’80s, but this is one attack that I have a hard time taking seriously simply because of its name. It always conjures up images of Gargamel and Smurfette.
4. Sheep Dip
What’s it mean?
A computer that is isolated from a business’s core network, used to screen incoming digital devices. They will often contain multiple malware scanners and egress packet detection. (source: wikipedia)
Why is it so hilarious?
Just picture it in literal terms and try not to laugh. In my head I always see a sheep being lowered into a vat of… something… by a crane with a leather strap holding the sheep up. That’s funny stuff.
5. Oinkmaster
What’s it mean?
A script that will help you update and manage your Snort rules. (source: Oinkmaster site)
Why is it so hilarious?
For starters, it has the word oink in it. Call me juvenile, but that’s funny. Compound oink (the sound a pig makes) with a mastery of it, and it’s just downright hilarious.
6. Chaffing and Winnowing
What’s it mean?
Chaffing and winnowing are dual components of a privacy-enhancement scheme that does not require encryption. The technique consists of adding false packets to a message at the source (sender end of the circuit), and then removing the false packets at the destination (receiver end). The false packets obscure the intended message and render the transmission unintelligible to anyone except authorized recipients. (source: searchsecurity.com)
Why is it so hilarious?
Not a single term, but a strange situation in which two terms are tied to a single concept, and both of them are downright hilarious. Chaffing on its own means “to make fun of in a good-natured way; tease.” Good-natured teasing is humor-based and… I’m grasping at straws here… besides, it sounds funny.
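The funny name hides a genuinely clever idea from Ron Rivest. Here’s a toy Python sketch of it (one byte per packet and HMAC-SHA256 are my simplifications, not the original paper’s parameters): real bytes go out with valid MACs, chaff goes out with random ones, and the receiver winnows by checking each MAC.

```python
import hashlib
import hmac
import os

def _mac(key: bytes, serial: int, data: bytes) -> bytes:
    """MAC over the packet's serial number and payload."""
    return hmac.new(key, serial.to_bytes(4, "big") + data, hashlib.sha256).digest()

def chaff(key: bytes, message: bytes) -> list:
    """Emit each real byte with a valid MAC, interleaved with chaff bytes
    carrying random (invalid) MACs. Note: nothing here is encrypted."""
    stream = []
    for serial, byte in enumerate(message):
        stream.append((serial, bytes([byte]), _mac(key, serial, bytes([byte]))))
        stream.append((serial, os.urandom(1), os.urandom(32)))  # chaff
    return stream

def winnow(key: bytes, stream) -> bytes:
    """Keep only the packets whose MAC verifies; the chaff falls away."""
    kept = {}
    for serial, data, tag in stream:
        if hmac.compare_digest(tag, _mac(key, serial, data)):
            kept[serial] = data
    return b"".join(kept[s] for s in sorted(kept))
```

An eavesdropper without the shared key can’t tell wheat from chaff, yet no packet in the stream is encrypted.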
7. Port Swigger/Burp Suite
What’s it mean?
Burp Suite is an integrated platform for attacking web applications. (source: portswigger.net)
Why is it so hilarious?
Now this is a project that doesn’t take itself too seriously. It was previously known as Port Swigger, which, I guess, means to rapidly drink a port (or data from a port), and I’m sure Burp needs no explanation.
8. Diffie-Hellman
What’s it mean?
A key agreement algorithm published in 1976 by Whitfield Diffie and Martin Hellman. Diffie-Hellman does key establishment, not encryption. However, the key that it produces may be used for encryption, for further key management operations, or for any other cryptography. (source: sans.org)
Why is it so hilarious?
I’d like to immediately apologize to Whitfield and Martin for making light of their last names, but when you combine them it just sounds silly. This is another one that has to be said aloud to be appreciated. Hearing it conjures images of rotten mayonnaise. Maybe I’m just warped.
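Silly name, elegant math. A toy Python sketch of the exchange (textbook-sized parameters p = 23, g = 5, strictly for illustration; real deployments use vetted 2048-bit-plus groups):

```python
import secrets

# Toy group purely for illustration; never use parameters this small.
P, G = 23, 5

def keypair():
    """Pick a random private exponent a and publish g^a mod p."""
    private = secrets.randbelow(P - 2) + 1
    return private, pow(G, private, P)

def shared_secret(my_private: int, their_public: int) -> int:
    """Both parties arrive at the same value: g^(ab) mod p."""
    return pow(their_public, my_private, P)
```

Both sides end up with g^(ab) mod p without the private exponents ever crossing the wire, which is the whole trick.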
9. Fuzzing
What’s it mean?
The use of special regression testing tools to generate out-of-spec input for an application in order to find security vulnerabilities. Also see “regression testing”. (source: sans.org)
Why is it so hilarious?
Think puppies and kittens with their tickly softness.
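Cuddly name aside, the core idea fits in a page. A minimal mutation fuzzer in Python (the mutation operators and the toy setup are my own choices): mangle a seed input, feed it to the target, record anything that blows up.

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Produce out-of-spec input: flip, insert, or delete random bytes."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, max(1, len(data) // 4))):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0) -> list:
    """Feed mutated inputs to `target`; collect every input that raises."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes
```

Real fuzzers add coverage feedback and smarter mutations, but this is the skeleton they all share.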
10. HoneyMonkey
What’s it mean?
Automated system simulating a user browsing websites. The system is typically configured to detect web sites which exploit vulnerabilities in the browser. Also known as Honey Client. (source: sans.org)
Why is it so hilarious?
Monkeys are, by default, funny. They do human things, make funny faces and fling poo. Cover them in honey and you have a sure-fire recipe for hilarity. Try it, you won’t be disappointed.
Information security is a game of tradeoffs. The most common way these tradeoffs are represented is the CIA triad. It is often visually represented as a triangle with the three tenets (concepts, principles, whatever) written across its sides. Then, as the security of your project is evaluated, a dot is drawn on each side of the triangle relative to the (evaluator’s perceived) level of each tenet.
In most cases the goal is to find an absolute balance, so that the evaluation of your proposed security solution has dots in the exact center of each of the three sides. The idea is that as security (confidentiality and integrity) increases, availability (usability) goes down. In cases that require high security, this is absolutely acceptable.
The triad is broad and flexible enough that it can generally be used to gauge any product, project, problem or system. Because of this, the three tenets can often mean different things in different situations. I will explain them in the most general terms that will apply to most situations, but be aware that this is in no way exhaustive.
Confidentiality: Confidentiality is all about keeping things that are supposed to be secret… well… secret. Safeguards that would fall into this category include cryptography and anti-spyware. Attacks against confidentiality include sniffing, key logging and cryptanalysis.
Integrity: In the world of information security, this is most generally likened to authentication; non-repudiation is essentially what it is all about. It can mean proving either that you are who you say you are or that a file has been unaltered. Other examples of how integrity comes into play include code signing, file checksums, logins, biometrics, and using PGP to digitally sign emails.
Availability: When most IT administrators think of the word ‘availability’, the first term that pops into their heads is ‘up-time’. To be available is to be accessible by users. While that is still true in this case, it is also only a very small part of the availability definition. This is the one that often gets pushed lower as integrity and confidentiality get pushed higher. Availability can also be thought of as usability: how easy or hard is it for the end user to utilize your system?
Situations where you could benefit from using the CIA triad range from a user requesting to use a personal laptop at work to individual pieces of a new password policy.
A good example that was recently presented to me was the ballad of Bob (obviously not his real name). Bob works for Company A and Company B (obviously not the real companies, either) and splits his time between both with his laptop. Bob physically works from both offices and needs to access resources on the Active Directory domain of each company. Unfortunately, no trust relationship exists between these two domains.
The IT staff came to me with this dilemma and three possible solutions; they wanted my input on which was the most ‘secure’.
Solution 1. Set Bob up with a network account under each Active Directory domain and have him log in to whichever one he needs access to at the time. Although he may be physically working from Company A, he will likely still need to access resources from Company B, and vice versa. While this keeps both companies in line with their security policies regarding expiring passwords and maximum password age, it introduces problems with file synchronization and with having to log in and out multiple times per day. Bob would likely perceive this to be a pain in the butt.
Solution 2. Create a local profile on Bob’s laptop, have him manually map to the resources he needs access to, and set his passwords to never expire on both domains. Bob would likely really like this solution, as it involves less work and inconvenience for him. As you can see from the associated figure, it would bring accessibility up on the triad while increasing the risk due to no password expiration.
Solution 3. Because Company A and Company B are both bound by internal and industry regulations regarding maximum password age, a third (hybrid) solution was developed. This involves Bob working from a local profile (as in solution 2) but logging into each domain once per password cycle to change his passwords before expiration. As you can see from the figure, this provides an acceptable level of risk and accessibility.
From the above example you can see that, even if you aren’t an information security professional, knowing and applying the CIA triad is a good way to evaluate technology choices, and it serves as a visual way to back up your decisions to management. Without much explanation, management grasps why you would want the balance in the picture and will be more willing to follow your advice.
Well, not really. This time it was only my debit card.
I received word last Saturday evening from my bank (National City) that my debit card had been used for a ‘card-in-hand’ transaction at a gas station in Canada (someone made a physical card containing my debit card information on the magnetic strip). The woman from the bank asked if I had been in Canada earlier that day. After I told her that I was at home all day, she informed me that my card number had likely fallen prey to the recent rash of debit card information thefts.
From what I gathered from previous reading about this, a number of merchants illegally retained debit card PIN information, which was subsequently stolen and used all over Canada and Europe.
The woman from the fraud department at National City informed me that the transaction had occurred about an hour earlier, that she saw no additional fraudulent transactions (I verified with my online account view), that my card had been frozen to prevent further charges, and that the bank maintains a zero-liability policy; in other words, I was not responsible for the transaction in any way. She asked that I stop by a branch and fill out an ‘Affidavit of Fraud’, and said a new debit card would be mailed first thing Monday.
All in all, I was very impressed with the quickness of detection and the fact that they took the initiative and corrected things. They turned what could have been a disaster into only a minor inconvenience.
I am, however, unimpressed that the government still has not passed any law that would hold the vendor(s) accountable for allowing the information to be compromised. I am certain that once a law of this sort passes, the frequency of these sorts of incidents will drop like a stone.
The number of articles about this whole debacle indicates that hundreds of thousands of others have also fallen victim. A couple from SecurityFocus are as follows:
I spent last week in a training class for the Certified Ethical Hacker (CEH) exam. The first day of class they issued me an EC-Council backpack that contained two textbooks (1,800 pages worth), one lab manual, and one t-shirt. It’s heavy as hell, and I can see why they provide the (fairly nice) bag to lug it all around in.
I went into the class expecting to only learn the corporate developed ‘best practices’ for penetration testing and hacking. I walked out of the class believing that anyone could benefit from its teachings. Even a seasoned pentester is bound to learn something.
It teaches a best-practices methodology for approaching a penetration test. Just about every category of tool that would be useful in a pentest is covered; far too many, in some cases. Still, I think it is great to get exposure to more tools than one would generally expose oneself to.
My pentest toolkit is now stocked with only the best tools and separated into the logical categories that the CEH teaches. It just makes sense.
In a near future post I will be explaining my toolkit, what it contains, how it is organized and how to make your own.
By contrast, he points to Microsoft as a prime example of how to respond to threats, providing well-documented communications and prescriptive “how-to” guidance with alerts that are delivered through email, RSS and deployment tools.
This whole paragraph is absolutely laughable. Let’s flash back for a second to Microsoft security bulletin 912840 and my rant regarding it. Now let’s re-read that happy little Microsoft FUD. Something doesn’t add up, does it?
If that isn’t enough to convince you, let’s look at yet another reason why no software vendor should ever adopt Microsoft’s security practices. Two words: Patch Tuesday. Holy god, is that a bad model. No matter how bad a vulnerability is, they will sit on the patch (leaving everyone exposed) until the next Patch Tuesday, just because it’s more convenient for admins.
I, as an admin, would much rather patch frequently than sit on my hands while blatantly exposed to a threat.
Once they work these things out, then (maybe) they can blast other software vendors. Until that time though, they should sit back, shut up and stop making themselves look foolish.