A Cisco-commissioned study found that employees at businesses in 10 countries around the world are often unaware of their companies’ security policies, or ignore the policies because they hinder productivity. When surveyed about whether their companies had security policies, there was a 20 to 30 percent gap between responses from IT professionals and other employees. When asked why security policies are violated, IT professionals pointed to ignorance, while other employees said it was because the policies made it more difficult for them to do their jobs. The study surveyed more than 2,000 employees and IT professionals at companies in the US, the UK, France, Germany, Italy, Japan, China, India, Australia and Brazil.
Unfortunately, I have seen the same thing in every organization I have ever worked in. Another unfortunate fact is that no real solution exists to this problem. Most organizations will do a security awareness program that consists of InfoSec trying to convey the importance of this information without putting everyone to sleep, plus the standard “signing of the security policy every year”.
Neither of these works, but they are better than nothing.
Does anyone else have any unique or effective methods they have used?
FoxNews (not one of my normal news sites… I promise) just posted a story entitled “World Bank Under Cyber Siege in ‘Unprecedented Crisis’“.
The details are fairly chilling and include some amazingly upbeat quotes like…
“While it remains unclear how much data has been pilfered from the bank, it’s a lot. According to internal memos, “a minimum of 18 servers have been compromised,” including some of the bank’s most sensitive systems — ranging from the bank’s security and password server to a Human Resources server “that contains scanned images of staff documents.””
“The World Bank Group’s computer network — one of the largest repositories of sensitive data about the economies of every nation — has been raided repeatedly by outsiders for more than a year, FOX News has learned.”
This is certainly disturbing news for a number of reasons, most importantly the fact that the world’s financial system is in serious peril. And then there is this…
In a frantic midnight e-mail to colleagues, the bank’s senior technology manager referred to the situation as an “unprecedented crisis.” In fact, it may be the worst security breach ever at a global financial institution. And it has left bank officials scrambling to try to understand the nature of the year-long cyber-assault, while also trying to keep the news from leaking to the public.
The last sentence of that quote is what I find very disturbing. GLBA, SOX and a slew of other laws all have strict disclosure guidelines. Trying to hide something of this magnitude is not only futile but also illegal.
Arpwatch is an amazingly useful tool that promiscuously listens on a specified interface for ARP broadcasts. It takes what it learns and saves the output in a database file for later reference, in the following format:
mac_address ip unix_date/time hostname
It will take any changes/additions and log them to /var/log/messages as well as optionally emailing them.
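Because arp.dat is just whitespace-separated text in the format shown above, it is easy to parse for your own reporting. Here is a minimal sketch (the field order is assumed from the format above, and `parse_arpdat_line` is a hypothetical helper, not part of arpwatch):

```python
from datetime import datetime, timezone

def parse_arpdat_line(line):
    """Split one arp.dat entry: mac, ip, unix timestamp, optional hostname."""
    fields = line.split()
    mac, ip, ts = fields[0], fields[1], int(fields[2])
    hostname = fields[3] if len(fields) > 3 else None
    return {
        "mac": mac,
        "ip": ip,
        "seen": datetime.fromtimestamp(ts, tz=timezone.utc),
        "hostname": hostname,
    }

# Fabricated sample entry for illustration
entry = parse_arpdat_line("0:d0:b7:9:cd:1f 10.0.0.5 1199145600 fileserver")
print(entry["ip"], entry["hostname"])
```

This makes it easy to feed arpwatch’s data into your own inventory or reporting scripts.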
This functionality is useful for detecting:
- Man-in-the-middle attacks
- Arp spoofing/poisoning
- Session hijacking attacks
- New hosts introduced onto your network
Setup and configuration is easy. Just download and compile arpwatch from LBNL’s site, create an arpwatch user (unless you want it to run as root… which you don’t), create an empty arpwatch database (touch /home/arpwatch/arp.dat) and run it.
The command line arguments you run will differ depending on how your network is set up, so check out the man page to be safe. The following should work for most situations.
/usr/sbin/arpwatch -i eth0 -u arpwatch -f /home/arpwatch/arp.dat -n x.x.x.x/21 -e -
-i eth0 tells it to listen on eth0 only. You can run a separate instance of arpwatch for each NIC/network if you are multihomed.
-u arpwatch tells it to run as the user ‘arpwatch’ instead of root.
-f /home/arpwatch/arp.dat tells it to save the ARP database in that file instead of the default location.
-n x.x.x.x/21 tells it that an additional address range is in use on this interface. If arpwatch sees IPs outside the ranges defined on your monitoring NIC, it will report them as bogons.
-e - tells it not to email you with everything it discovers. You will want to run it this way the first time to avoid flooding your mailbox.
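Since arp.dat is plain text, you can also compare snapshots of it yourself to spot the same changed-MAC events that arpwatch logs and emails about. A minimal sketch, assuming the mac/ip/timestamp/hostname field order shown earlier (the sample entries are fabricated):

```python
def mac_changes(old_lines, new_lines):
    """Return (ip, old_mac, new_mac) for every IP whose MAC changed
    between two arp.dat snapshots."""
    def by_ip(lines):
        table = {}
        for line in lines:
            fields = line.split()
            if len(fields) >= 2:
                table[fields[1]] = fields[0]  # map ip -> mac
        return table

    old, new = by_ip(old_lines), by_ip(new_lines)
    return [(ip, old[ip], mac) for ip, mac in new.items()
            if ip in old and old[ip] != mac]

old_snapshot = ["aa:bb:cc:0:0:1 10.0.0.5 1199145600 fileserver"]
new_snapshot = ["de:ad:be:ef:0:1 10.0.0.5 1199149200 fileserver"]
print(mac_changes(old_snapshot, new_snapshot))
```

A MAC change on a stable IP is exactly the kind of event that can indicate ARP spoofing, so a periodic diff like this is a cheap extra check alongside arpwatch’s own alerting.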
If you have been living in a cave for the past few months you may not be aware of Comcast’s recent practice of “shaping” bit-torrent traffic.
Specifically, they insert RST packets into what they believe to be bit-torrent sessions and forge them to look like they came from the host at the other end of the session. For those of you not familiar with how TCP/IP works, an RST packet is normally sent to tear down an established session. If one is erroneously sent in the middle of a communication (as is the case with Comcast), your computer will get confused, drop the connection and have to re-establish it.
The primary issues with this are…
- In order to associate the RST packet with your bit-torrent session they have to forge it to make it appear as if it’s from the other host you are communicating with. This violates a number of U.S. computer crime laws.
- They do a pretty crappy job of determining what bit-torrent traffic is. A number of reports have surfaced indicating that Lotus Notes and a number of other protocols are being improperly “shaped” as a result of this.
- A large number of legitimate software packages are distributed ONLY via bit-torrent. This is often the case with open source and free software as the developers are usually unable to afford the bandwidth required to distribute their works.
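For the curious, the RST flag is just a single bit in the TCP header, which is why forging a reset is trivial for anyone sitting in the traffic path. A minimal sketch of checking that bit on a raw header (the sample header bytes are fabricated for illustration):

```python
import struct

def is_rst(tcp_header):
    """Return True if the RST flag is set in a raw 20-byte TCP header."""
    # Fields: src port, dst port, seq, ack, offset/reserved, flags, win, cksum, urg
    flags = struct.unpack("!HHLLBBHHH", tcp_header[:20])[5]
    return bool(flags & 0x04)  # RST is bit 2 of the flags byte

# Fabricated header: ports 80 -> 51234, RST flag (0x04) set
header = struct.pack("!HHLLBBHHH", 80, 51234, 1000, 2000, 5 << 4, 0x04, 8192, 0, 0)
print(is_rst(header))
```

Anyone who can see your sequence numbers can build a packet like this and kill your session, which is exactly what Comcast is doing.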
Another thing that irks me regarding Comcast’s handling of this in the media is a position often stated by their PR people and executives.
Cohen also reiterated Comcast’s position that it doesn’t block traffic. “Comcast does not, has not, and will not block any websites or online applications, including peer-to-peer services,” he said, pledging to work with the FCC to “bring more transparency for consumers regarding broadband network management.”
They don’t seem to understand that inserting an RST packet is “blocking” traffic. A number of hardware Intrusion Prevention Systems use that same method to block intrusion attempts when they are not configured “inline” and therefore cannot kill a session normally.
The folks at Consumerist (excellent site, btw) just posted a copy of the disclosure letter Geeks.com (aka computergeeks.com) sent to customers informing them that their credit card data may be compromised.
A few items that concerned me about the disclosure are…
Genica Corporation dba Geeks.com
1890 Ord Way Oceanside, CA 92056
January 4, 2008
The purpose of this letter is to notify you that Genica dba Geeks.com (“Genica”) recently discovered on December 5, 2007 that customer information, including Visa credit card information, may have been compromised. In particular, it is possible that an unauthorized person may be in possession of your name, address, telephone number, email address, credit card number, expiration date, and card verification number.
Two things immediately jump out at me in this first chunk of text. The first is the date of the letter compared to the stated date of discovery.
Being a PCI-DSS guy, I know that most merchant gateway providers require disclosure within one day of “a suspected compromise”. Granted, that is disclosure to the merchant gateway and not to customers. However, Geeks.com operates out of California, which is at the forefront of disclosure laws. In fact, the California Security Breach Information Act (SB-1386) states…
Any agency that maintains computerized data that includes personal information that the agency does not own shall notify the owner or licensee of the information of any breach of the security of the data immediately following discovery
The other troubling part was “and card verification number”. This is the CVV2, which is NEVER to be stored per PCI DSS requirement 3.2.2.
3.2.2 Do not store the card-validation code or value (three-digit or four-digit number printed on the front or back of a payment card) used to verify card-not-present transactions
I am troubled by the fact that vendors still remain clueless about the best practices and regulations that govern their actions. I am even more disturbed by the fact that, despite these regulations, and despite being in their customers’ best interests, proper safeguards are still not implemented.
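For contrast, compliant handling of card data is not complicated: PCI DSS permits displaying at most the first six and last four digits of a card number, and the CVV2 is simply never written anywhere. A minimal sketch of the masking half of that (a hypothetical helper, not from any PCI toolkit; the sample number is a standard test PAN):

```python
def mask_pan(pan):
    """Mask a card number down to first six / last four digits,
    the maximum PCI DSS allows to be displayed."""
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))
```

If Geeks.com had stored only this much, and no CVV2, this breach letter would have been a lot shorter.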
I recently posted a comment on FOSSwire.com in response to other comments condemning the author for suggesting moving SSH to a port other than 22, calling it “security through obscurity” and a worthless security measure.
I have argued this topic many times with many different people, and felt the comment bears repeating for my downgrade.org audience.
— snip —
“security through obscurity is no security at all” Says the broken record.
I believe heavily in security metrics because numbers are awfully hard to argue with.
In a university environment a machine with ssh on port 22 in my DMZ would receive an average of ~100 invalid login attempts per day (averaged over the course of 2 months).
This same machine in the same DMZ running SSH on port 51234 received an average of zero… no, not an average of zero… just zero.
This effectively eliminates all scripted attacks, worms, Trojans, bots and most uninitiated real attackers.
In fact if you run it on a very high port — say 51234 — most people won’t even find it with a port scanner.
One would have to statically define the port range as most port scanners quit far before 51234.
At that rate scanning ports 1-51234 would take an insane amount of time per host, and most attackers scan huge blocks of hosts.
At that point hopefully an IDS/IPS would pick up the port scan and make the whole thing moot.
Seriously. It’s not a foolproof security measure and I certainly wouldn’t use it as the only means of protecting SSH, but it’s an effective layer. And those same people that are so quick to spew out the “security through obscurity” cliché are also the same ones that are quick to pull out the “layered security” one.
— snip —
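To put a rough number on the scan-time point above, here is a back-of-the-envelope sketch. The per-probe timing is an assumption (a conservative sequential connect-scan estimate, not a measured figure from any particular scanner):

```python
# Assumed: ~0.2 seconds per port probed on a polite sequential connect scan.
SECONDS_PER_PORT = 0.2

def scan_hours(ports, hosts=1):
    """Estimated hours to probe `ports` ports on `hosts` hosts sequentially."""
    return ports * hosts * SECONDS_PER_PORT / 3600

# One host, all ports up to 51234:
print(round(scan_hours(51234), 1))
# A /16 worth of hosts makes the full sweep wildly impractical:
print(round(scan_hours(51234, hosts=65534)))
```

Under these assumptions a single host takes hours, and a large block of hosts takes years, which is exactly why most scripted attackers never look past the low ports.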
You can use NSM (NetScreen Security Manager) to manage your NetScreen firewalls.
You can use NSM (Network Security Monitoring) to monitor your network.
From now on you’re Bob, you’re Fred and you’re Julio… I hope you all can play nice together.
I have been working on various SIEM (Security Information and Event Management) and log retention policy related projects lately. Through these projects, and others that I did as a security consultant, I have developed a list of log categories (or log types).
Surprisingly, I have found little to no authoritative documentation that provides such a list.
I have read through various RFCs, the NIST SP 800-92 Guide to Computer Security Log Management, and a large number of other documents, and still have not found a comparable list.
Because of the lack of existing lists I wanted to post what I have come up with in hopes that it will help others seeking out the same information, or at least generate conversation and point out other resources or types that I may have missed.
- Audit Trails: logs that document application or OS changes and/or specific actions taken by a user. Also includes “object access/change” logs, output from change management systems, and system integrity logs like those Tripwire produces
- Event Logs: internal system or application events that are not specific to a user or user generated
- Traffic/Access Logs: web server hit logs containing the URL accessed, visitor IP, browser, etc.
- Filter Device Logs: allows/denies from firewalls, IPSes, ACL-enforcing routers, etc.
- Exception Logs: error logs
- Network Traces: packet captures, flow data, etc.
- Authentication Logs: login/logout/invalid logins and session tracking
- Physical Access Logs: visitor logs, biometric/badge/token door logs
- Transaction Logs: database generated
- Data Logger Output: statistical or numeric data. Data center environmental monitors, web hit counters, manufacturing equipment output data, etc.
Obviously some systems would lump data from multiple categories into one physical file. This is where a good parser or SIEM product would come into play.
These categories also only include log data that would generally be ‘computer generated’, and are to be considered top-level categories. Many different subcategories may exist under each.
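As a toy illustration of the parsing point above, here is a sketch that buckets raw log lines into some of these top-level categories using keyword patterns. The patterns and sample lines are made up for illustration and are nowhere near production quality; a real SIEM does this with per-source parsers, not generic regexes:

```python
import re

# Very rough keyword patterns mapping raw lines to top-level categories.
CATEGORY_PATTERNS = [
    ("Authentication Logs", re.compile(r"login|logout|authentication", re.I)),
    ("Filter Device Logs",  re.compile(r"\b(DENY|ALLOW|DROP)\b")),
    ("Exception Logs",      re.compile(r"\berror\b", re.I)),
]

def categorize(line):
    """Return the first matching top-level category for a raw log line."""
    for category, pattern in CATEGORY_PATTERNS:
        if pattern.search(line):
            return category
    return "Uncategorized"

print(categorize("Jan 4 10:22:01 sshd[411]: Failed login for root"))
print(categorize("fw01: DENY tcp 10.1.2.3:4431 -> 10.9.9.9:22"))
```

Even this crude approach shows why a single physical file often spans several categories, and why a decent parser matters more than the storage format.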