My current research is mainframe security as it applies to web applications. This is where the high-hanging fruit (the golden apples) lies. If you can
a) fake or bypass authentication
b) fake or bypass authorization
c) spoof logging or otherwise destroy accountability
d) interact directly or indirectly with a deeply nested service of value
e) manipulate data to violate integrity (creation, update or delete)
f) view data (read)
you are most likely to pwn the high-hanging fruit. It's actually amazing to me how LITTLE information is available on securing this stuff, and how often products marketed as "enterprise ready" and "secure" aren't fit to run a faulty bidet, let alone be left in charge of multi-trillion-dollar-a-day roles.
Then there are the dumb architectures, which often use cleartext protocols, unauthenticated transfers (often via FTP or worse), batches with no integrity and no accountability controls, and so on. It's amazing that no one in this field has taken the time to really learn how to do it properly. It is not 1969 any more. The days when the data center was guarded, and that's how the punch cards arrived and the tapes left, no longer apply.
However, there are a few protocols and common transports which need some help first. I'm going to blog about those in the near future.
3 thoughts on “Reaching for the high hanging fruit”
ACF2 and RACF are both excellent facilities for controlling access to resources residing on mainframes, as well as for auditing that access. As with everything else, the trick is the proper configuration and use of these facilities.
However, these facilities don’t address the other concerns you mentioned in this article, especially in the areas of batch data transfers that may use cleartext protocols, batches with no integrity and no accountability controls, and so on.
Having conducted a few security and compliance initiatives for two large retailers in North America, I can say this is a key area for improvement. Not only that, but the auditors are also taking note, especially for PCI.
I've seen cases where FTP was used to transfer critical information for a few reasons: either it was the only protocol available, or it was the easiest way to do things. Added to that were the skill sets and experience levels of the mainframe administrators, who may or may not have known how to secure said transfers.
This is a fascinating field.
Some things that environments can do to immediately help with this are:
1. Understand what data is sensitive in their mainframe environments. A true understanding of this data helps companies decide which measures provide the best value in mitigating risk while balancing operational efficiency;
2. Isolate the mainframe environment from the actual web processing environment, ensuring that only the bare minimum of information flow happens between the two. If possible, use a data abstraction layer between the two separated environments;
3. Understand how to secure protocols, including cleartext protocols such as FTP. Easy ways of securing FTP are FTP over TLS (FTPS) and the SSH File Transfer Protocol (SFTP). Use FTP servers that fully audit activities;
4. Consolidate the logs of all these different systems, processes and subsystems into a centralized log management solution such as a SIEM. Utilize any correlation and analysis capabilities of said solution. I’ve had great success with the RSA enVision solution for this.
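The consolidation step above works much better when the jobs emit events in a form a SIEM can parse, rather than leaving the collector to scrape job output. As a sketch only (the field names are my own illustration, not a schema required by enVision or any other SIEM), normalizing each event to one structured JSON line makes downstream correlation straightforward:

```python
import json
from datetime import datetime, timezone

def siem_event(system: str, action: str, user: str, outcome: str, **extra) -> str:
    """Render one event as a single JSON line, ready to ship to a central collector."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,    # e.g. the job, subsystem, or transfer endpoint
        "action": action,    # e.g. "file_transfer"
        "user": user,
        "outcome": outcome,  # "success" / "failure"
    }
    record.update(extra)     # any extra context, e.g. destination host
    return json.dumps(record, sort_keys=True)

# Hypothetical usage:
# siem_event("MVS1", "file_transfer", "BATCHID1", "success", dest="edi-gateway")
```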
These are some of the things companies can do that realize immediate benefits.
Just blogged about your blog. I'm also doing some research on mainframe security, especially on cases where legacy applications are being used by new apps on other platforms. I have seen some huge security holes in how people do that integration: screen-scraping systems, direct CICS sockets access, and so on. Good to see more people talking about it.