Security and the DBA – from an Oracle perspective

Why security isn’t all hackers and ransomware.


Security has hit the headlines many times since computers came to the forefront of popular culture. With the recent wave of incidents, all eyes are focused on black hat hackers, but it’s important for a DBA to understand the full extent of what security really covers.

Security is iterative and evolving – like standing on shifting sands – so having a holistic view of securing an application’s data is mandatory. Whilst the technical approaches may change rapidly, the broad areas for consideration generally remain the same.

Whilst the DBA is in a relatively privileged position, in that databases are rarely customer facing and usually have a few secure layers between the database and the user [App Servers / Web Servers / Firewalls / etc.], a good DBA should not rely on other teams to do a DBA’s due diligence for them.

Data can traditionally be thought of as “at risk” in two primary scenarios:

1)     Data in flight – this is where data is being transferred between two [or more] points.

This is where most people think security is paramount and where hackers will try to get at your data. However, there is a further point to consider: if you are using encryption or a VPN then that helps secure network-based traffic driven by application requirements, but what about the requirement to back up data? Backups are usually held locally and then periodically sent off site, but this generally requires 3rd party involvement, so you need to know how well their practices adhere to your security requirements.

Consider whether the network is being snooped – can the traffic be logged for later decryption by a motivated hacker? If so, ensure you are at the highest level of encryption your application stack can take, and work closely with your Security team to ensure any processes running on Oracle-related ports are monitored. Can you authenticate the servers requesting data as part of the configuration, and/or can a two-tier approach to authentication be introduced that helps secure your data with minimal disruption to normal service?
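
As a minimal sketch, Oracle’s native network encryption can be enforced from the server side in sqlnet.ora [the parameter names are standard; the AES256/SHA256 choices assume a reasonably recent client and server]:

    # sqlnet.ora on the database server - refuse unencrypted connections
    SQLNET.ENCRYPTION_SERVER = REQUIRED
    SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
    SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
    SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)

REQUIRED rejects unencrypted connections outright; REQUESTED is a softer setting to use while older clients are migrated.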

Also consider any one-off transfers to satisfy new or ad-hoc requirements – will those data transfers meet the required security standards? Will default settings be used for ports and the like, which is exactly where a hacker would sniff data packets? Move your configuration away from default ports and report any attempts to connect to the defaults to Security for further investigation. These simple moves away from the defaults, together with working closely with security personnel, make your services harder for an attacker to find and can vastly reduce your risk profile.
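
The listener side of that change is a one-line edit; a sketch assuming a hypothetical host dbhost01, with the database’s LOCAL_LISTENER parameter updated to match:

    # listener.ora - moved off the default port 1521
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1527))
        )
      )

    -- and in the database, so dynamic registration follows the new port:
    ALTER SYSTEM SET local_listener =
      '(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost01)(PORT=1527))' SCOPE=BOTH;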

Consider resilience requirements – does the traffic between primary and standby sites also meet the security standards, even though the data may not travel over public networks? Are the same standards in place at the standby site that are in place at the primary?

2)     Data at rest – this is where data is stored for application or resilience requirements. The data sits in data files held either in a standard OS filesystem or within a bespoke file management system [e.g. ASM]; permissions need to be tightened for the former, and access to the bespoke system via direct grants or group membership needs to be agreed for the latter.

Anyone with access can copy files off, so how will this be monitored and managed? Locking down ports and USB is a good start, but the ability to upload large amounts of data to “support” sites also needs to be properly managed.

Hardware-level encryption offers protection from someone copying data files: without the appropriate hardware keys, the thief cannot decrypt the files, rendering them unusable for data retrieval. Ensure these keys are not owned by the same account as the database owner, so that if one account is compromised you still have at least one additional layer of protection.
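
Oracle’s software analogue of this idea is Transparent Data Encryption; a minimal 12c+ sketch, assuming a keystore location already configured in sqlnet.ora, OMF in place for the data file, and a hypothetical wallet path and password [hold that password separately from the oracle OS account, per the point above]:

    ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/ORCL/wallet'
      IDENTIFIED BY "keystore_pwd";
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "keystore_pwd";
    ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "keystore_pwd" WITH BACKUP;

    -- A tablespace whose data files are unreadable without the keystore.
    CREATE TABLESPACE secure_data DATAFILE SIZE 100M
      ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);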

Backups of files are also at risk and so should be encrypted before being written to backup media. Whilst this may add a little extra time to backup runs, it will help secure you against any malicious 3rd party involvement.
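
In RMAN this is a one-off configuration; a sketch using password-mode encryption so the backups can be restored elsewhere without the wallet [the password is obviously a placeholder]:

    RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
    RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';
    RMAN> SET ENCRYPTION ON IDENTIFIED BY "backup_pwd" ONLY;
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;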

If a set of backup media goes missing, either internally or via a 3rd party, how would this be handled?

However, when you drill into the responsibilities of a DBA, you need to grasp the nettle of the following:

1)     Security of Performance

Having the best data in the world will mean nothing to the user community if it isn’t delivered within a realistic timeframe. Overall performance planning should take in repeated, regular batch runs and their impact on IO, CPU, disk space and backups.

Peak user traffic needs to be understood so that, as traffic approaches its peak, a plan is already in place to deal with the peak and its aftermath, securing an agreed minimum level of service [assuming a burstable solution isn’t part of your arsenal].
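
A rough way to profile peak concurrency, assuming you are licensed for the Diagnostics Pack [which DBA_HIST_ACTIVE_SESS_HISTORY requires]:

    -- Distinct active sessions per hour over the AWR retention window.
    SELECT TRUNC(CAST(sample_time AS DATE), 'HH24')             AS sample_hour,
           COUNT(DISTINCT session_id || ',' || session_serial#) AS active_sessions
    FROM   dba_hist_active_sess_history
    GROUP  BY TRUNC(CAST(sample_time AS DATE), 'HH24')
    ORDER  BY sample_hour;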

The performance of background tasks required to meet SLAs [data feeds, off-site clones, standbys, backups, etc.] needs to be fully understood, and any degradation considered both for day-to-day running and during extraordinary situations.

2)     Security of geographic independence

Having a brilliantly tuned database will mean nothing should your data centre go offline for whatever reason [terrorism, flood, fire, civil unrest, etc.] and you have no ability or window to reproduce your data from backups. This is where a remote geographic solution needs to be secured: a mirrored site containing data as close to real time as required, deployed as part of the production configuration. You will then be able to switch over [or fail over, depending on the circumstances] to the remote site and continue delivering data to the application via tried and tested procedures and processes.
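
A minimal Data Guard redo transport sketch, assuming a standby with the hypothetical DB_UNIQUE_NAME stby; from 12c the switchover itself can be dry-run checked before you depend on it:

    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
      SCOPE=BOTH;

    -- 12c+: verify a switchover would succeed without actually performing it.
    ALTER DATABASE SWITCHOVER TO stby VERIFY;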

You can also look separately at the performance requirements specific to the conditions that would cause you to invoke a move to the standby site, and manage those as the conditions demand, filtering access via a priority list if needed and thus guaranteeing access for specific users or servers, as sketched below.
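
One way to implement such a priority list is the Database Resource Manager; a sketch assuming a hypothetical APP_PRIORITY user who must keep working while running on the standby site:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'CRITICAL_USERS',
        comment        => 'Users guaranteed service while on the standby site');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'DR_PLAN',
        comment => 'Plan activated after a switchover/failover');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DR_PLAN',
        group_or_subplan => 'CRITICAL_USERS',
        comment          => 'First call on CPU',
        mgmt_p1          => 80);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DR_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Everything else shares what is left',
        mgmt_p2          => 100);
      -- Map the hypothetical priority user into the critical group
      -- (the user also needs DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP).
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
        value          => 'APP_PRIORITY',
        consumer_group => 'CRITICAL_USERS');
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    -- Activate with: ALTER SYSTEM SET resource_manager_plan = 'DR_PLAN';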

3)     Security of who has access to what data within a database

Which users should be able to access which data within the database should be fully thought through, and delivered via roles or profiles that restrict access accordingly.
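
A minimal role-based sketch, assuming a hypothetical APP_OWNER schema and a reporting user:

    CREATE ROLE app_read_only;
    GRANT SELECT ON app_owner.orders    TO app_read_only;
    GRANT SELECT ON app_owner.customers TO app_read_only;
    GRANT app_read_only TO report_user;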

Privileged accounts, both system and application, should be monitored, and role separation employed to protect data from these privileged users. Strong password management should be at the forefront for any privileged account, with regular enforced changes so that accounts that should have expired do not retain unlimited access.
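
Much of that password management can be enforced declaratively with a profile; a sketch with illustrative limits and a hypothetical privileged account:

    CREATE PROFILE privileged_users LIMIT
      FAILED_LOGIN_ATTEMPTS 3
      PASSWORD_LOCK_TIME    1      -- days locked after repeated failures
      PASSWORD_LIFE_TIME    90     -- enforced change every 90 days
      PASSWORD_REUSE_MAX    10
      PASSWORD_REUSE_TIME   365;

    ALTER USER app_admin PROFILE privileged_users;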

Products exist to enforce this type of role separation [Oracle Database Vault, for example] but they can introduce performance overheads, so that needs to be weighed up when looking at solutions for who sees what within the permitted user base.

The majority of data leaks are caused not by hackers but by employees and 3rd party companies who, maliciously or otherwise, haven’t thought through all the processes and procedures required to stop such leaks. See here for a recent report on a malicious admin’s attempts to disrupt their old company.

4)     Security via multiple layers of resilience

Ensuring that data is available/recoverable in the event of a mishap can be core to the “trustability” of an application, on which many a reputation can rely. Consider the multitude of ways data can be damaged and then think of the tools available to restore that data in a meaningful timeframe.

Backups or geographically remote copies/standbys can be deployed to ensure physical corruptions are overcome relatively quickly, whilst logical corruptions can be mitigated by rolling a standby back and checking it before committing to that strategy on the primary site, or by allowing a lag to exist that gives the admin a chance to stop the corruption before it reaches the standby. Having multiple standbys with different lags can satisfy SLAs whilst also allowing logical corruptions to be planned for over a varying timeframe.
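
The lag itself is a single attribute on the redo transport; a sketch for a second standby [hypothetical DB_UNIQUE_NAME stby2] with a four-hour apply delay – note the delay is only honoured when the standby is not using real-time apply:

    -- DELAY is specified in minutes; redo ships immediately but applies 4 hours later.
    ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=stby2 ASYNC DELAY=240 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby2' SCOPE=BOTH;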

Exporting data to secure locations or other databases can also provide resilience, further assisting in securing data availability.

5)     Security of responsible patching

Keeping the software patched to a recent release level helps protect and secure your data from known exploits. No one realistically expects patches to be applied “on release” from the supplier, but the test and deployment cycle should be as rapid as is expedient, to ensure your database software doesn’t allow data to leak or be compromised by malicious entities. Agree a comprehensive patching schedule and policy in advance with the application and business owners, allowing reasonable windows for patching during the year, with provision for exceptional circumstances. This gives everyone a realistic understanding of patching requirements and reduces the likelihood of patching itself becoming the source of a vulnerability.
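
For the database side of that audit trail, 12c and later record every datapatch action in a dictionary view:

    -- What has actually been applied here, and when?
    SELECT patch_id, action, status, description, action_time
    FROM   dba_registry_sqlpatch
    ORDER  BY action_time;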

6)     Security within the deployed applications

Understanding the applications deployed to your database[s] is key to securing the data and access to it. Reviewing the structures and code should be part and parcel of the DBA’s day-to-day work: understanding how the structures interact and how proposed code changes will affect the logic and performance of the application.

The DBA should understand what impact proposed objects may have on the data footprint, and whether the objects can be normalised so that a smaller footprint holds the same information.

The DBA should also understand the potential for data growth and redo/change growth, so that planning can account for these changes. Otherwise the availability of the data can be threatened by space restrictions, performance hits caused by larger-than-necessary data sets, and resilience hits caused by longer replication and backup durations.
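
A simple starting point is to snapshot segment sizes periodically and diff the results; a sketch assuming a hypothetical APP_OWNER schema [FETCH FIRST is 12c+ syntax; use ROWNUM on older releases]:

    SELECT owner, segment_name, segment_type,
           ROUND(bytes / 1024 / 1024) AS size_mb
    FROM   dba_segments
    WHERE  owner = 'APP_OWNER'
    ORDER  BY bytes DESC
    FETCH FIRST 20 ROWS ONLY;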

7)     Security of Data Quality

Without understanding and controlling the data changes within the database, data quality can drop to the point of unusability. As above, without understanding the application objects and what changes the application or batch jobs will make, you are unfortunately in a “rubbish in, rubbish out” situation.

Secure the database as you would if you were the gatekeeper to a castle, allowing entry only to those you trust – ensure proposed changes sustain the current quality of the data, and that any major application changes or releases are understood both for forward support and for any rollback position you may need to maintain.

8)     Security of key/core files

Within any software deployment there are core binaries and configuration files specific to your database, especially where encryption is involved. Identify these files, understand their purpose, and document the findings. Maintain a log of changes so you have traceability, and scan regularly to check whether any of these core components have changed. Backups of these files, along with checksum values for each, help identify what changes were made and when.
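
Many sites simply checksum these files with an OS tool, but it can also be done from inside the database; a 12c+ sketch using DBMS_CRYPTO [which needs an EXECUTE grant from SYS] against a hypothetical config directory and file:

    CREATE DIRECTORY cfg_dir AS '/u01/app/oracle/network/admin';

    -- set serveroutput on to see the result
    DECLARE
      src BFILE := BFILENAME('CFG_DIR', 'sqlnet.ora');
      buf BLOB;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(buf, TRUE);
      DBMS_LOB.OPEN(src, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADFROMFILE(buf, src, DBMS_LOB.GETLENGTH(src));
      DBMS_LOB.CLOSE(src);
      -- SHA-256 of the file contents; log this with a timestamp and compare on each scan.
      DBMS_OUTPUT.PUT_LINE(RAWTOHEX(DBMS_CRYPTO.HASH(buf, DBMS_CRYPTO.HASH_SH256)));
      DBMS_LOB.FREETEMPORARY(buf);
    END;
    /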

If available, use source control to maintain these files, allowing for ease of future support.

Notify Security if changes cannot be explained by the current support teams to ensure it’s not a stealth approach by someone with malicious intent.

9)     Security of Future Proofing

Knowing how the database is likely to change over its tenure is also core to its security. If you hit space, performance or resilience issues, the availability of the data drops and support requirements rise. Understand the application’s footprint across IOPS, disk, CPU and memory, and how projected growth will impact day-to-day support. Redo growth will increase backup duration, prolonging resource requirements until they may eventually straddle operational windows. Plan for these pinch points, monitor and understand the metrics the database presents and, most importantly, know how long additional resources will take to deploy when needed. Planning for these eventual pain points will help minimise any disruption they cause.
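
A quick proxy for redo/change growth is the log switch rate over time:

    -- Log switches per day; a rising trend flags growing change volume
    -- (and therefore growing backup and standby-apply workload).
    SELECT TRUNC(first_time) AS day, COUNT(*) AS log_switches
    FROM   v$log_history
    GROUP  BY TRUNC(first_time)
    ORDER  BY day;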

Understand also the growth in application requirements and how that may impact available resources – if a data store is initially planned for 2 or 3 feeds but over time it becomes apparent it will have to handle 5x, 10x or more, understand how that will impact available resources and resilience considerations.

10)  Security of Costs

Whilst most DBAs don’t have full control of budgets, knowing the potential costs over the projected lifetime of the database helps ensure the availability of the resources and personnel required to maintain the availability of the data. If these costs can realistically be projected and planned for, then budgeting should bring minimal risk to the database and the data within it.

11)  Security via controlled Social Media

A lot of DBAs [and data architects] enjoy sharing information about tasks they’ve performed and new ideas they’ve come across in their day-to-day role. Great care should be taken that information in social media or blogs, whether direct or indirect, doesn’t reveal more than the author intended. Software versions, patch levels, password structures, actual passwords, timelines, port numbers and so on can reveal to non-permitted users how a company [usually identifiable from the DBA’s social media accounts] runs its software releases.

Social media and blogs are a great resource for a DBA to resolve issues and learn, but they need to be handled with care. If in doubt, pass the proposed post/tweet/update to your Security team to see whether they have any issue with what you’d like to reveal.

I’ve worked with some fantastic DBAs, and one of the better ones blogged prolifically without understanding just how much unintended information he revealed about his client. Within a fortnight of meeting him, I was able to explain how much I knew about his client without ever having been on their systems, AND was able to point out vulnerabilities in their setup – which, if obvious to me, would have been gold dust to a black hat hacker.


Conclusion

Obviously this isn’t an exhaustive list by any means, but it should give a DBA reason to pause when asked about securing a database, and ensure at least the above has been considered. There are many competing factors and drivers when managing a database, but focusing overly on external threats can distract from other, just as real, threats that can derail an application just as easily.

By no means am I suggesting that the external threat from a dedicated hacker [or collective] should be trivialised, but it should be put in the context of the additional threats that can just as easily disrupt the Holy Grail of “uninterrupted, appropriate data access”.

Whilst I would also hope that lessons have now been learned from the recent spate of high-profile incidents, I am too much of a cynic to believe they will have been enough to increase people’s security awareness and change their security practices. “Today’s newspaper is tomorrow’s chip wrapper” and unfortunately these types of incidents are cyclical. My advice to anyone working in the DBA arena is to understand the holistic threats to your data, whilst keeping an eye on the ever-changing technical approaches.
