Auditing Oracle database stopping and starting using the ELK stack (Frits Hoogland Weblog)

This blog post is about two things: one, how you can monitor who is bringing your database up and down (there is a twist at the end!), and two, how you can very conveniently do that with aggregated logs in a browser using a tool called 'Kibana', which is the K in ELK.

– Elasticsearch is an open source search engine based on Apache Lucene, which provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
– Logstash is a fully configurable open source data processing pipeline that can receive data from multiple sources simultaneously, transform it and output it based on the output plugin, which is the elasticsearch plugin in this blogpost but could be anything from STDOUT, a unix pipe, a file, a file in CSV, HTTP, email, IRC, Jira, graphite, kafka, mongodb, nagios, S3, SolR, … really whatever you want.

Installing the ELK stack in a basic way is easy. In this blogpost I will install everything on the same host, everything being the ELK stack and an Oracle database installation.


In reality you should have a log gatherer on every host (called 'filebeat') and a dedicated host which runs the rest of the stack (logstash, elasticsearch and kibana). The below install actions were executed on a Linux 64 bit host running Oracle Linux 6.8. In order to make the installation really easy, I use the yum repository of the elastic company; this is how to set that up (all done as root, '#' indicates root):

If you see the 'Configuration OK' message, it means logstash could interpret the configuration files. It does not mean it will all work as desired; there could be runtime issues. Now let's start logstash. Logstash uses upstart (meaning a startup script in /etc/init) instead of the legacy startup mechanism using the chkconfig and service utilities.

The last part of the data pipeline is 'filebeat'. There are and could be multiple input products; in this blogpost I use 'filebeat', which keeps track of logfiles.

– /var/log/secure: this is the default linux logfile which contains all kinds of authentication messages, as defined in /etc/rsyslog.conf (authpriv.* /var/log/secure).
– /u01/app/oracle/admin/*/adump/*.aud: this is the default place where the oracle database stores its audit files. These audit files provide what is called 'mandatory auditing', and include at least connections to the instance with administrator privilege, database startup and database shutdown. The default is a normal text based logfile; it could be set to XML.
– /var/log/audit/audit.log: this is the logfile of the linux kernel based audit facility. This is actually a lesser known hidden gem in Linux, and provides audit information from the Linux kernel.

These files need to be configured in filebeat, in the file /etc/filebeat/filebeat.yml.
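The repository setup referred to above can be sketched as a yum repository file; the version in the paths below (5.x) is an assumption, check the elastic documentation for the version you want to install:

```ini
# /etc/yum.repos.d/elastic.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
```

With the repository in place, the stack installs with a plain yum install of the logstash, elasticsearch, kibana and filebeat packages.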
As the extension of the file indicates, this is a file organised in YAML syntax. The best way to configure the file is to move the original file aside, and create your own file with your desired configuration. First of all we add the output, which is logstash in our case. Please mind the default configuration of filebeat is direct output to elasticsearch, which means we don't have an option to enrich the data!

One thing to notice is that a type is set for each file (which is really just a name for the file filebeat monitors), which makes it possible to find data from these specific files later on. Now the Oracle audit file: this looks a bit more complicated. The reason for the complication is the multiline specification. An Oracle database audit file contains a timestamp, after which the audit data is written; this is a single record, written over multiple lines. The important things at this time: the 'pattern' keyword specifies the timestamp, and all the data following a matched timestamp needs to be processed together with it as one record. 'negate: true' means that anything that does not fit the pattern needs to be added to this piece of data, 'match: after' means that this is added after the pattern is matched.

Now that we've set the entire pipeline up, a next thing to do is to configure logstash to enrich the data. Here's how it's done for the Oracle database audit file:

match => { "message" => "ACTION :\[[0-9]*\] '(?<ora_audit_action>.*)'.*DATABASE USER:\[[0-9]*\] '(?<ora_audit_dbuser>.*)'.*PRIVILEGE :\[[0-9]*\] '(?<ora_audit_priv>.*)'.*CLIENT USER:\[[0-9]*\] '(?<ora_audit_osuser>.*)'.*CLIENT TERMINAL:\[[0-9]*\] '(?<ora_audit_term>.*)'.*STATUS:\[[0-9]*\] '(?<ora_audit_status>.*)'.*DBID:\[[0-9]*\] '(?<ora_audit_dbid>.*)'" }

It's beyond the scope of this article to go through every detail, but as you can see we apply a filter.
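The filebeat configuration described above can be sketched as follows. This is a minimal filebeat 5.x style filebeat.yml; the exact keys differ per filebeat version, and the timestamp regular expression is an illustration of the multiline pattern, not the literal one from the original post:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure
  document_type: secure
- input_type: log
  paths:
    - /u01/app/oracle/admin/*/adump/*.aud
  document_type: oracle_audit
  multiline:
    # an Oracle audit record starts with a timestamp such as:
    # Tue Nov  1 06:25:17 2016 +00:00
    pattern: '^[A-Z][a-z]{2} [A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} [0-9]{4}'
    negate: true
    match: after
- input_type: log
  paths:
    - /var/log/audit/audit.log
  document_type: audit

# send to logstash instead of the default elasticsearch output,
# so the data can be enriched before indexing
output.logstash:
  hosts: ["localhost:5044"]
```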
Everything in this filter takes place for the type "oracle_audit", which is set by filebeat. The next thing we encounter a couple of times is 'grok's. The term grok comes from the Robert Heinlein science-fiction novel 'Stranger in a Strange Land'. Essentially, a grok with logstash means you specify a pattern, for which the actions are applied if the specified pattern matches.

The first grok looks for the date pattern, for which extra fields are created (M, d, h, m, s; the name after the colon) from the field 'message', and adds a tag (a word in the tags field for the record that is created). The second grok also looks in the 'message' field, and specifies text (ACTION for example), some other characters and then a construction like (?<fieldname>.*). This is a custom pattern, for which the field name to be created is in between < and > and is followed by a pattern. This grok line (including all the patterns) creates fields for all the Oracle audit fields in the audit file! The next grok picks up the PID from the filename of the logfile (the filename is in a field 'source'), and the two mutates create and destroy a new field 'ts' which is used for the date: 'date' specifies that the timestamp of the data flowing through logstash is filled with the date and time in the ts field, instead of the time filebeat picked up the data and sent it through logstash. Please mind that if you add (or change) configuration in a logstash configuration file, you need to restart logstash.

We are all set now! Last words on this configuration: kibana and elasticsearch by default do not require authentication. Do not expose the ports of these products to the internet! I am using a tunnel to the kibana website, which runs on port 5601.
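Put together, the logstash filter section described above looks roughly like this. This is a sketch: the second grok pattern is abbreviated (the full pattern is shown earlier), and the exact date format string depends on how the timestamp fields are assembled:

```
filter {
  if [type] == "oracle_audit" {
    # 1st grok: extract the timestamp components from the start of the record
    grok {
      match => { "message" => "^%{DAY} %{MONTH:M} +%{MONTHDAY:d} %{HOUR:h}:%{MINUTE:m}:%{SECOND:s} %{YEAR:y}" }
      add_tag => [ "oracle_audit" ]
    }
    # 2nd grok: the Oracle audit fields (abbreviated here)
    grok {
      match => { "message" => "ACTION :\[[0-9]*\] '(?<ora_audit_action>.*)'.*CLIENT TERMINAL:\[[0-9]*\] '(?<ora_audit_term>.*)'" }
    }
    # 3rd grok: the server process PID is part of the audit file name,
    # which filebeat puts in the 'source' field
    grok {
      match => { "source" => "_ora_(?<ora_audit_derived_pid>[0-9]+)_" }
    }
    # build a 'ts' field from the extracted components, use it as the
    # event timestamp, then remove it again
    mutate { add_field => { "ts" => "%{y} %{M} %{d} %{h}:%{m}:%{s}" } }
    date { match => [ "ts", "yyyy MMM dd HH:mm:ss" ] }
    mutate { remove_field => [ "ts" ] }
  }
}
```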
It's very easy to ssh into the machine running the ELK stack using ssh user@machine -L 5601:localhost:5601, which creates a port on localhost:5601 on my machine at home (-L = local), for which communication is tunnelled to localhost:5601 on the remote machine (the localhost in the ssh line example is an address as seen from the machine you ssh into; this could also be another server which is only visible from the machine you ssh into).

You get this screen when you go to kibana at port 5601. Enter 'type: secure' in the search bar to display data of the type secure (which is what is set with document_type: secure in filebeat.yml), and log in to the machine where filebeat is watching the /var/log/secure file. As you can see, you get two lines from the ssh daemon: one indicating something's done with pam (pam_unix), and one line which indicates it authenticated via a public key for user ops from an ip address (which is anonymised) at port 39282 via ssh2.

With a lot of cloud providers you get a user which has public key authentication set up (which you saw above), after which you need to sudo to, for example, the oracle user. In a lot of companies, you get a personalised account to log on to servers, after which you need to sudo to oracle. In both cases you need to use sudo to become the user that you need to administer, for example oracle. This is what sudo generates in the /var/log/secure file:

Now that I have become oracle using sudo, I set the environment of my database using oraenv and started up a database. Now go over to kibana, and issue a search for 'type: oracle_audit'. This is what that looks like: if you look at what the audit record provides, the only things that provide something useful for the purpose of investigating who stopped or started a database are ACTION and CLIENT TERMINAL (I assume the database is stopped and started by the 'oracle' user). Now change the 'selected fields' in kibana and add the (dynamically created!) fields ora_audit_action, ora_audit_term and ora_audit_derived_pid, and remove message. The important thing to look for here is the ora_audit_action 'startup'; then look at the ora_audit_derived_pid, and two rows down we see terminal 'pts/1' was the terminal on which this was entered.

Now that we know the terminal, we can add in searching the message field for the secure type. Enter 'type: oracle_audit OR (type: secure AND message: "*pts/1*")' in the search bar.

Okay, this works, but it's far from perfect. In fact, it only works if the username of the session doing the sudo is the only session with that username; otherwise, if there is more than one session, it can be any of these sessions doing the sudo, since there is nothing more than the username.
This also means that if there is a direct logon to the oracle user, there is no way to identify a session with a TTY, and thus database startup and shutdown are completely anonymous; there's no way to link a specific session to that action beyond probably the oracle user and a TTY, which can not be linked to anything like, for example, an ip address.

Is there a better way? Yes! We can also use the linux, kernel based, auditing service, which is on by default. This service keeps a log file at /var/log/audit/audit.log, and gives way more granular auditing events than the /var/log/secure log. Linux audit generates a lot of diverse types of rows, so it's actually not easy to grok them all, but in order to understand which session executed a startup or shutdown, the only audit row that is important for this specific use case is an audit type called 'CRED_ACQ'. The grok for this type looks like this:

match => { "message" => "type=%{WORD:audit_type} msg=audit\(%{NUMBER:audit_epoch}:%{NUMBER:audit_counter}\): pid=%{NUMBER:audit_pid} uid=%{NUMBER:audit_uid} auid=%{NUMBER:audit_auid} ses=%{NUMBER:audit_ses} msg='op=%{NOTSPACE:audit_op} ((acct=\"%{GREEDYDATA:audit_acct}\")|(id=%{NUMBER:audit_id})|acct=%{BASE16NUM:audit_acct}) exe=\"%{GREEDYDATA:audit_exe}\" hostname=%{NOTSPACE:audit_hostname} addr=%{NOTSPACE:audit_addr} terminal=%{NOTSPACE:audit_terminal} res=%{NOTSPACE:audit_res}'" }

This grok matches the CRED_ACQ audit type, which we will use to trace back the session via the audit log. Another nicety of this logstash configuration is that the audit record times events using an epoch timestamp, which logstash can translate back to a human readable timestamp.
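The epoch-to-timestamp translation mentioned above can be done with the logstash date filter, which has a built-in UNIX pattern for epoch seconds. A sketch, using the audit_epoch field extracted by the grok:

```
date {
  # audit_epoch holds epoch seconds from the audit record;
  # the UNIX pattern makes logstash use it as the event's @timestamp
  match => [ "audit_epoch", "UNIX" ]
}
```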
Once this is in place, log in again and use sudo to switch to oracle (or log in directly as oracle, it doesn't really matter that much now!), and search in kibana for 'type: oracle_audit OR (type: audit AND audit_type: CRED_ACQ)'. Now get the relevant fields: remove 'message', and add audit_hostname, audit_acct, audit_ses, audit_terminal, ora_audit_term, ora_audit_derived_pid and ora_audit_action. This probably returns a lot of rows; now scroll ("back in time"), search for the startup or shutdown command, and then follow the trail: startup points to (oracle server process) PID 17748, which was instantiated by a session using pts/1 (two rows down), and one row further down we see the audit information which shows pts/1, which is connected to audit_ses 4230. The audit_ses number is a number that sticks with a session, regardless of using sudo. If you follow down number 4230, you see multiple rows of audit_ses 4230, some of them with root, which is typical for sudo switching from one user to another. The final row shows the user logging in with its ip address. In other words: using the linux kernel audit facility, you can get all available information!

Whenever you use RAC, or ASM, or both, or you are using the grid infra single instance as a framework to track your listener(s) and database(s) and start and stop them automatically, you can still stop and start an instance directly using sqlplus, but in most cases you will be using the grid infrastructure crsctl or srvctl commands. When the grid infrastructure crsctl and srvctl commands are used, this is what the Oracle database audit information looks like: as you can see, because the clusterware brought the database down, there is no terminal associated with the shutdown.
So the above mentioned way of first searching for startup and shutdown in the oracle audit information, finding the associated terminal, and then tracing it through the audit records can NOT be used whenever the Oracle clusterware is used, because a grid infrastructure daemon is actually stopping and starting the database, and the grid infrastructure does not keep any information (that I am aware of) about which client invoked a grid infrastructure command. I guess a lot of auditors will be very unhappy about this.

Now the good news: you can solve this issue very easily. The downside is that it requires additional configuration of the linux auditing. The solution is to put an 'execution watch' on srvctl and crsctl; this is how this is done:

In order to validate that this works, I started a database using srvctl, and searched for '(type: oracle_audit AND ora_audit_action: STARTUP) OR (type: audit AND message: key=\"oracrs\")'. As you can see, there's the Oracle database record indicating the startup of the database, and a little while back in time there's the linux audit row indicating the execution of the srvctl executable. Once you are at that point, you can use the earlier mentioned way of using the audit_ses number to trace the session execution, including sudo and ip address at logon time.
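The execution watch referred to above is set with linux audit watch rules; a sketch, in which the grid infrastructure home path is an assumption (adjust it to your installation), and the key 'oracrs' is what the kibana search in this post filters on:

```
# watch execution (-p x) of the grid infrastructure srvctl and crsctl
# binaries, and tag the resulting audit rows with key 'oracrs'
-w /u01/app/12.1.0.2/grid/bin/srvctl -p x -k oracrs
-w /u01/app/12.1.0.2/grid/bin/crsctl -p x -k oracrs
```

These lines can be loaded at runtime by passing them to the auditctl utility (as root), or placed in /etc/audit/audit.rules to persist across reboots.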
