Wednesday, May 23, 2007

Dataguard, documentation/scripts for non-DBAs during failover

We're looking at implementing Dataguard as part of an implementation of Documentum (a document management system from EMC) and I have been asked to look at producing documentation and scripts for non-DBA users to use during a failover. The actual failover of the database itself will be handled by our DBAs; this is for the sys admins, network admins, application admins &c who may need to do things during a failover.

I'm currently going through the Dataguard concepts guide and have found some other documentation on OTN, but I was hoping that someone with more knowledge of Dataguard could point me towards any documentation that might help with the non-Oracle side of failover. If anyone has prepared documentation or scripts like this themselves and is willing to share them so that I can adapt them to our environment, I'd be very grateful.

Here is the background:

We have a pair of IBM p590 servers running AIX 5, currently sitting in different parts of the same datacentre, although (hopefully) one will be moving to a different site in the near future. Each will run both primary and standby environments, with each server hosting the standbys for the primaries running on the other. Each environment will run in a virtual server.

The Documentum service runs in an N-Tier configuration:

  • Presentation layer
  • Application/business logic layer
  • Metadata/storage layer

The presentation layer is either a fat client running on local PCs or a web front end running in Tomcat that the users can access via a browser. The application/business logic layer is the Documentum application running in Oracle Application Server 10g. The metadata/storage layer consists of an Oracle 10g database and filesystem storage on IBM storage devices.

Additionally, at the application/business logic layer there are interfaces to SAP, provided by Documentum Services for SAP running on a separate Windows 2003 server, plus scanning stations and servers; these all connect to the Documentum application.

Users do not access the database directly, nor do any other services; all access is via the application.

When a document is added to the repository (via the presentation layer, Services for SAP or scanning) it is rendered to PDF and added to a filestore on the storage (i.e. the file is saved to a directory), and metadata about the document (title, location, categories, keywords &c) is stored in the Oracle database.

The filestore will be synced from primary to standby by either IBM FlashCopy or IBM Metro Mirror; the metadata will be synced by Oracle Dataguard. Due to the way Documentum handles inconsistency between the metadata and the filestore (i.e. documents in one that are not in the other), the metadata sync will always lag behind the filestore sync: if there is metadata for a non-existent document then the metadata can easily be found and deleted, but if there is no metadata for an existing document it is a much bigger job to find that document. An analogy would be looking up words from a book's index to find them in the book versus checking each word in the book to see whether it is in the index.

Edited to add (as a result of comments on Experts Exchange):

The failover of the database itself will be handled by the DBA team. I have been asked to produce documentation for any changes that need to be made outside the database, but all the documentation I can find ignores anything outside the database. Clearly there will need to be activities outside the database when a failover or switchover takes place; the obvious one that comes to mind is pointing the clients to the new server, and I haven't been able to find anything about that.

Possible solutions that come to mind for pointing the clients to the new server are:
  • Edit the TNSNAMES.ORA files. This would be possible in this setup as we only have a few boxes (application servers, scanning servers and Services for SAP servers) that connect to these databases. If the number of boxes increases significantly it might no longer be practical; in any case I prefer to avoid manual processes, as I know how easy it is to miss something when you're under pressure.
  • Use a unique hostname for each database and have a DNS entry (so if database ORCL1 is on server bigprod, IP address 10.10.10.10, then we create a DNS entry orcl1 which resolves to 10.10.10.10 and use that hostname in the Net8 settings for the ORCL1 service); when failover happens we just edit the DNS entry to point to the IP address of the standby server. This was proposed in another project and may get implemented. The downside is that it means involving a directories person in a failover/switchover.
  • Use OID for database name resolution. Probably the ideal solution, but it also means implementing, and paying for, yet another technology. On the other hand we do have a plan to implement OID at some point in the future, so we will probably use this eventually.
  • Specify multiple ADDRESSes in the TNSNAMES.ORA file so that if a client can't reach the primary it will try the standby; if the failover hasn't happened yet (primary is down, standby hasn't changed to primary yet) then it will have to time out. As we're planning regular switchovers we'd have to make sure it times out quickly for those times when the first server in the list is actually the standby. A hypothetical entry illustrating this option (and the DNS alias option above) is sketched below.
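
Something like the following is what I have in mind for the multiple-address option. This is only a sketch: the hostnames orcl1-prod and orcl1-stby, the port and the service name are made up for illustration, not our real settings.

    # Hypothetical entry - hostnames, port and service name are assumptions
    ORCL1 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (FAILOVER = on)
          (LOAD_BALANCE = off)
          (ADDRESS = (PROTOCOL = TCP)(HOST = orcl1-prod)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = orcl1-stby)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = ORCL1))
      )

With FAILOVER on and LOAD_BALANCE off the addresses are tried in order, always starting with the first. For the DNS option we would instead have a single ADDRESS with HOST = orcl1 and repoint the DNS record at failover time. How quickly a client gives up on a dead first address is controlled at the SQL*Net level (for example, depending on client version, something like TCP.CONNECT_TIMEOUT in SQLNET.ORA), which is where the switchover timing concern above would have to be addressed.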




Friday, February 16, 2007

Where should data be validated?

This just came up on the mailing list for my local Linux Users Group following last night's meeting (which I didn't attend); the original mail and my response are below:


> In the pub, there was an interesting conversation going on regarding
> validation of data in databases.
>
> Excuse the omissions, as I said, it was overheard
>
> Someone brought up the point that in databaseX If say, you have a
> varchar field set to a limit of 10, and put 26 chars of data into it
> databaseX silently truncates it.
>
> So my question is, in your opinion, should it be up to the front end or
> the database to do this kind of data validation?
>

I'm a Database Administrator, mostly working with Oracle. The reverse of this problem (data validated in the client but not in the database) is something I come across a lot. Most RDBMS/ODBMS/ORDBMS, certainly any that can claim to be enterprise class, have functionality to implement data validation (key constraints, check constraints, strong datatyping, triggers &c). Unfortunately the majority of software vendors, in my experience, seem unwilling to use this functionality in their products. The most common excuse is 'database independence', which in reality means that their app, instead of working well with one *DBMS, will work badly with three or four. They want all the data validation to happen in their application so that if the business rules change you have to pay them to update the application, rather than just getting your DBA to change the rules in the database. I've met a couple of vendors who have insisted that lists of values (where the user picks a value from a list) have to be hard coded in the application rather than generated from a lookup table, so every time you need a new value you have to pay them to write a patch.

To get back to the original question: I'd say that validation _must_ be done in the database layer, _may_ be done in the application layer and _could_ be done in the client layer (I differentiate between application layer and client layer as, in my world of work, N-Tier is very common). When the database receives invalid data it should raise an error/exception which it then propagates to the application layer for handling. The database may carry out some action as a result of that error/exception (logging the error plus the data that caused it is often a good idea) but it should pass it back to the application; what the application then does with it is up to the application, though usually you'd want it passed back to the user, translated into a more human-readable message if necessary.
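
As a concrete (if contrived) illustration of validation in the database layer, here is a minimal sketch; the table, columns and allowed values are all made up for the example.

    -- A business rule enforced by the database itself
    CREATE TABLE orders (
      order_id    NUMBER        PRIMARY KEY,
      customer_id NUMBER        NOT NULL,
      status      VARCHAR2(10)  NOT NULL
        CONSTRAINT orders_status_chk
        CHECK (status IN ('NEW', 'APPROVED', 'SHIPPED'))
    );

    -- This insert breaks the rule, so the database rejects it with
    -- ORA-02290 (check constraint violated); that exception propagates
    -- up to the application layer, which can log it and/or translate it
    -- into a friendlier message for the user.
    INSERT INTO orders (order_id, customer_id, status)
    VALUES (1, 42, 'PENDING');

The same thinking applies to the lists of values mentioned above: a foreign key against a lookup table gives you the validation, and a new value is then just a new row rather than a patch.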


I remember attending a very useful presentation at the UKOUG Conference a few years ago about implementing business rules via constraints in the database.

Sunday, February 11, 2007

Preventing record deletion

This entry is partly an aide memoire for me, partly to try to get something that has been keeping me awake for the past hour or so out of my brain so I can sleep and partly in the hope that someone can suggest a way forward.

A quick bit of background. Until April 06 most of our major systems were looked after by an external Facilities Management company. In April 06 IT was kind of outsourced to a joint venture company, support of the systems transferred to that company and a couple of the DBAs transferred in as well under TUPE. A recurring problem on one of the systems is that the users keep deleting records which, by law, they must not delete, so we (actually one of the DBAs who transferred in and is responsible for that system) have to restore from backup to another machine and copy the deleted records back over (she's tried using LogMiner but finds it too unwieldy). The core problem is that the application is faulty and lets the users delete records when it shouldn't.

The application is closed source, from an external vendor, so we cannot change it ourselves to prevent the users from deleting the records. Due to the political environment we've got zero chance of getting the application replaced with one that does stop them (the people with the authority to replace the app don't have responsibility for the costs of continually putting right the problems it causes, and vice versa).

The application logs into the database as the schema owner and individual user authentication is handled within the app, so we can't just revoke the delete privilege from the users.

It occurred to me just now (this is what has been keeping me awake) that we might be able to fix it using triggers.

First I thought of a before delete...for each row trigger to archive the rows to another table before they are deleted, so at least restoring them is just a case of an insert statement. Then I thought that the times when we might legitimately need to delete a record are massively outnumbered by the times we want to prevent a record being deleted, so preventing deletion would be much better. Now I'm thinking we need an instead of delete trigger so that if someone attempts a delete it won't let them. According to the documentation, instead of triggers can only be applied to views, so we might have to rename the table and replace it with an updateable view with the original table name; I'm not sure how that would impact the support/maintenance of the app, so we might not be able to do it. Assuming we can work around that problem, the next issue is what we do when the trigger fires. Do we do nothing, log that someone tried to delete a record, or raise an exception? A rough sketch of the trigger idea is below.
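
For what it's worth, here is a minimal sketch of the trigger approach. The table name is invented (I obviously can't show the real schema), and it's worth noting that a plain before delete trigger that raises an error will cause the delete itself to fail, so the rename-the-table-and-use-a-view route may not be needed just to block deletions.

    -- Hypothetical table name; a row-level trigger that refuses deletes
    CREATE OR REPLACE TRIGGER case_records_block_delete
      BEFORE DELETE ON case_records
      FOR EACH ROW
    BEGIN
      -- Raising an error here makes the whole DELETE statement fail,
      -- which both prevents the deletion and gives the application
      -- (and the user) something to report.
      RAISE_APPLICATION_ERROR(-20001,
        'Deletion of records from this table is not permitted');
    END;
    /

If we also wanted to record who tried to delete what, the logging insert would have to be done from an autonomous transaction, because the error raised above rolls back any work the trigger itself has done.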

Any thoughts and/or suggestions gratefully received.

Wednesday, January 10, 2007

Remote automated install of Oracle 10g client

We have a situation where we need to rationalise the range of Oracle clients (i.e. the bit that sits between the app and the network stack) we have installed. We currently have versions from 7.x through to 10.2 installed across approximately 12,000 desktops (spread across various locations in an area of around 26 square miles) running various apps on Windows versions from NT4 to XP (mostly Windows 2000). We are also introducing a standard TNSNAMES.ORA file; this is the impetus to standardise on a single client version, as different locations on disk and formats of the TNSNAMES.ORA file would otherwise make it pretty much impossible to manage the rollout of the file.

With the number of desktops to be updated and the area they are spread over, it would not be possible to do this by visiting every desktop, so management are proposing automated installation of the Oracle client through scripts run at logon, requiring no user interaction. Has anyone ever tried something like this? Are there any lessons learned you would be willing to share?

I haven't been able to find any references to this sort of work other than how to do a silent install using a response file. I have looked at the 10g Instant Client, which looks like it might be better suited to our needs as we just need to copy the files onto the desktop and set the path variable. Does anyone have any experience of using this client that they'd be willing to share? I have used it in test and it seemed fine; I'd be grateful for any comments or advice.
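
In case it helps frame the question, this is roughly the kind of logon script I have in mind for the Instant Client route. It's only a sketch under assumed names: the share, target directory and file names are made up, and the machine-wide PATH (and TNS_ADMIN) change isn't shown.

    @echo off
    rem Hypothetical logon-script sketch for deploying the 10g Instant Client.
    rem Share and directory names are assumptions, not our real ones.

    set CLIENT_DIR=C:\oracle\instantclient_10_2
    set SOURCE=\\deployserver\oraclient\instantclient_10_2

    rem Only copy the client files if they are not already there
    if not exist "%CLIENT_DIR%\oci.dll" xcopy "%SOURCE%" "%CLIENT_DIR%\" /E /I /Y /Q

    rem Refresh the standard TNSNAMES.ORA on every logon
    copy /Y "\\deployserver\oraclient\tnsnames.ora" "%CLIENT_DIR%\tnsnames.ora"

    rem Adding CLIENT_DIR to the system PATH (and pointing TNS_ADMIN at it)
    rem would need to be done machine-wide, e.g. via group policy or a one-off
    rem registry change; that step is deliberately left out here.

The full client would presumably need the OUI silent install with a response file instead, which is where I'd particularly appreciate any lessons learned.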

I suggested that we look at moving applications to Citrix or other thin client solutions, thus negating the need to have the Oracle client on the desktop, but was told that this would be too expensive to consider right now.