The day itself was pretty good, lots of good talks (more on that later), but I ran into a couple of issues in the evening. The first was when I went into the Handmade Hamburger just across the canal from the ICC. I was alone and the waitress/Maitre d' said they didn't have any free tables suitable for one person (looking around they had several free tables, just they were for 2, 4 or more people), so after expressing my dissatisfaction with the poor quality of service I went across Broad Street to Walkabout and had a Kangaroo Burger. The second issue was that I had hoped to go to the bloggers' meetup at All Bar One. Although I blog about Oracle I'm not sure I qualify as a true Oracle Blogger, so when everyone disappeared upstairs to a private party (UKOUG had laid on some sort of 'Thank You' event for the volunteers) I didn't follow. I know I'm probably worrying over nothing, but I didn't want to risk my biggest memory of this year's conference being getting thrown out of an event for gate crashing. Hopefully by next year I'll have sorted out what the criteria are for access to such gatherings, and will fulfil them.
Getting back to the day, the talks.
A number of the talks I attended dealt with bitmap indexes to a greater or lesser degree. A bitmap index (for those who don't yet know and don't want to wade through the guides) is one where there is a series of bits (the bitmap) for each possible value of a field, with a bit for each record in the table; so if there are 4 possible values and 10 records there will be 4 bitmaps of 10 bits each. (Apparently bitmap indexes compress runs of repeating identical values, so the bitmaps would actually be smaller than that, but I've only heard that from one source and haven't been able to confirm it yet; anyhow, from a logical perspective there's still a bit for each record in each bitmap.)
For example suppose you have a people table which contains a gender column. The gender column can have 4 possible values (actually, depending on your (political/social) environment, it can have many more than that (at least 30 in some systems I know of) but that's a blog entry for another time): "Don't Know", "Refused to Divulge", "Female" and "Male". The difference between "Don't Know" and "Refused to Divulge" is that in the case of "Don't Know" we haven't asked, and for "Refused to Divulge" we asked but they wouldn't say; due to the nature of data privacy laws in the UK we cannot, except in very specific circumstances, compel an answer or put in what we believe the answer to be. I'll use a 1-letter code for each value in this example (D R F M); in the lines below the letter at the start of each line is the field value and the string of digits is the value of the bits in the bitmap:
D 00000000000000000100
R 00001000001000000001
F 11100001100011111010
M 00010110010100000000
As you can see from this we have 1 record with "Don't Know", 3 with "Refused to Divulge", 11 with "Female" and 5 with "Male" in the indexed field; each of the 20 records has its bit set in exactly one of the four bitmaps.
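To make that a bit more concrete, here's roughly what the example looks like in SQL; the table, column and index names are just made up for illustration (and I've thrown in the other columns I'll mention further down):

CREATE TABLE people (
  person_id         NUMBER PRIMARY KEY,
  gender            VARCHAR2(1),   -- 'D', 'R', 'F' or 'M' as above
  marital_status    VARCHAR2(20),
  ethnicity         VARCHAR2(30),
  employment_status VARCHAR2(30)
);

-- One bitmap per distinct gender value, with (logically) one bit per row
CREATE BITMAP INDEX people_gender_bix ON people (gender);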
An interesting point from the talks was that each presenter seemed to have a slightly different view on the criteria for when you use a bitmap index. They mostly agreed but there were slight variations in the detail and not all of them listed all the same criteria. Collating the criteria given ('bitwise and' :-)) gives something like:
- Small number of possible values. The key question here is what constitutes a small number. 4 is a small number, but if you only have 5 records you might question whether the overhead of the bitmap is worth it; you might even question whether it's worth indexing the table at all (it might be, I recall attending a talk (by Connor McDonald IIRC) on the advantages of indexing very small tables). 100 isn't a particularly small number, but if you've got 10,000 records then it is in comparison. Assuming the compression of repeating identical values does happen then, given that statistically for each of those 100 bitmaps 99% of the bits will be zero, you should get a reasonable compression rate, as the worst case is sequences of a 1 followed by 99 zeros. If the distribution is highly skewed (say there are 100 different possible values but a few values account for most of the records) then your compression rate might be far better. Like many things it's probably a case of test it and see (a rough way of doing that is sketched after this list); I'll hold off judgement till I've had a chance to test it.
- The field doesn't tend to get updated and the records don't tend to get deleted. This applies to any type of index really: if you can, you want to avoid indexing fields that get updated (i.e. changed) or whose records get deleted, as this causes extra IO (updating the index when the field is updated or the record is deleted) and leaves your index holding 'dead keys' (entries that used to point to a record but don't any more because the indexed field was updated or the record deleted), which waste space and thus can slow down IO on the index (Oracle has to read blocks which contain dead space). Sometime I really must look into how a bitmap index handles dead keys. If anyone else does please comment here with a link to your results.
- Field tends to be used a lot in where clauses. Again, this applies to any sort of index. Indexes generally slow down updates/inserts/deletes (something else to update) and speed up queries that use the indexed field in their where clause. If the field is rarely or never used in a where clause then the index is not used and is just wasting resources.
- Field tends to be used in where clauses in combination with other bitmap-indexable fields. Oracle can do bitwise operations (AND, OR &c) on the bitmaps, and bitwise operations tend to be very fast (they're implemented directly as single processor instructions). Suppose your people table in the example above also has fields for marital status, employment status and ethnicity. If you run queries with where clauses such as "gender='F' and marital_status='Married' and ethnicity='White British' and employment_status='Full Time Permanent'" (useful for Equal Opportunities monitoring) then Oracle can just load the appropriate bitmaps and do a bitwise AND to quickly identify all the records that satisfy that clause (a sketch of this follows below).
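Carrying on with the made-up people table from the sketch above, that last point would look something like this, with a bitmap index on each of the low-cardinality columns and one query ANDing them all together:

CREATE BITMAP INDEX people_marital_bix    ON people (marital_status);
CREATE BITMAP INDEX people_ethnicity_bix  ON people (ethnicity);
CREATE BITMAP INDEX people_emp_status_bix ON people (employment_status);

-- Oracle can answer this by ANDing the four bitmaps together rather than
-- visiting the table rows at all
SELECT COUNT(*)
FROM   people
WHERE  gender            = 'F'
AND    marital_status    = 'Married'
AND    ethnicity         = 'White British'
AND    employment_status = 'Full Time Permanent';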
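And going back to the compression question from the first criterion: a rough way of testing it (the object names are made up, and I'm assuming the CONNECT BY LEVEL row-generation trick) would be to build one table with an even spread of 100 values and one heavily skewed towards a single value, index both, and compare how much space the two bitmap indexes take up:

-- 100,000 rows, values 0-99 spread evenly
CREATE TABLE bmtest_uniform AS
SELECT MOD(LEVEL, 100) AS val
FROM   dual
CONNECT BY LEVEL <= 100000;

-- 100,000 rows, 100 distinct values, but almost every row has the value 1
CREATE TABLE bmtest_skewed AS
SELECT CASE WHEN MOD(LEVEL, 1000) = 0 THEN TRUNC(LEVEL / 1000) ELSE 1 END AS val
FROM   dual
CONNECT BY LEVEL <= 100000;

CREATE BITMAP INDEX bmtest_uniform_bix ON bmtest_uniform (val);
CREATE BITMAP INDEX bmtest_skewed_bix  ON bmtest_skewed (val);

-- Compare the space each index segment takes up
SELECT segment_name, blocks, bytes
FROM   user_segments
WHERE  segment_name IN ('BMTEST_UNIFORM_BIX', 'BMTEST_SKEWED_BIX');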
One surprise during the day was that I ran into John Ryland, a DBA from Capita currently seconded to Service Birmingham (where I work); I hadn't known he was going to be there. Service Birmingham is a joint venture company between Birmingham City Council and the Capita Group, formed in April 2006, which provides ICT and Business Transformation services to Birmingham City Council. The majority of the staff are secondees from Birmingham City Council (mostly from the Business Solutions and IT department), with some secondees from Capita and TUPE transferees from the council and other bodies. John hadn't been aware that Service Birmingham had a membership, or that Birmingham City Council had had one; apparently when Capita asked the UKOUG (as part of their due diligence checking) the UKOUG denied that we had a membership.
Of the presentations that I attended today the one that probably grabbed my attention and sparked my interest the most was "Accountability for System Performance (introducing Six Sigma quality in Oracle software performance)" from Cary Millsap. What I picked up from it was that everything should be instrumented, in every layer, and every transaction (that is, from when the user clicks a button or link to when they get the result, not just a database transaction) should measure and record how long it took. When I get back to work next week I think I might propose that we put into our standards for applications (the ones we procure externally (most of them) and the ones we develop internally (a few)) that the application must be instrumented to record time based stats at every level. Of course the 'Holy Grail' would be to tag every transaction with a unique ID that would be resolvable at every level (a lot of our apps are browser-apps server-database-SAN/NAS or client-apps server-database-SAN/NAS) and have each level record start/end times and as many other time based statistics as possible (if the application does three things before it contacts the database and 2 things with the results it gets back then I want to know how long each of the 5 things took and what it did at each step).

Over the years the vast majority of the performance problems I've seen have been the result of slow steps outside the database (e.g. a user sees a transaction taking a minute but only half a second of that is in the database; the slow step is in the client, the apps server or simply network delay, and is invisible to the database instrumentation. You can pick up some storage related information from the database instrumentation but having instrumentation at that level would be much better). If those applications had been properly instrumented then solving the problems would have been a lot quicker (if only because it would have saved a lot of time arguing over where the problem was: saying it's not in the database is one thing, but being able to say where it is, with hard evidence to back it up, is much better). After the session I spoke with Cary about tacking instrumentation on to uninstrumented apps and he suggested a few UNIX tools that would be useful. I'm going to be in Tom Kyte's session on "Instrumentation 101" tomorrow, maybe I'll pick up more then.
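For the database tier at least, the sort of tagging I have in mind might look something like the sketch below, using Oracle's DBMS_APPLICATION_INFO package (the module and action names here are made up). It doesn't give you the cross-tier unique ID, or any timings from the client or apps server; those layers would need their own instrumentation, which I suspect is where Cary's UNIX tools and tomorrow's session come in:

-- The application tags each step of the work it is about to do...
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'EO_MONITORING',            -- hypothetical application name
    action_name => 'monthly headcount query'); -- hypothetical step name
  -- ... run the report query here ...
  DBMS_APPLICATION_INFO.SET_ACTION('format results');
END;
/

-- ...and the tags show up against the session and its SQL in the performance
-- views, so database time can be tied back to a specific application step
SELECT sid, module, action
FROM   v$session
WHERE  module = 'EO_MONITORING';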
I had intended to talk more about the other sessions but it's after midnight and I have to be up in a little over 5 hours so they'll just have to wait.