MP wrote on Nov 15th, 2007 at 2:39am:
Is there a performance advantage to using @IsUnique with a compound key versus using @XResultSetSearch with multiple criteria?
If you are doing what I think you are, @IsUnique should be faster. It does a single-value comparison per record and stops on the first match. The @XResultSetSearch method requires multiple comparisons per record and keeps looking until it has found all matches.
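The difference in work done can be sketched in Python (this is a conceptual model, not Sesame SBasic; the function names and record shapes are made up for illustration):

```python
def is_unique(records, field, value):
    """Like @IsUnique: one comparison per record, bail out on the
    very first match (the value already exists, so it isn't unique)."""
    for rec in records:
        if rec[field] == value:
            return False          # first hit ends the scan immediately
    return True


def result_set_search(records, **criteria):
    """Like a multi-criteria @XResultSetSearch: every record is tested
    against every criterion, and the scan always runs to the end so
    that ALL matching records end up in the result set."""
    return [rec for rec in records
            if all(rec[f] == v for f, v in criteria.items())]
```

The early exit is why the single-key check wins: on average it touches far fewer records, and it never has to build a result set at all.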
Quote:On a related note, are @XResultSetSearch calls also optimized by SERVER PREINDEX XLOOKUPS?
I think so, but I'm not where I can check. I'll confirm in the morning.
Quote:(Further aside, it would be really useful if Lantica released an "internals" white paper. Knowing how things like storage and indexing and locking work at a low level is key to truly allowing developers to milk the best performance out of a DB. As is, performance is a great mystery in Sesame, making it hard to answer questions like "How do I optimize a query?" and "When are the PREINDEX indexes rebuilt?".)
Honestly, MP, this very rarely comes up. The vast majority of our user base doesn't have any need (or desire) for that level of milking and optimization. The only time we hear this kind of question is when pro developers come at us from something like Oracle and haven't shaken off the big engine mindset. We are designed for a different environment and a different kind of user than Oracle and SQL Server and such.
The answers to these questions are usually pretty straightforward, and short enough that white papers would be overkill.
I'll confirm this in the morning, but I believe the PREINDEX index for a field is built the first time per session that the server uses that field value as the key value in an X command. Once an index exists, it is updated when indexed values are changed/added/deleted. So, the first X command per session using a particular key field will be slower than all subsequent X commands using that key field. PREINDEX speed increases only apply to absolute key values like "APPLE". Keys including a search spec, like "..APPLE..", do not benefit. Unloading a database also unloads the indexes for that database.
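That lazy, per-session behavior can be modeled in a few lines of Python (again, a conceptual sketch of the behavior described above, not Sesame's actual internals; the class and method names are invented):

```python
class PreIndexedTable:
    """Model of the PREINDEX behavior: an index for a key field is built
    on the first keyed lookup of the session, then kept current as
    records change. Only absolute key values use the index; a search
    spec like "..APPLE.." falls back to a full scan."""

    def __init__(self, records):
        self.records = records        # list of dicts, one per record
        self.indexes = {}             # field -> {value: [record, ...]}

    def _index_for(self, field):
        if field not in self.indexes:
            # First X command this session using this key field:
            # pay the one-time cost of building the index (the slow call).
            idx = {}
            for rec in self.records:
                idx.setdefault(rec[field], []).append(rec)
            self.indexes[field] = idx
        return self.indexes[field]    # every later lookup is a fast dict hit

    def xlookup(self, field, value):
        """Absolute key value like "APPLE": served straight from the index."""
        return self._index_for(field).get(value, [])

    def search_spec(self, field, substring):
        """Spec like "..APPLE..": no index benefit, every record is scanned."""
        return [r for r in self.records if substring in str(r[field])]

    def add(self, rec):
        self.records.append(rec)
        # Any index that already exists is updated as values are added.
        for field, idx in self.indexes.items():
            idx.setdefault(rec[field], []).append(rec)
```

Unloading the database corresponds to throwing the whole object away: the indexes go with it, and the next session pays the build cost again.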
Optimizing queries involves some very basic rules, some of which are common sense to someone with your experience:
1. Absolute searches are faster than searches with wildcards or ranges.
2. Programmed retrieve specs are slower than non-programmed. Wherever possible, pre-resolve constant values like @Date in a retrieve spec by using the pipe (|) syntax to gain significant speed over the {} equivalent.
3. For subrecord searches, Natural linking is faster than Relational linking.
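Rule 2 boils down to evaluating a constant expression once instead of once per record. A rough Python analogy (the data and names here are hypothetical; in Sesame the pipe syntax does this pre-resolution for you before the retrieve runs):

```python
import datetime

# A batch of records with a date field, standing in for a database.
records = [{"Due": datetime.date(2007, 11, d)} for d in range(1, 29)]

# {} style: the expression is live in the spec, so the "program"
# re-evaluates datetime.date.today() for every record it tests.
slow = [r for r in records if r["Due"] <= datetime.date.today()]

# | style: resolve the constant once up front, then the retrieve
# compares every record against a plain literal value.
today = datetime.date.today()        # pre-resolved, like |@Date in a spec
fast = [r for r in records if r["Due"] <= today]
```

Both produce the same result set; the second just avoids re-running the expression per record, which is where the speed difference comes from on large databases.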
You're not dealing with JOINs or HAVING clauses or nested SELECTS or any of that kind of stuff. To put this in SQL parlance (although we are not SQL-based), you have the equivalent of SELECT, WHERE and ORDER BY with no nesting. Subrecords add a level to that under-the-hood, but you get my drift. It's a very straightforward query environment.
We maintain a constant presence among our users, so we are constantly adding tools and making adjustments to optimize Sesame to the way it is actually being used in the field. If you have questions about specific things, the developers and Support people are here and willing to help, or you can contact Support directly. Also, there is a section in the back of the User Guide that discusses some of our internal structures and may provide some of the information you are after.
Edited: MP: I checked on it this morning and the statements I made above regarding preindexing are accurate.