Why is my database query slow?
There are various possible reasons. Here's a checklist:
1. First, check the query's plan. If you don't want to load the server just by testing plans, you can request the plan without executing the query against real data. This is done by typing SET PLANONLY in isql, or by clicking Show Plan in FlameRobin.
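An isql session for this might look like the following sketch (the table and query are illustrative; SET PLANONLY with no argument toggles the mode):

```sql
/* isql: print plans without executing the statements */
SET PLANONLY;
SELECT * FROM customers WHERE last_name = 'Smith';
/* isql prints something like: PLAN (CUSTOMERS NATURAL)
   -- NATURAL means a full table scan, i.e. no index is used */
SET PLANONLY;
```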
If you see a NATURAL plan going against a big table, you've found the problem. If a WHERE clause or JOIN touches that table, make sure you have an index defined on the fields involved. If the index exists but isn't used, perhaps you have an ascending index (the default) where you need a descending one (or vice versa). Or perhaps you just need to recompute the index statistics so that Firebird considers it usable. That can be done with the SET STATISTICS SQL statement.
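Recomputing statistics is a one-line statement; the index name below is illustrative:

```sql
/* Recompute the selectivity of one index so the optimizer
   can judge whether it is worth using */
SET STATISTICS INDEX idx_customer_lastname;
COMMIT;
```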
If you use views with unions (a view itself cannot be indexed), I highly recommend using at least Firebird 2.0, as earlier versions don't use any indexes of the underlying tables when you apply WHERE or JOIN to such a view. If you can't use Firebird 2.0, the only way to speed it up is to write a stored procedure that takes the value from the WHERE clause as an argument.
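A sketch of that workaround, assuming a union view over two tables t1 and t2 (all names here are illustrative): instead of SELECT ... FROM the_view WHERE id = ?, push the value into a procedure so each underlying SELECT can use its own index.

```sql
SET TERM ^ ;
CREATE PROCEDURE sp_view_by_id (an_id INTEGER)
RETURNS (id INTEGER, name VARCHAR(60))
AS
BEGIN
  /* Each SELECT gets the value as a parameter, so the optimizer
     can use the index on t1.id and t2.id directly */
  FOR SELECT id, name FROM t1 WHERE id = :an_id
    INTO :id, :name DO SUSPEND;
  FOR SELECT id, name FROM t2 WHERE id = :an_id
    INTO :id, :name DO SUSPEND;
END^
SET TERM ; ^
```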
2. If the plan is ok, perhaps you have bad transaction management, or a lot of deleted records, or the garbage collection kicks in at a bad moment. Take a look at the database statistics and at the gap between the oldest and the active transaction. Try increasing the sweep interval, or turn automatic sweeping off completely, and see if it helps.
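From the command line, the transaction counters and the sweep interval can be inspected and changed roughly like this (database name and credentials are illustrative):

```
# Show header statistics, including the oldest and next transaction counters
gstat -h mydb.fdb

# Set the sweep interval to 50000 transactions...
gfix -housekeeping 50000 mydb.fdb -user SYSDBA -password masterkey

# ...or disable automatic sweeping entirely (sweep manually off-hours instead)
gfix -housekeeping 0 mydb.fdb -user SYSDBA -password masterkey
```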
If you use Delphi and IBX, make sure you understand the implications of the different transaction-ending options. The IBX default for AutoCommit is CommitRetaining, which is a killer for the server. The best idea is not to use AutoCommit at all, but to do explicit commits and rollbacks from the application at the right points (depending on the nature of the application and of user actions).
3. If you are using Classic, it's a good idea to create a RAM disk to be used as shared storage for the sorting of result sets that cannot use an index. Set up firebird.conf so that the TempDirectories setting lists two directories: the RAM disk first, then the default temp directory.
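A firebird.conf fragment for this could look as follows (the paths are illustrative; entries are separated by semicolons, and Firebird falls back to the next directory when the first one fills up):

```
# firebird.conf
# RAM disk first, regular on-disk temp directory as fallback
TempDirectories = R:\fbtemp;C:\temp
```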
4. Are you using UDFs that work with character data? A function like LPAD, for example, declares a return value of 32,000 bytes. If you then use that result in ORDER BY or GROUP BY on a result set of, say, 10,000 records, the engine has to sort roughly 300 MB of data. In this case, the solution is to CAST the result to a shorter CHAR type, for example the maximum length you actually expect.
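A sketch of the CAST, assuming an illustrative orders table with a code column: without the CAST, the sort buffers have to accommodate the UDF's full declared length for every row.

```sql
/* CAST the padded value down to the 10 characters we actually expect,
   so the sort works on 10 bytes per row instead of 32,000 */
SELECT CAST(LPAD(code, 10, '0') AS CHAR(10)) AS padded_code
FROM orders
ORDER BY 1;
```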
5. If you have a selectable stored procedure that returns a lot of rows, a SELECT from it can be really slow if you add an ORDER BY. This happens even if the field used in ORDER BY has an index in the original table. The reason is that stored procedures are precompiled code, so they cannot change the way the query is executed at runtime, and the connection between the original table and the output field is lost. Except in the simplest cases, it would be impossible to use the index anyway.
Therefore, in order to sort the result set, Firebird first has to fetch all the data from the stored procedure into temporary storage and then do the sorting.
To fix this, do not use the ORDER BY clause, or move it inside the procedure. Here's an example:
create procedure p1 returns (x integer) as
begin
  for select x from zillion_row_table into :x do suspend;
end
If you run: SELECT x FROM p1; it will run fast.
But if you run: SELECT x FROM p1 ORDER BY x; it will first fetch the "zillion" rows, then sort them, and only then hand them to the client. This also means that no rows reach the client until all the data is sorted.
One possible solution is to write the procedure like this:
create procedure p1 returns (x integer) as
begin
  for select x from zillion_row_table order by x into :x do suspend;
end
6. If all else fails, perhaps you really have a large amount of data. It's time to check your configuration for optimal performance.
a) Do you use PATA, SATA, SCSI, SAS or SSD disks? SATA disks generally provide better performance than PATA disks. 15k RPM SCSI disks are generally the fastest drives available and reduce CPU overhead during I/O. SSDs are also very fast, but are not recommended for Firebird databases if you use forced writes, because the frequent writes shorten the disk's lifetime. SAS disks are a newer technology that provides a very useful middle ground: almost as fast as SSDs, but much cheaper and without the lifetime problem. Of course, the right choice depends on the size of your databases and on the read/write ratio. If the databases are not large and you mostly read from them (and write less often), an SSD could be the best option.
b) Is the database on the same disk as the rest of the operating system (especially the swap file/partition and the temp directories from firebird.conf)? Sharing a disk reduces performance. If you can attach the drive to a different controller, that can improve performance as well.
c) How about memory usage? Swapping to disk because of low RAM is a killer for database applications.
Make sure you use at least Firebird 1.5, which sorts in memory when the data fits (Firebird 1.0 always sorts on disk).
You can also try to increase the amount of RAM Firebird uses. There are settings in firebird.conf for the default and maximum RAM allocation. If you use Classic, you can also try to increase the page-buffer allocation (it defaults to a low value because Classic creates a separate cache for each connection). Run gstat -h and read the Page buffers line to see how much each connection is using, then increase DefaultDbCachePages in firebird.conf.
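A firebird.conf fragment for this might look as follows (the value is illustrative; with Classic, remember that each connection allocates its own cache, so multiply pages x page size x expected connections before committing RAM):

```
# firebird.conf
# Number of database pages cached per attachment (Classic) or per
# database (SuperServer); at a 4 KB page size, 1024 pages = 4 MB
DefaultDbCachePages = 1024
```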
d) Multi-core CPU? If you are using SuperServer, you need to set the CpuAffinityMask option in firebird.conf.
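For example, a mask binding SuperServer to a single CPU looks like this (the mask is a bit field: bit 0 = first CPU, bit 1 = second CPU, and so on):

```
# firebird.conf
# Bind SuperServer to CPU 0 only, avoiding costly thread migration
# between cores under Windows
CpuAffinityMask = 1
```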
e) If you're using .gdb as the file extension, make sure you read FAQ #353.