By Ryan Huber
A continuation of the blog post Database 101-2: Why should I care about indexes and statistics?
In our last post we discussed index fragmentation, which occurs because your business and your database aren't static. When data is inserted into, deleted from, or updated in your SQL tables, the indexes are updated to reflect those changes.
As indexes are modified, the information in them becomes fragmented, or scattered. This means you'll want to schedule regular index maintenance to rebuild or reorganize indexes and update statistics, so that queries keep running efficiently.
Thank you, Ola Hallengren!
The good news is that index maintenance is a very easy task in SQL Server, because a nice guy over in Sweden wrote a complete solution and published it on the web, completely free.
His name is Ola Hallengren, and you can find his solution on his site or on GitHub. I've been a SQL Server DBA for ~15 years, and this is, by far, the best solution I've seen for backups and maintenance.
Ola’s solution has 3 main components:
Backups: This component allows you to generate BACKUP DATABASE commands with nearly every option you could ever need. All of the options are documented here: https://ola.hallengren.com/sql-server-backup.html
For any mission-critical databases, I would suggest setting them to the FULL recovery model, along with setting up scheduled jobs to execute a FULL backup once a week, DIFFERENTIAL backups nightly, and LOG backups every hour, with @Compress = 'Y'.
This setup allows you to recover databases to a point-in-time with, at most, 1 hour of data loss without being onerous in terms of disk space dedicated to backup files.
Please ensure the backup files are written somewhere other than the local server; if the server crashes, backups stored on it go down with it and you can't recover.
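The weekly FULL job described above might look like the following sketch, which uses Ola's documented DatabaseBackup procedure. The UNC path \\backupserver\sql is a placeholder; the @CleanupTime value is one reasonable choice, not a requirement.

```sql
-- Weekly FULL backup of all user databases (run from a SQL Server Agent job).
-- Assumes Ola's MaintenanceSolution.sql has been installed in master.
EXECUTE master.dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = '\\backupserver\sql',   -- placeholder: a share OFF the local server
    @BackupType = 'FULL',
    @Compress = 'Y',
    @Verify = 'Y',
    @CleanupTime = 168;                  -- delete backup files older than 168 hours (7 days)
```

The nightly and hourly jobs are the same call with @BackupType = 'DIFF' and @BackupType = 'LOG' respectively; the LOG backups require the databases to be in the FULL recovery model, as suggested above.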
Integrity checks: This component will check for data page corruption in every object in every database on your server. Checking for corruption is extremely important, as corrupt data pages can lie dormant in your database, undetected, until a query accesses them. The corruption could be minutes old, or it could be years old. Once it's found, the only way to fix it without data loss is to restore from backup. But if the corruption goes undetected long enough, it will exist in your backup files as well, which is why this step is so vital.
I would suggest setting up a scheduled job to check USER databases nightly if your business operations allow, or at minimum once a week. All of the options are documented here: https://ola.hallengren.com/sql-server-integrity-check.html
To improve the run-time of the check, I would suggest setting the @NoIndex parameter to 'Y'. This option skips checking for corruption in NONCLUSTERED indexes. While it never hurts to check them, doing so is not vital, as any corrupt NONCLUSTERED index can simply be rebuilt.
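A nightly integrity-check job along these lines could be sketched as follows, again assuming Ola's procedures are installed in master:

```sql
-- Nightly corruption check of all user databases.
-- @NoIndex = 'Y' skips nonclustered indexes to shorten the run,
-- since a corrupt nonclustered index can simply be rebuilt.
EXECUTE master.dbo.DatabaseIntegrityCheck
    @Databases = 'USER_DATABASES',
    @CheckCommands = 'CHECKDB',
    @NoIndex = 'Y';
```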
Index optimization: This component allows you to check the fragmentation level of all of your indexes and take action to keep them in order, along with updating statistics on tables and indexes. Not maintaining your indexes can be very detrimental to the performance of your database server: fragmented indexes can make even a perfectly optimized query perform badly, and out-of-date statistics can cause the optimizer to build query execution plans that aren't even close to optimal.
Like the integrity checks, I would suggest setting up a scheduled job to rebuild fragmented indexes nightly if business operations allow, or at minimum once a week. The same goes for statistics updates. If your business operations won't allow nightly index rebuilds, I would still attempt to update statistics nightly, as it's a much more lightweight operation than checking and fixing index fragmentation.
You can find all of the options for this component here: https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
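A nightly job covering both fragmentation and statistics might look like this sketch. The thresholds shown (reorganize between 5% and 30% fragmentation, rebuild above 30%) follow the defaults in Ola's documentation; @OnlyModifiedStatistics = 'Y' limits the statistics work to objects whose data has actually changed.

```sql
-- Nightly index and statistics maintenance for all user databases.
-- Low fragmentation (< 5%): do nothing; medium (5-30%): reorganize if possible;
-- high (> 30%): rebuild, preferring an online rebuild where the edition supports it.
EXECUTE master.dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```

If business hours won't allow nightly rebuilds, the lightweight statistics-only variant mentioned above is the same call with @FragmentationMedium and @FragmentationHigh set to NULL, leaving just the @UpdateStatistics work.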
Ryan Huber is the Senior Database Architect at SoftWriters, a Pittsburgh-based company that offers pharmacy management software solutions to pharmacies serving the long-term care community. Ryan is our in-house SQL guru, providing technical expertise in every facet of database operations, including architecture and code review, resolving performance issues for customers, and formulating a strategy for existing and future product features and enhancements.