NAME

Bric::DBA - Bricolage Database Administration Guide.

VERSION

$LastChangedRevision$

DATE

$LastChangedDate: 2005-08-23 13:52:27 -0700 (Tue, 23 Aug 2005) $

DESCRIPTION

This guide is intended for the database administrator in charge of maintaining the Bricolage PostgreSQL database.

SECURITY

By default, PostgreSQL has pretty lax authentication settings, at least locally. Client authentication is controlled by the file pg_hba.conf in the $PGDATA directory, e.g., /usr/local/pgsql/data/pg_hba.conf. The default settings allow completely trusted connections to the server by any local user using either Unix domain sockets or TCP/IP connections (although by default, PostgreSQL does not bind to a TCP socket). These settings are as follows:

 local  all                                       trust
 host   all         127.0.0.1    255.255.255.255  trust

These settings allow anyone connecting via local Unix domain sockets to connect to all databases in a completely trusted manner. They won't even be prompted for a password! Similarly, hosts connecting from the local IP address, 127.0.0.1 with mask 255.255.255.255, are completely trusted to connect to all databases.

While these settings are fine for limited local use (provided you trust all users on the local system!), they are not good for a production environment. We recommend that clients be required to provide a password and preferably that it be encrypted. If the Bricolage web server runs on the local box and no other box will need to connect, we recommend the following settings:

 local  all                                        md5
 # host  all         127.0.0.1    255.255.255.255  md5

These are similar to the defaults, except that they require a password, which is sent over the wire encrypted using a simple challenge-response protocol. We have commented out the host entry because Bricolage is running locally and can therefore rely solely on Unix domain sockets. (In this case, you may also wish to remove the -i switch from the PostgreSQL startup script.) Enable it if you find a need to connect locally via TCP/IP.
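
Note that the postmaster re-reads pg_hba.conf only at startup and when it receives a SIGHUP signal, so remember to signal it after changing the file. A minimal sketch, assuming the default data directory (recent versions of pg_ctl support a reload mode; otherwise, send SIGHUP by hand):

  # Tell the postmaster to re-read pg_hba.conf
  pg_ctl reload -D /usr/local/pgsql/data
  # Or send SIGHUP directly; the first line of postmaster.pid is the PID
  kill -HUP `head -1 /usr/local/pgsql/data/postmaster.pid`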

If you're running PostgreSQL on its own box, and Bricolage (or any subset of the Bricolage application, such as the distribution server) runs on another box, you will need to enable secure connections to that box. To limit the connections to come from that single box, use this configuration:

 local  all                                         md5
 host   all           127.0.0.1    255.255.255.255  md5
 host   bric_dbname   192.168.1.2  255.255.255.255  md5

Here the IP address 192.168.1.2 is the address of the box running Bricolage, and 255.255.255.255 is its mask. If you feel comfortable allowing broader access to the server (say, a number of database applications running on multiple servers in the same subnet need to connect), try the following configuration:

 local  all                                         md5
 host   all           127.0.0.1    255.255.255.255  md5
 host   bric_dbname   192.168.1.0  255.255.255.0    md5

Here the 192.168.1.0 address, combined with the 255.255.255.0 mask, represents the 192.168.1 subnet on which the multiple clients live.

Many other combinations of authentication settings are of course possible via the flexible pg_hba.conf configuration syntax, but the above are our recommended settings for ensuring the security and integrity of your data. Refer to the PostgreSQL documentation at <http://www.postgresql.org/> for more details, including other levels of encryption.

BACKUP AND RESTORE

PostgreSQL supports backups and restores via its pg_dump, pg_dumpall, and psql utilities. File system backups are also a good idea. We recommend that you use the pg_dumpall utility on a regular basis to back up the Bricolage database:

  pg_dumpall > outfile
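
For regular backups, a nightly cron job works well. Here's a sketch of a crontab entry; the installation path, backup directory, and schedule are all assumptions to adjust for your site (the backslashes before each % are required by cron):

  # Dump the entire cluster at 2:30 am every night
  30 2 * * * /usr/local/pgsql/bin/pg_dumpall -U postgres > /var/backups/bric-`date +\%Y\%m\%d`.sql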

pg_dumpall outputs a series of SQL statements that can then be reloaded into the database using the psql utility:

  psql < infile

We recommend the use of pg_dumpall in order to preserve the entire database system. Read the pg_dumpall man page for more details. However, if you use your PostgreSQL server for other databases, you may wish to place them on separate backup schedules. In that case, use pg_dump to back up each of the databases, including the Bricolage database:

  pg_dump bric_dbname > outfile

The restoration is the same as for the pg_dumpall file, except that you must be sure to create the database and users first:

  psql bric_dbname < infile
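
As a sketch of that preparation, assuming a hypothetical bric_user role that owns the Bricolage database:

  # Re-create the user and database, then restore (names are placeholders)
  createuser -U postgres -A -D bric_user
  createdb -U postgres -O bric_user bric_dbname
  psql -U postgres bric_dbname < infile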

Read the pg_dump man page for more details. Note that neither pg_dump nor pg_dumpall prevents database access by other processes (e.g., Bricolage), but they will affect performance. It's a good idea to dump the file to a different partition, disk, or even server (since pg_dump and pg_dumpall can be used on any server with access to the PostgreSQL server). Changes made to the database during the backup will not be backed up.

In either case, if your Bricolage database is getting big, you may wish to compress the backup file, or break it into smaller chunks. This can be done using *nix system utilities. For example, to compress a backup file, use gzip:

  pg_dumpall | gzip > filename.gz
  gunzip -c filename.gz | psql

You can also use split to break the backup file into smaller chunks. This example breaks the file up into 1 MB chunks:

  pg_dumpall | split -b 1m - filename
  cat filename.* | psql

New to PostgreSQL is the custom dump format. You can use it to compress dump files on the fly. Consult the pg_dump and pg_dumpall man pages for more information.
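
For instance, a minimal sketch using the custom format (the .dump file name is arbitrary):

  # Dump in the compressed custom format...
  pg_dump -Fc bric_dbname > outfile.dump
  # ...and restore with pg_restore instead of psql
  pg_restore -d bric_dbname outfile.dump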

File system backups are another option, and may in fact be useful as a secondary backup methodology. Use whatever file system backup methodology you prefer. Here is an example using tar:

  tar -cf backup.tar /usr/local/pgsql/data

There are two restrictions to file system backups, however. First, the PostgreSQL server must be shut down during the backup or during restoration. Second, you can't do selective backups of databases or tables, only the whole database cluster.
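
Putting those restrictions together, a file system backup run might be wrapped like this (the data directory path is an assumption):

  # Stop the postmaster, archive the data directory, restart
  pg_ctl stop -D /usr/local/pgsql/data
  tar -cf backup.tar /usr/local/pgsql/data
  pg_ctl start -D /usr/local/pgsql/data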

For more detail on PostgreSQL backup and restore, please consult the PostgreSQL documentation at <http://www.postgresql.org/>.

WRITE-AHEAD LOGGING (WAL)

Write Ahead Logging (WAL) is a standard approach to transaction logging. It is automatically enabled in the PostgreSQL server. WAL ensures that the log is written before database records are altered, thereby minimizing the number of disk writes; in the event of a crash, the database can be recovered using the log.

Please consult the PostgreSQL documentation for more information on configuring WAL and recovering databases with WAL.

VACUUM

VACUUM is a PostgreSQL SQL command that cleans and analyzes a PostgreSQL database. Its purpose is to reclaim storage and to collect information for the optimizer. PostgreSQL uses a non-overwriting storage manager, which means that DELETE commands don't actually remove data, they just mark it invalid; similarly, UPDATE commands and rolled-back transactions will create more old versions of rows. These should be cleaned up on a periodic basis by running VACUUM. Running ANALYZE (known prior to PostgreSQL 7.3 as VACUUM ANALYZE) also collects statistics representing the dispersion of the data in each column. The statistics maintained include the number of tuples and the number of pages stored in each table. This information is valuable when several query execution paths are possible. Running ANALYZE periodically will increase the speed of the database in processing user queries.

We recommend that you VACUUM and ANALYZE your Bricolage database regularly, in order to remove expired rows and keep the database running efficiently.
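
For example, a one-off run from the command line might look like this (bric_dbname is a placeholder for your database name):

  psql -U postgres -c 'VACUUM ANALYZE;' bric_dbname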

There are three ways to VACUUM and ANALYZE a database. The first is through an interactive shell such as psql. Another approach is to schedule a nightly cron job to do the task using the vacuumdb utility. Here's an example:

  PGPASSWORD=password /usr/local/pgsql/bin/vacuumdb -U postgres -q -a
  PGPASSWORD=password /usr/local/pgsql/bin/vacuumdb -U postgres -q -a -z

These commands vacuum all of the databases on your PostgreSQL server. If the vacuum is successful, vacuumdb will output "VACUUM". In the event of a failure, it will output "vacuumdb: Vacuum failed."
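
To schedule these nightly, a crontab entry along these lines should do (the time and path are assumptions):

  # Vacuum and analyze all databases at 3:00 am
  0 3 * * * PGPASSWORD=password /usr/local/pgsql/bin/vacuumdb -U postgres -q -a -z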

However, the best way to keep your database well vacuumed is to use the pg_autovacuum utility that comes with PostgreSQL 7.4 and later (and which can be used with PostgreSQL 7.3, as well). To quote the pg_autovacuum README document:

"CWpg_autovacuum is a libpq client program that monitors all the databases associated with a PostgreSQL server. It uses the statistics collector to monitor insert, update and delete activity."

The advantages of pg_autovacuum are also detailed in the README:

"The primary benefit of pg_autovacuum is that the FSM and table statistic information are updated more nearly as frequently as needed. When a table is actively changing, pg_autovacuum will perform the VACUUMs and ANALYZEs that such a table needs, whereas if a table remains static, no cycles will be wasted performing this unnecessarily."

This has the effect of keeping your database highly optimized at all times. Read the rest of the pg_autovacuum README for installation and configuration instructions. When you start it up, we recommend using these options: -D -v 300 -V 0.5 for optimal vacuum maintenance.
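
That is, a startup invocation along these lines (a sketch; see the README for connection and logging options):

  # Daemonize; vacuum a table once updates and deletes exceed
  # 300 + 0.5 * (number of tuples in the table)
  pg_autovacuum -D -v 300 -V 0.5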

TUNING

PostgreSQL requires a certain amount of tuning to operate at optimum efficiency. The default settings for a new PostgreSQL installation are very conservative, and are intended to make sure that the postmaster will be able to start up so that you can get started setting up your databases. This is not, however, a very useful setup for a production database environment, since it severely limits the amount of shared memory that the postmaster can allocate for itself.

The specific tuning recommendations below are for PostgreSQL versions 7.3 and 7.4. Tuning will be slightly different for version 7.2, and very different for version 8.0 when it comes out. Consult the necessary documentation.

To tune PostgreSQL, first determine the amount of memory you want to give to your running PostgreSQL server. In a production environment where PostgreSQL is running on a dedicated machine, this will be the entire memory minus the amount needed to run kernel services. In other situations it will pay to observe the machine while running Bricolage and whatever other services are necessary, and make a guesstimate of the amount of memory you are willing to give to the database server. The amount of memory you decide on may be considered your available RAM.

Shared Buffers

After you have decided on the amount of available RAM to allocate, there are several settings to make. First is the shared memory setting, which in PostgreSQL can be set via the shared_buffers setting in the postgresql.conf file in your PostgreSQL data directory. This setting controls the number of shared buffers that the database server can use. Each buffer is 8 KB. (Usually. Some OSs have different shared buffer settings compiled into their kernels. Consult your server admin for details.)

It's important to remember that this is only a holding area, and not the total memory available to the server. As such, resist the urge to set this number to a large portion of your RAM, as doing so will actually degrade performance on many OSs. Members of the pgsql-performance mailing list have found useful values in the range of 6% to 20% of available RAM, depending on database size and number of concurrent queries.

Since Bricolage is a large database and we recommend the use of a server with lots of RAM, you will likely want to set shared_buffers to around 10%. In the following example, we'll demonstrate how to set the value to 24576 (192 MB, or roughly 10% of 2 GB RAM).
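
The arithmetic, spelled out:

  10% of 2 GB RAM = ~205 MB, rounded down to 192 MB
  192 MB in 8 KB buffers = 192 * 1024 / 8 = 24576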

First you may need to set your kernel to allow for the shared memory allocation. On a Red Hat Linux system, shared memory is controlled through settings in /etc/sysctl.conf. So, for our example, where we've decided to give PostgreSQL up to 192 MB of shared memory, here's how you would make the kernel setting (note that the kernel setting is in bytes, so our setting is 192 * 1024 * 1024 = 201326592):

  kernel.shmmax = 201326592
  kernel.shmall = 201326592

These settings can then be enabled by running /sbin/sysctl -p. See <http://www.postgresql.org/docs/current/interactive/kernel-resources.html> for more information on shared buffers and changing your operating system's shared buffer settings. Now you can update your postgresql.conf file's shared memory buffers setting to take advantage of this shared memory:

  shared_buffers = 24576

Work Memory

Next, it's probably also a good idea to boost your work memory (or sort memory, as it was known prior to PostgreSQL 8.0) to get better response times on the very large queries used to look up Bricolage objects. How much work memory is needed depends on the number of concurrent queries. Never risk going over total memory, or else PostgreSQL will go into swap and things will get really slow. Bricolage generally uses no more than three or four concurrent queries, so 8 MB should be okay. Unlike shared_buffers, the work_mem (or sort_mem) setting is specified in KB, so setting it to 8 MB would be:

  work_mem = 8192

The difficult issue with work_mem is that more is needed for bigger queries, but work_mem is not shared, being allocated on a per-task basis. You do not, ever, want to use more cumulative work_mem than your computer actually has, because it will go into swap and slow PostgreSQL down to glacial speed.

Bricolage uses some complex queries that require 3 or even 4 sorts per query. This means that you have to be prepared to have 3 * work_mem available for each concurrent query. Overall, this means (with fudge factors) that the maximum work_mem it is completely safe to set is: Available RAM / (max_connections * 3). But more than 12 MB of work_mem should not be needed except on the largest Bricolage installations (5 GB+ databases).
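
For example, under the assumptions of 2 GB of available RAM and max_connections = 50:

  Available RAM / (max_connections * 3)
    = 2097152 KB / 150
    = ~13981 KB

So a work_mem of 12288 (12 MB) sits comfortably under that ceiling.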

You must remember that if your Bricolage usage forces you to increase max_connections, then you will have to think about lowering work_mem...or buying more RAM.

Maximum Connections

Since Bricolage is a web application with persistent database connections, you can likely decrease the value of the max_connections setting in postgresql.conf to the maximum number of concurrent connections you'd expect, plus a couple extra for good measure. In a very busy Bricolage environment with hundreds of users, that number is probably around 50. For less busy installations, the number can be lower, but in any event it probably should not be less than 8 (since you might have that many Apache processes running, even on a single-user system).

If you're not sure what number to assign to the max_connections directive, watch your Bricolage server to see how many connections it holds to the database to determine the optimum setting. Since there is an inverse relation between max_connections and work_mem, lowering this number can also help you to find a middle ground between a healthy value for work_mem and needing to reconfigure the kernel's SHMMAX (see Shared Buffers).

  max_connections = 50

Effective Cache Size

The effective cache size is the amount of memory available to PostgreSQL for caching your database. This setting should be 2/3 to 3/4 of the amount of available RAM, and is typically set in 8 KB blocks. So for a system with 2 GB available RAM where you want to allocate 75% for the effective cache size, the number would be calculated as 2 * 1024 * 1024 * .75 / 8 = 196608:

  effective_cache_size = 196608

Maintenance Work Memory

The maintenance_work_mem (or vacuum_mem, as it was known prior to the release of PostgreSQL 8.0) directive reserves memory for database maintenance work, such as vacuuming the database. If your PostgreSQL server has a decent amount of RAM (say 1-2 GB), increase the value of maintenance_work_mem to allow vacuums and other maintenance work to go faster. We recommend 64 MB (set in KB):

  maintenance_work_mem = 65536

Random Page Cost

For servers where the entire Bricolage database will be loaded into memory, the value of the random_page_cost directive should be significantly lowered. This directive sets the cost of fetching a single random tuple as a multiple of the cost of fetching sequential tuples. It's faster to extract a series of sequential rows per page from disk. But if most of your database will be cached in memory and you have a fast machine, then the default is too high. Lower it to 2.0. It might even be worth it to lower it to 1.5 if you notice that the database is still doing unnecessary sequential scans.

  random_page_cost = 2

Maximum Free Space Map Pages

The max_fsm_pages directive should be set relative to the amount of update activity. No one is really sure how much daily update activity there is in a typical Bricolage database, and the amount will likely vary from installation to installation. However, if you're using pg_autovacuum with a threshold of 30%, then you can set max_fsm_pages to 30% of the expected quantity of data pages.

The database is divided into 8 KB data pages. So for a 1 GB database, the number of data pages would be 1024 * 1024 / 8 = 131072. For each record updated in the database, PostgreSQL must store a dead tuple. The free space map determines how much space is held open so that dead tuples don't crowd the data pages. If you estimate 50% updates, then set max_fsm_pages to 131072 * .5 = 65536:

  max_fsm_pages = 65536

For a large database on a system with lots of memory, you can assign more. A value of 75000 should be more than enough for expected activity in a 1 GB database.

Where to Learn More

See Josh Berkus' article, Tuning PostgreSQL for performance at <http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html>. You can search the archives of the pgsql-performance list at <http://archives.postgresql.org/pgsql-performance/> and subscribe to the list at <http://webmail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&func=lists-long-full&extra=pgsql-performance>.

MORE PERFORMANCE TIPS

Note: Do this at your own risk!!!

Another common cause of poor performance in a PostgreSQL database is the existence of very very large tables. Bricolage can sometimes have this sort of problem since by design it is very conservative about eliminating old data. As a result, certain tables tend to grow quite large.

For instance, if you notice that the Bricolage database has become quite large you might decide that it makes sense to actually delete some of the old data. In particular the job table tends to grow quite large, with most of the data actually not being used for much of anything, so from time to time we do this:

  DELETE FROM member
  WHERE  class__id IN (54, 79, 80)
         AND id NOT IN (
           SELECT member__id
           FROM   job_member, job
           WHERE  job.id = job_member.object_id
                  AND (
                     executing = 1
                     OR comp_time IS NULL
                  )
           );

  DELETE FROM job
  WHERE  executing = 0
         AND (
           comp_time IS NOT NULL
           OR failed = 1
         );

This can result in significant boosts in preview and publish performance, since, for each job-related query, PostgreSQL will no longer have to load a very large index into memory, sometimes paging it several times. Of course, it also saves disk space.
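
Since deleted rows are only marked invalid until vacuumed (see VACUUM above), it's worth following these deletes with a vacuum of the affected tables to reclaim the space right away; for example:

  VACUUM ANALYZE member;
  VACUUM ANALYZE job;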

Of course, if you have some reason to generate reports on completed jobs, you won't be able to do this.

AUTHORS

David Wheeler <david@wheeler.net>

Mark Jaroski <jaroskim@who.int>

With input and suggestions from Josh Berkus.

SEE ALSO

Bric