If your Control Panel is slow or your server is under high CPU/memory load,
the following steps can help improve its performance.
H-Sphere Java-Related Issues
1. Tomcat Optimization
Customize Tomcat environment variables.
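For example, a minimal sketch of such a customization, assuming the Tomcat startup script reads the JAVA_OPTS variable (the variable name and suitable heap sizes depend on your H-Sphere/Tomcat setup):
# Hypothetical example: give the Tomcat JVM a larger, fixed-size heap
# so the Control Panel spends less time in garbage collection.
JAVA_OPTS="-Xms256m -Xmx512m"
export JAVA_OPTS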
2. NFU Cache Optimization
NFU cache parameters have to be set depending on your server memory size and the number
of accounts and domains in your system. If many new accounts/domains
are added to H-Sphere, we recommend reconfiguring the NFU cache as follows:
1. Stop the Control Panel.
2. Set NFU parameters in hsphere.properties.
Check hsphere.log for NFU messages:
grep NFU /var/log/hsphere/hsphere.log
The output contains NFU status lines; pay attention to the "size" and "rate" values in them.
If the "initial size" is close to the "max size" and the rate is lower than 0.75,
it is appropriate to increase the NFU cache size.
To do this, add the following two parameters to hsphere.properties (see the sketch after these steps):
NFU_CACHE_MULTIPLIER = 5
NFU_CACHE_MULTIPLIER_MAX = 10
In this example, the cache size would increase five times and, if necessary (e.g., for accounting),
could grow up to ten times.
3. Start the Control Panel.
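A minimal sketch of the edit from step 2, assuming hsphere.properties is in the current directory (its actual location depends on your installation):
# Append the NFU cache multipliers to hsphere.properties
# (run between stopping and starting the Control Panel, as above):
cat >> hsphere.properties <<'EOF'
NFU_CACHE_MULTIPLIER = 5
NFU_CACHE_MULTIPLIER_MAX = 10
EOF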
H-Sphere System Database Optimization
1. Converting Bigint to Int4
Skip this procedure if you have already performed it.
Postgres migration from int8 to int4 is very effective
if you host more than 500 accounts.
(By default, Postgres can't index fields of the int8 type.)
The procedure needs to be performed only once and can be done at any time.
For this procedure, find a partition with a sufficient amount of free space.
1. Stop the Control Panel (check in hsphere.log that no cron jobs are running).
2. Export schema:
pg_dump -u -s -f db_old.db hsphere
chmod 600 db_old.db
cp db_old.db db.db
Note: the dump file is created with 644 permissions by default;
set the more secure 600 permissions
to prevent other users from reading the data.
3. Convert int8 to int4:
vi db.db
In the vi editor, change every instance of bigint and int8 to int4 by typing the following commands:
:%s/bigint/int4/g
:%s/int8/int4/g
4. Then, still editing db.db in vi,
change the type back to int8 for the ip_num column in the l_server_ips table and its index.
a) find the ip_num definition in the CREATE TABLE "l_server_ips" ( ... ); command:
ip_num int4 NOT NULL
- and change int4 to int8;
b) find the index creation command:
CREATE INDEX "l_server_ips_numkey" on "l_server_ips" using btree ( "ip_num" "int4_ops" );
- and change int4_ops to int8_ops.
5. Export data:
pg_dump -u -a -f data.db hsphere
chmod 600 data.db
Note: as above, change the dump file's default 644 permissions to the more secure 600.
6. Recreate DB:
dropdb -U wwwuser hsphere
createdb -U wwwuser hsphere
7. Recreate the schema:
psql -q -U wwwuser -f db.db hsphere
8. Import the data:
psql -q -U wwwuser -f data.db hsphere
9. Start the Control Panel.
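As an optional sanity check (a sketch using the same wwwuser role and hsphere database as in the steps above), you can confirm the conversion after the reload:
# Inspect the l_server_ips table definition with a psql meta-command:
psql -U wwwuser -d hsphere -c '\d l_server_ips'
# ip_num should still be of type bigint (int8), while formerly-bigint
# columns in other tables should now be integer (int4).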
2. Updating Moddb (for H-Sphere 2.3.x starting from 2.3 RC4)
Moddb is one of the scripts included in the H-Sphere update. However, it is not run automatically
when the update is installed. You should launch it manually, and only once. To do this:
Note: Prior to running moddb, update your H-Sphere to the latest version.
1. Stop the Control Panel.
2. Make moddb:
- Run the update script. For example, for the H-Sphere 2.3.2 Patch 5 update script:
# sh ./U23.2P5
- Choose the moddb option.
This option backs up the old H-Sphere database and modifies the H-Sphere DB schema
(increases the length of some fields, e.g., email, notes, suspend/resume reason, etc.).
Note: You may be prompted for your H-Sphere DB password under Postgres
versions starting from 7.2.x. Enter the password to complete the procedure.
3. Start the Control Panel.
3. Performing VACUUM
VACUUM should be performed regularly (e.g., once a week).
You may put the corresponding script into cron (see the sketch at the end of this section).
Mind, however, that this procedure requires a lot of system resources and creates a high server load.
We recommend backing up the database before performing vacuumdb. Be careful: if the server goes down
during this process, some data may be lost!
To back up your system database, run the hs_bck script:
/hsphere/shared/scripts/cron/hs_bck
or
cd /hsphere/shared/backup
./hs_bck hs_bck.cfg
Do the following procedure to apply VACUUM to your system:
- Log into the server as root and switch to the postgres user:
su - postgres (or su - pgsql for FreeBSD)
- Connect to the database:
psql -U wwwuser -d hsphere
- Do vacuum:
hsphere=# vacuum full;
(or vacuum analyze;, or vacuum;,
depending on the PostgreSQL server version)
Note: vacuum is a time-consuming procedure; it may take up to several hours to complete!
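If you decide to schedule VACUUM via cron, a minimal sketch could look like this (the file name and schedule are assumptions; adjust them to your setup):
# Hypothetical /etc/cron.d/hsphere-vacuum entry: full vacuum with analyze
# of the hsphere database every Sunday at 04:00, run as the postgres user.
0 4 * * 0 postgres vacuumdb -U wwwuser -f -z hsphere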
4. Optimizing Postgres
Configuring Postgres Parameters
You can enhance CP performance by optimizing some Postgres parameters in
the postgresql.conf file. Default values of these parameters
are intended for less powerful workstations, and therefore
these values should be significantly increased for better performance on servers with
multiple CPUs, large RAM, and with large and intensively used databases.
Consider reconfiguration of the following parameters (please refer to PostgreSQL documentation for details):
- shared_buffers - size of shared buffers for the use of Postgres server processes.
It is measured in disk pages, which are normally 8kB.
Default value is 64, i.e., 512 kB RAM. We recommend increasing this parameter:
- for middle-size database and 256-512 MB available RAM: to 16-32 MB (2048-4096)
- for large database and 1-4 GB available RAM: to 64-256 MB (8192-32768)
- sort_mem - size of RAM allocated for sorting query results. Measure unit is 1kB.
Default value is 1024. We recommend setting this parameter to 2-4% of available RAM.
- wal_buffers - size of the transaction log buffer. Measure unit is 8kB. Default value is 8.
It can be increased to 256-512 for better processing of complex transactions.
- max_connections - the maximum number of connections to a database at a time.
Default value is 32. We recommend increasing it to at least 64,
due to innovations in H-Sphere 2.4 and up.
- checkpoint_segments - maximum distance between automatic WAL (Write-Ahead Log) checkpoints.
Measured in log file segments (each segment is normally 16 megabytes). Default value is 3.
We recommend increasing this parameter if data is being actively accessed and modified.
- checkpoint_timeout - maximum time between automatic WAL checkpoints, in seconds. Default value is 300.
We recommend increasing this parameter at least 10 times.
- effective_cache_size - sets the optimizer's assumption about the effective size of the disk cache.
Measure unit is 8kB. Default value is 1000. If you have enough memory, we recommend setting this
parameter to 25-50% of available RAM.
WARNING:
For FreeBSD, kernel recompilation is required before changing memory usage parameters in postgresql.conf!
Read Managing Kernel Resources in the PostgreSQL documentation.
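For reference, a hedged sketch of such FreeBSD kernel configuration options (the values are illustrative; see Managing Kernel Resources for how to size them):
# Illustrative FreeBSD kernel options for SysV shared memory
# (a kernel rebuild is required after changing them):
options SYSVSHM
options SHMMAXPGS=65536   # max shared memory in 4 kB pages (256 MB here)
options SHMSEG=256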
To reconfigure Postgres parameters:
Stop Postgres.
Modify the postgresql.conf file:
su - postgres
cd data
vi postgresql.conf
sort_mem = 131072
shared_buffers = 262144
max_connections = 64
wal_buffers = 1000
checkpoint_segments = 9
checkpoint_timeout = 3600
effective_cache_size = 100000
Start Postgres and make sure it's working properly. If parameters are incorrect,
Postgres might not start. In this case, also set the SHMALL and SHMMAX
kernel parameters according to the rules described in the Red Hat documentation
(see the sketch below), then start Postgres again.
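On Linux, a minimal sketch of raising these limits with sysctl (the values are illustrative; SHMMAX should cover at least your shared_buffers setting):
# Allow a single shared memory segment of up to 2 GB:
sysctl -w kernel.shmmax=2147483648
# Total shared memory, in 4 kB pages: 524288 pages = 2 GB.
sysctl -w kernel.shmall=524288
# Add the same settings to /etc/sysctl.conf to persist them across reboots.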
Moving Transaction Logs to a Separate Hard Drive
If the system database is large (more than 1 GB), we recommend allocating a separate hard drive for
its transaction logs. This is especially helpful for database recovery.
To move transaction logs to another hard drive:
Stop Postgres.
Mount a new hard drive.
Move the data/pg_xlog directory from the PostgreSQL home directory to the new disk.
Create the data/pg_xlog symlink to the new location in place of the moved directory.
Start Postgres.
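A minimal sketch of these steps, assuming the PostgreSQL home directory is /var/lib/pgsql and the new drive is mounted at /mnt/pglog (both paths are assumptions):
# With Postgres stopped, relocate the transaction logs:
mv /var/lib/pgsql/data/pg_xlog /mnt/pglog/pg_xlog
ln -s /mnt/pglog/pg_xlog /var/lib/pgsql/data/pg_xlog
# The postgres user must own the moved directory:
chown -R postgres:postgres /mnt/pglog/pg_xlog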
5. Upgrading Postgres to the Latest Version
See Upgrading System Database.
Troubleshooting
Sometimes while importing data you may get a message like this:
psql:data.db:527111: ERROR: copy: line 422025, Bad float8 input format -- underflow
psql:data.db:527111: PQendcopy: resetting connection
This means that Postgres can't interpret data it has just exported.
You need to open the data.db file:
vi data.db
and remove the line whose number, in the example above, is N = 527111 + 422025.
This line would contain a float8 number like 1.2e-318.
After removing that line, you need to recreate and reload the database.
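For example, a non-interactive way to drop that line, using the numbers from the sample error above (GNU sed assumed):
# N = 527111 + 422025 = 949136; delete that line from the dump in place:
sed -i '949136d' data.db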