
Informix - Problem Description

Problem IT01853 Status: Closed

INCREMENTAL ARCHIVES ON A SYSTEM WITH MANY SMART BLOB OBJECTS CAN BLOCK THE
INSTANCE FOR SEVERAL MINUTES

Product:
INFORMIX SERVER / 5725A3900 / C10 - IDS 12.10
Problem Description:
If your system contains a large number (millions) of smart blob objects
and you run a level 1 or level 2 archive, the system might be blocked
for several minutes (indicated by the 'Blocked:ARCHIVE' string in the
header of any onstat command output). The only thread allowed to run is
the ontape thread. The problem can be seen with both the ontape and
onbar backup utilities. If you use the onbar tool, you can see the
delay between the start of the archive process and the actual start of
the rootdbs archive in bar_act.log:
 
20:00:01 28180618  19398690 /usr/informix/bin/onbar_d -b -w -L 1
20:00:01 28180618  19398690 Working with veritas-netbackup as generic storage manager.
20:00:03 28180618  19398690 Archive started on rootdbs, sbspace1, sbspace2,
                            sbspace3, sbspace4, datadbs1, datadbs2, datadbs3,
                            datadbs4, logdbs1, logdbs2, logdbs3, logdbs4,
                            plogdbs (Requested Level 1).
20:04:31 28180618  19398690 Begin level 1 backup rootdbs.
20:04:31 28180618  19398690 Successfully connected to Storage Manager.
20:04:44 28180618  19398690 Completed level 1 backup rootdbs (Storage Manager copy ID: 1 1399).
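
While the instance is in this state, the block is visible in the onstat
banner line; a minimal check, assuming a configured Informix environment
(the polling loop is an illustration, not part of the APAR):

# The onstat banner reports 'Blocked:ARCHIVE' while the archive holds the block
onstat -

# Poll until the block clears (illustrative only)
while onstat - | grep -q 'Blocked:ARCHIVE'; do
    sleep 10
done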
 
At the beginning of an incremental archive the backup process has to
sequentially read all the smart blob (SBLOB) headers (stored in the
LO_hdr_partn partition) in every sbspace to identify the smart blob
objects that have changed since the last level-0 archive. If the pages
of the LO_hdr_partn partition(s) are not in the bufferpool, they have
to be read from disk. Up to version 11.70.xC2 the instance used the
read-ahead mechanism for these reads. Since the introduction of the new
AUTO_READAHEAD feature in 11.70.xC3 this is no longer the case, which
makes the performance of reading LO_hdr_partn suboptimal.
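
To see how read-ahead is configured and behaving on an affected
instance, the following can be inspected (a sketch; this assumes
11.70.xC3 or later, where 'onstat -g rah' reports read-ahead daemon
statistics, and output details vary by version):

# Show the AUTO_READAHEAD onconfig setting introduced in 11.70.xC3
onstat -c | grep AUTO_READAHEAD

# Show read-ahead daemon statistics
onstat -g rah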
Problem Summary:
**************************************************************** 
* USERS AFFECTED:                                              * 
* informix users of genpg interface (smart blobs)              * 
**************************************************************** 
* PROBLEM DESCRIPTION:                                         * 
* See Error Description                                        * 
**************************************************************** 
* RECOMMENDATION:                                              * 
* Update to IDS-12.10.xC5                                      * 
****************************************************************
Local Fix:
Ensure your bufferpool is big enough to hold all pages of all
LO_hdr_partn partitions in your system (the size of a LO_hdr_partn
partition can be found in the 'oncheck -ps <sbspacename>' output).
Before starting the L1 or L2 archive, run 'oncheck -ce' or
'onspaces -cl' for each sbspace to load the LO_hdr_partn pages into
the bufferpool, as in the sketch below. This should prevent the server
from being blocked for too long.
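
A minimal sketch of this pre-warming step, assuming the sbspace names
from the log excerpt above (sbspace1..sbspace4 are illustrative;
substitute your own list):

#!/bin/sh
# Pre-load the LO_hdr_partn pages of every sbspace into the bufferpool
# before starting an incremental archive.
for sp in sbspace1 sbspace2 sbspace3 sbspace4
do
    # Report the LO_hdr_partn size, to compare against the bufferpool size
    oncheck -ps $sp
    # Reading the smart-blob metadata pulls its pages into the bufferpool
    onspaces -cl $sp
done
# Then start the incremental archive, for example:
# onbar -b -w -L 1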
Solution
Problem Fixed In IDS-12.10.xC5
Workaround
none known / see Local Fix
Additional Data
Date - problem reported : 16.05.2014
Date - problem closed   : 16.10.2015
Date - last change      : 16.10.2015
Problem fixed as of the following versions (IBM BugInfos)
Problem fixed according to the FixList in version
12.10.xC4.W1
12.10.xC5
12.10.xC5.W1