The db2nodes.cfg file is used to define the database partition servers that participate in a DB2® instance. The db2nodes.cfg file is also used to specify the IP address or host name of a high-speed interconnect, if you want to use a high-speed interconnect for database partition server communication.
The format of the db2nodes.cfg file on Linux® and UNIX® operating systems is as follows:
dbpartitionnum hostname logicalport netname resourcesetname
dbpartitionnum, hostname, logicalport, netname, and resourcesetname are defined in the following section.
The format of the db2nodes.cfg file on Windows® operating systems is as follows:
dbpartitionnum hostname computername logicalport netname resourcesetname
On Windows operating systems, these entries are added to the db2nodes.cfg file by the db2ncrt or START DBM ADD DBPARTITIONNUM commands, and can be modified by the db2nchg command. You should not add these lines directly or edit this file.
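For illustration only, a db2nodes.cfg file for two Windows database partition servers might contain entries like the following; the machine names and values are hypothetical, not the output of any particular installation:
0 ServerA ServerA 0
1 ServerB ServerB 0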
To scale your partitioned database system, you add an entry for each database partition server to the db2nodes.cfg file. The dbpartitionnum value that you select for additional database partition servers must be in ascending order; however, gaps can exist in this sequence. You can choose to put a gap between the dbpartitionnum values if you plan to add logical partition servers and want to keep the nodes logically grouped in this file.
The hostname entry, the TCP/IP host name of the database partition server for use by the fast communication manager (FCM), is required.
If host names are supplied in the db2nodes.cfg file, instead of IP addresses, the database manager will dynamically try to resolve the host names. Resolution can be either local or through lookup at registered Domain Name Servers (DNS), as determined by the OS settings on the machine.
Starting with DB2 Version 9.1, both TCP/IPv4 and TCP/IPv6 protocols are supported. The method to resolve host names has changed.
While the method used in pre-Version 9.1 releases resolves the string exactly as it is defined in the db2nodes.cfg file, the method in Version 9.1 or later tries to resolve the Fully Qualified Domain Names (FQDN) when short names are defined in the db2nodes.cfg file. Specifying short names on machines that are configured for fully qualified host names can therefore lead to unnecessary delays in processes that resolve host names.
To avoid delays in DB2 commands that require host name resolution, use any of the following workarounds:
To catalog a TCP/IP node by its IPv4 address, so that no host name lookup is required, issue a command similar to the following:
db2 catalog tcpip4 node db2tcp2 remote 192.0.32.67 server db2inst1 with "Look up IPv4 address from 192.0.32.67"
To catalog a TCP/IP node by its IPv6 address, issue a command similar to the following:
db2 catalog tcpip6 node db2tcp3 remote 1080:0:0:0:8:800:200C:417A server 50000 with "Look up IPv6 address from 1080:0:0:0:8:800:200C:417A"
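A further precaution, sketched here with a hypothetical host name and the example address used above, is to map both the short name and the fully qualified domain name to the same address in the operating system hosts file, so that either form resolves locally without a DNS lookup:
192.0.32.67   serverA.example.com   serverA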
DB2 reserves a port range (for example, 60000 - 60003) in the /etc/services file for inter-partition communications at installation time. The logicalport field in db2nodes.cfg specifies which port in that range you want to assign to a particular logical partition server.
If there is no entry for this field, the default is 0. However, if you add an entry for the netname field, you must enter a number for the logicalport field.
If you are using logical database partitions, the logicalport value you specify must start at 0 and continue in ascending order (for example, 0,1,2).
Furthermore, if you specify a logicalport entry for one database partition server, you must specify a logicalport for each database partition server listed in your db2nodes.cfg file.
The logicalport field is optional only if you are not using logical database partitions or a high-speed interconnect.
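As an illustration of how logicalport maps to the reserved range, assume an instance named db2inst1 for which installation reserved ports 60000 - 60003; the entry names and port numbers shown here are assumptions, so check your own /etc/services file:
DB2_db2inst1      60000/tcp
DB2_db2inst1_1    60001/tcp
DB2_db2inst1_2    60002/tcp
DB2_db2inst1_END  60003/tcp
With this range, a database partition server that has logicalport 0 uses port 60000, logicalport 1 uses port 60001, and so on; the number of logical partition servers on a host cannot exceed the size of the reserved range.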
If an entry is specified for the netname field, all communication between database partition servers (except for communications as a result of the db2start, db2stop, and db2_all commands) is handled through the high-speed interconnect.
The netname parameter is required only if you are using a high-speed interconnect for database partition communications.
The resourcesetname parameter is supported only on AIX®, HP-UX, and the Solaris Operating System.
On AIX, this concept is known as "resource sets" and on the Solaris Operating System it is called "projects". Refer to your operating system's documentation for more information about resource management.
On HP-UX, the resourcesetname parameter is the name of a PRM group. Refer to the "HP-UX Process Resource Manager User's Guide" (B8733-90007) from HP for more information.
On Windows operating systems, process affinity for a logical node can be defined through the DB2PROCESSORS registry variable.
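A minimal sketch of setting this variable with the db2set command follows; the processor numbers are an assumption for illustration and must match your own hardware topology:
db2set DB2PROCESSORS=0,1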
On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The numactl system utility must be available, as well as a 2.6 kernel with NUMA policy support.
The netname parameter must be specified if the resourcesetname parameter is used.
Use the following example configurations to determine the appropriate configuration for your environment.
If you want four database partition servers on one physical workstation, called ServerA, update the db2nodes.cfg file as follows:
0 ServerA 0
1 ServerA 1
2 ServerA 2
3 ServerA 3
If you want one database partition server on each of two workstations, ServerA and ServerB, update the db2nodes.cfg file as follows:
0 ServerA 0
1 ServerB 0
If you want three database partition servers on ServerA and one on ServerB, with gaps left in the dbpartitionnum sequence for future logical partition servers, update the db2nodes.cfg file as follows:
4 ServerA 0
6 ServerA 1
8 ServerA 2
9 ServerB 0
If you want one database partition server on ServerA and two on ServerB, communicating over the high-speed interconnects switch1 and switch2, update the db2nodes.cfg file as follows:
0 ServerA 0 switch1
1 ServerB 0 switch2
2 ServerB 1 switch2
These restrictions apply to the following examples: the configurations do not use a high-speed interconnect, so the netname column (the fourth column) simply repeats the host name, and the resourcesetname column (the fifth column) is supported only when the netname column is also specified.
Here is an example of how to set up the resource set for AIX operating systems.
In this example, there is one physical node with 32 processors and 8 logical database partitions (MLNs). This example shows how to provide process affinity to each MLN.
1. Define the resource sets in the /etc/rsets file:
DB2/MLN1:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00000,sys/cpu.00001,sys/cpu.00002,sys/cpu.00003
DB2/MLN2:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00004,sys/cpu.00005,sys/cpu.00006,sys/cpu.00007
DB2/MLN3:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00008,sys/cpu.00009,sys/cpu.00010,sys/cpu.00011
DB2/MLN4:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00012,sys/cpu.00013,sys/cpu.00014,sys/cpu.00015
DB2/MLN5:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00016,sys/cpu.00017,sys/cpu.00018,sys/cpu.00019
DB2/MLN6:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00020,sys/cpu.00021,sys/cpu.00022,sys/cpu.00023
DB2/MLN7:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00024,sys/cpu.00025,sys/cpu.00026,sys/cpu.00027
DB2/MLN8:
    owner     = db2inst1
    group     = system
    perm      = rwr-r-
    resources = sys/cpu.00028,sys/cpu.00029,sys/cpu.00030,sys/cpu.00031
2. Enable memory affinity by running the following command:
vmo -p -o memory_affinity=1
3. Give the instance owner, db2inst1, permission to work with resource sets:
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH db2inst1
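As an optional check that is not part of the original procedure, the capabilities assigned to the instance owner can be displayed with the lsuser command:
lsuser -a capabilities db2inst1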
4. Add the resource set names as the fifth column in the db2nodes.cfg file:
1 regatta 0 regatta DB2/MLN1
2 regatta 1 regatta DB2/MLN2
3 regatta 2 regatta DB2/MLN3
4 regatta 3 regatta DB2/MLN4
5 regatta 4 regatta DB2/MLN5
6 regatta 5 regatta DB2/MLN6
7 regatta 6 regatta DB2/MLN7
8 regatta 7 regatta DB2/MLN8
This example shows how to use PRM groups for CPU shares on a machine with 4 CPUs and 4 MLNs, giving 24% of the CPU share to each MLN and leaving 4% for other applications. The DB2 instance name is db2inst1.
1. Define the PRM groups and their CPU shares in the PRM group configuration file:
OTHERS:1:4::
db2prm1:50:24::
db2prm2:51:24::
db2prm3:52:24::
db2prm4:53:24::
2. Add a user record for the DB2 instance owner to the PRM configuration:
db2inst1::::OTHERS,db2prm1,db2prm2,db2prm3,db2prm4
3. Initialize the PRM groups and enable the CPU manager:
prmconfig -i
prmconfig -e CPU
4. Add the PRM group names as the fifth column in the db2nodes.cfg file:
1 voyager 0 voyager db2prm1
2 voyager 1 voyager db2prm2
3 voyager 2 voyager db2prm3
4 voyager 3 voyager db2prm4
The PRM configuration (steps 1-3) can also be done by using the interactive GUI tool xprm.
On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The numactl system utility must be available, in addition to a 2.6 kernel with NUMA policy support. Refer to the man page for numactl for more information about NUMA support on Linux operating systems.
This example shows how to set up a four-node NUMA computer with each logical node associated with a NUMA node.
$ numactl --hardware
Output similar to the following displays:
available: 4 nodes (0-3)
node 0 size: 1901 MB
node 0 free: 1457 MB
node 1 size: 1910 MB
node 1 free: 1841 MB
node 2 size: 1910 MB
node 2 free: 1851 MB
node 3 size: 1905 MB
node 3 free: 1796 MB
Associate each logical node with a NUMA node by adding the NUMA node number as the fifth column in the db2nodes.cfg file:
0 hostname 0 hostname 0
1 hostname 1 hostname 1
2 hostname 2 hostname 2
3 hostname 3 hostname 3
Here is an example of how to set up the project for Solaris Version 9.
In this example, there is one physical node with eight processors: one CPU is used for the default project, three CPUs are used by the Application Server, and four CPUs are used for DB2. The instance name is db2inst1.
1. Create a resource pool configuration file, for example pool.db2, with the following contents:
create system hostname
create pset pset_default (uint pset.min = 1)
create pset db0_pset (uint pset.min = 1; uint pset.max = 1)
create pset db1_pset (uint pset.min = 1; uint pset.max = 1)
create pset db2_pset (uint pset.min = 1; uint pset.max = 1)
create pset db3_pset (uint pset.min = 1; uint pset.max = 1)
create pset appsrv_pset (uint pset.min = 3; uint pset.max = 3)
create pool pool_default (string pool.scheduler="TS"; boolean pool.default = true)
create pool db0_pool (string pool.scheduler="TS")
create pool db1_pool (string pool.scheduler="TS")
create pool db2_pool (string pool.scheduler="TS")
create pool db3_pool (string pool.scheduler="TS")
create pool appsrv_pool (string pool.scheduler="TS")
associate pool pool_default (pset pset_default)
associate pool db0_pool (pset db0_pset)
associate pool db1_pool (pset db1_pset)
associate pool db2_pool (pset db2_pset)
associate pool db3_pool (pset db3_pset)
associate pool appsrv_pool (pset appsrv_pset)
2. Edit the /etc/project file to add the DB2 projects and the Application Server project, as follows:
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
appsrv:4000:App Serv project:root::project.pool=appsrv_pool
db2proj0:5000:DB2 Node 0 project:db2inst1,root::project.pool=db0_pool
db2proj1:5001:DB2 Node 1 project:db2inst1,root::project.pool=db1_pool
db2proj2:5002:DB2 Node 2 project:db2inst1,root::project.pool=db2_pool
db2proj3:5003:DB2 Node 3 project:db2inst1,root::project.pool=db3_pool
3. Add the project names as the fifth column in the db2nodes.cfg file:
0 hostname 0 hostname db2proj0
1 hostname 1 hostname db2proj1
2 hostname 2 hostname db2proj2
3 hostname 3 hostname db2proj3
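As an optional check that is not part of the original procedure, the project definitions and the users assigned to them can be listed with the Solaris projects command:
projects -l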