11gR2 (11.2.0.3) add/delete node for RAC
Key points:
1. No other node needs to be stopped; the running nodes are not affected.
2. The installation media does not need to be downloaded again.
Steps:
--delete
Worked example: removing node 2 (thyme) from a two-node cluster, leaving node 1 (sage).
1. Reconfigure the RDBMS services in the cluster to account for the removal of node 2.
1.1 Reconfigure the Service plb1 so that it is only running on the remaining instance.
[oracle@sage ~]$ srvctl modify service -d db112i -s plb1 -n -i db112i1 -f
1.2 Examine the configuration to confirm the service no longer runs on instance db112i2 on node thyme.
[oracle@sage ~]$ srvctl status service -d db112i -s plb1
Service plb1 is running on instance(s) db112i1
[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
..
ora.db112i.plb1.svc
1 ONLINE ONLINE sage
2. Reconfigure the RDBMS instances in the cluster to account for the removal of node 2.
2.1 Remove the database instance. As this is an administrator-managed database, this can be done through DBCA: from the Instance Management section of the DBCA wizard, remove instance db112i2 from node 2 (thyme).
[oracle@sage ~]$ dbca
[oracle@sage ~]$
[oracle@sage ~]$ srvctl config database -d db112i
Database unique name: db112i
Database name: db112i
Oracle home: /opt/app/oracle/database/11.2/db_1
Oracle user: oracle
Spfile: +DATA1/db112i/spfiledb112i.ora
Domain: vmdom
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: db112i
Database instances: db112i1
Disk Groups: DATA1
Services: plb1
Database is administrator managed
[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
..
ora.db112i.db
1 ONLINE ONLINE sage Open
ora.db112i.plb1.svc
1 ONLINE ONLINE sage
..
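The `srvctl config database` output above can be checked mechanically: after the instance removal, the "Database instances:" line should name exactly one instance. A minimal sketch, with the srvctl output simulated so it runs anywhere (on a live system you would capture the line with `srvctl config database -d db112i | grep '^Database instances:'`):

```shell
# Sketch: confirm srvctl now reports a single instance. CONFIG simulates
# the relevant line of the `srvctl config database -d db112i` output above.
CONFIG="Database instances: db112i1"

INSTANCES=${CONFIG#Database instances: }    # strip the label
set -- $(echo "$INSTANCES" | tr ',' ' ')    # split the comma-delimited list
echo "instance count: $#"                   # prints: instance count: 1
```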
3. Remove the Node from the RAC Cluster
3.1 Using the installer, remove the failed node from the inventory of the remaining node(s).
[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
[oracle@sage bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/database/11.2/db_1 "CLUSTER_NODES={sage}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2601 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oracle/oraInventory
'UpdateNodeList' was successful.
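The effect of -updateNodeList can be verified in the central inventory: the home's NODE_LIST should now contain only the surviving node. A sketch against a sample fragment (the real file in this environment would be /opt/app/oracle/oraInventory/ContentsXML/inventory.xml, per the installer output above):

```shell
# Sketch: after -updateNodeList, the home's NODE_LIST in the central
# inventory should list only the surviving node. We write a sample of the
# relevant fragment so the check is demonstrable without a live inventory.
cat > /tmp/inventory_sample.xml <<'EOF'
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/database/11.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="sage"/>
   </NODE_LIST>
</HOME>
EOF

# After a successful update, only the remaining node should appear:
grep 'NODE NAME' /tmp/inventory_sample.xml
```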
4. Remove the Node from the Grid Cluster
The process for removing a failed node is based on the node-deletion procedures documented in the Grid Infrastructure and RAC administration guides.
From any node that you are not deleting, run the following commands from the Grid_home/bin directory as root to delete the node from the cluster:
4.1 Stop the VIP resource for the node thyme
[root@sage bin]# ./srvctl stop vip -i thyme
4.2 Remove the VIP for the node thyme
[root@sage bin]# ./srvctl remove vip -i thyme -f
4.3 Check the state of the environment and ensure the VIP for node thyme is removed.
[root@sage bin]# ./crsctl stat res -t
..
ora.sage.vip
1 ONLINE ONLINE sage
..
4.4 Remove node 2 (thyme) from the Grid Infrastructure clusterware.
# crsctl delete node -n thyme
4.5 As the owner of the Grid Infrastructure installation, perform the following to clean up the Grid Infrastructure inventory on the remaining nodes (in this case node 1, sage).
[root@sage bin]# su - oracle
[oracle@sage ~]$ . oraenv db112i1
[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
[oracle@sage ~]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/grid "CLUSTER_NODES={sage}" CRS=TRUE -silent
4.6 As root, list the nodes that are part of the cluster to confirm that thyme has been removed successfully.
At the end of this process only the node sage remains in the cluster.
[root@sage bin]# ./olsnodes
sage
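The olsnodes check above can be scripted so that a lingering node fails loudly rather than relying on eyeballing the output. A sketch, with the olsnodes output simulated (on a live cluster you would use `NODES=$(/opt/app/oracle/product/grid/bin/olsnodes)`):

```shell
# Sketch: fail if the deleted node still appears in the cluster member
# list. NODES stands in for real `olsnodes` output.
NODES="sage"
DELETED="thyme"

case " $NODES " in
  *" $DELETED "*) echo "ERROR: $DELETED is still registered"; exit 1 ;;
  *)              echo "OK: $DELETED removed; remaining: $NODES" ;;
esac
```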
Generic node-deletion procedure, as documented:
1. Determine whether the node is pinned:
$ olsnodes -s -t
If the node is pinned, run the crsctl unpin css command as root. Otherwise, proceed to the next step.
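The pin decision can be automated by parsing the `olsnodes -s -t` output, whose lines have the form "<node> <Active|Inactive> <Pinned|Unpinned>". A sketch with one such line simulated:

```shell
# Sketch: decide whether `crsctl unpin css` is needed before deleting a
# node. STATE simulates one line of `olsnodes -s -t` output.
STATE="thyme Inactive Pinned"

set -- $STATE
NODE=$1; PIN=$3
if [ "$PIN" = "Pinned" ]; then
  echo "run as root first: crsctl unpin css -n $NODE"
else
  echo "$NODE is unpinned; proceed with deletion"
fi
```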
2. Disable the Oracle Clusterware applications and daemons running on the node.
Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted:
# ./rootcrs.pl -deconfig -deinstall -force
Note: Before you run this command, you must stop the EM agent:
$ emctl stop dbconsole
3. From any node that you are not deleting, run as root:
# crsctl delete node -n node_to_be_deleted
4. On the node you want to delete, run the following command from the Grid_home/oui/bin directory, as the user that installed Oracle Clusterware, where node_to_be_deleted is the name of the node that you are deleting:
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local
5. For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:
$ Grid_home/deinstall/deinstall -local
Caution: If you do not specify the -local flag, then the command removes the Grid Infrastructure home from every node in the cluster.
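Given how destructive the missing flag is, a small guard is cheap insurance. The wrapper below is our own illustration, not an Oracle-supplied tool; it only echoes the command it would run, refusing when -local is absent:

```shell
# Sketch: a defensive wrapper that refuses to run deinstall without
# -local, since omitting it removes the Grid home from EVERY node.
safe_deinstall() {
  case " $* " in
    *" -local "*) echo "would run: \$Grid_home/deinstall/deinstall $*" ;;
    *)            echo "refusing: -local flag missing"; return 1 ;;
  esac
}

safe_deinstall            # blocked: prints the refusal
safe_deinstall -local     # allowed: prints the command it would run
```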
6. On any node other than the node you are deleting, run the following command
from the Grid_home/oui/bin directory where remaining_nodes_list is a
comma-delimited list of the nodes that are going to remain part of your cluster:
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
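Steps 1 through 6 above can be condensed into one reference sequence. Each command must be run on the node and as the user the corresponding step states, so treat this as a hedged checklist (placeholders Grid_home, node_to_be_deleted and remaining_nodes_list as in the guide), not a script to execute as-is:

```shell
# 1. On any surviving node: check pin state; unpin first if "Pinned"
olsnodes -s -t
# crsctl unpin css -n node_to_be_deleted       (as root, only if pinned)

# 2. On the node being deleted: stop the EM agent, then deconfigure (root)
emctl stop dbconsole
Grid_home/crs/install/rootcrs.pl -deconfig -deinstall -force

# 3. On a surviving node (root): drop the node from the cluster
crsctl delete node -n node_to_be_deleted

# 4. On the node being deleted (Grid owner): shrink its local inventory
Grid_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Grid_home \
  "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

# 5. Local home only: deinstall -- never drop the -local flag
Grid_home/deinstall/deinstall -local

# 6. On a surviving node (Grid owner): update the remaining inventories
Grid_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Grid_home \
  "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
```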
--add
1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment.
2. Verify the integrity of the cluster and node3:
$ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
3. Extend the Grid Infrastructure home to node3 by running addNode.sh from the Grid_home/oui/bin directory on node1. (Several nodes can be added at once, e.g. "CLUSTER_NEW_NODES={node3,node4,node5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,node4-vip,node5-vip}".)
If you are using Grid Naming Service (GNS), run the following command:
$ ./addNode.sh "CLUSTER_NEW_NODES={node3}"
If you are not using GNS, run the following command:
$ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
Alternatively, supply the values in a response file:
$ ./addNode.sh -responseFile file_name
$ vi file_name
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={node3}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}
4. If you have an Oracle RAC or Oracle RAC One Node database configured on the
cluster and you have a local Oracle home, then do the following to extend the
Oracle database home to node3:
$ $Oracle_home/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node3}"
Then run the $Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
5. Run the Grid_home/root.sh script on node3 as root, and run any subsequent scripts as instructed.
6. Run the following CVU command to check cluster integrity.
$ cluvfy stage -post nodeadd -n node3 [-verbose]
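The addition steps above, condensed into one reference sequence for the non-GNS case (same hedging as before: run each command on the node and as the user the step states; node3/node3-vip are the guide's placeholders):

```shell
# 1. From an existing node: precheck the new node
cluvfy stage -pre nodeadd -n node3 -verbose

# 2. From an existing node (Grid owner): extend the Grid home;
#    drop CLUSTER_NEW_VIRTUAL_HOSTNAMES if GNS manages the VIPs
Grid_home/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node3}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

# 3. On node3, as root: wire the new node into the cluster
Grid_home/root.sh

# 4. For a local RAC home: extend it too, then run its root.sh on node3
Oracle_home/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node3}"
# Oracle_home/root.sh                             (on node3, as root)

# 5. From an existing node: postcheck cluster integrity
cluvfy stage -post nodeadd -n node3 -verbose
```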
Problem encountered:
The precheck failed, so the installation could not continue, even though the prerequisites were in fact satisfied. For example, the directory /u01/11.2.0/grid exists, yet the precheck reported it as missing.
prod02:/u01/11.2.0/grid/oui/bin$./addNode.sh -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "prod02"
:::::
Checking CRS home location...
PRVG-1013 : The path "/u01/11.2.0/grid" does not exist or cannot be created on the nodes to be added
Shared resources check for node addition failed
Check failed on nodes:
prod01
Checking node connectivity...
This can be worked around by setting the environment variable IGNORE_PREADDNODE_CHECKS=Y, as follows.
prod02:/u01/11.2.0/grid/oui/bin$cat addNode.sh
#!/bin/sh
OHOME=/u01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]   # <-- the check the variable bypasses
then
$ADDNODE
EXIT_CODE=$?;
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
$CHECK_NODEADD
EXIT_CODE=$?;
if [ $EXIT_CODE -eq 0 ]
then
$ADDNODE
EXIT_CODE=$?;
fi
fi
exit $EXIT_CODE ;
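The decisive branch is that IGNORE_PREADDNODE_CHECKS test. A simplified standalone sketch of the same logic (the real runInstaller and check_nodeadd.pl calls are replaced with echoes, the precheck is made to fail the way PRVG-1013 did above, and the original's extra "script missing" condition is omitted):

```shell
# Sketch: the branch of addNode.sh that the workaround short-circuits.
IGNORE_PREADDNODE_CHECKS=Y                     # the workaround being applied

run_addnode()   { echo "runInstaller -addNode ..."; }
run_prechecks() { echo "check_nodeadd.pl -pre ..."; return 1; }  # simulated failure

if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" ]; then
  run_addnode                    # prechecks skipped entirely
else
  run_prechecks && run_addnode   # addNode runs only if the checks pass
fi
```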
prod02:/u01/11.2.0/grid/oui/bin$export IGNORE_PREADDNODE_CHECKS=Y
prod02:/u01/11.2.0/grid/oui/bin$./addNode.sh -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4727 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes prod01 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/11.2.0/grid
New Nodes
Space Requirements
New Nodes
prod01
/u01: Required 3.91GB : Available 17.60GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.4
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Server) 11.2.0.3.0
Installation Plugin Files 11.2.0.3.0
Universal Storage Manager Files 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Automatic Storage Management Assistant 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.3.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Oracle Net Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Enterprise Manager plugin Common Files 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.3.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Cluster Verification Utility Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle LDAP administration 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
PL/SQL 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Cluster Ready Services Files 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Monday, July 6, 2015 7:53:36 PM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Monday, July 6, 2015 7:53:39 PM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Monday, July 6, 2015 7:57:51 PM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/11.2.0/grid/root.sh #On nodes prod01
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
prod02:/u01/11.2.0/grid/oui/bin$