Check Votedisk
[oracle@node1 ~]$ crsctl query css votedisk
0. 0 /dev/sdb
Located 1 voting disk(s).
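If the voting disk needs a mirror on this 11.1 stack, a minimal sketch (run as root with CRS stopped on all nodes; /dev/sdc is a hypothetical device):
#As root user, with CRS down on all nodes
# crsctl add css votedisk /dev/sdc -force
# crsctl query css votedisk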
Check OCR Disk
[oracle@node1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 488612
Used space (kbytes) : 3856
Available space (kbytes) : 484756
ID : 1366752866
Device/File Name : /dev/sda
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
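To add an OCR mirror (the output above shows none configured), a hedged sketch for 11.1 (run as root; /dev/sdc is a hypothetical device):
#As root user
# ocrconfig -replace ocrmirror /dev/sdc
# ocrcheck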
Check CRS stack status with all resources
[oracle@node1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....I1.inst application 0/5 0/0 ONLINE OFFLINE
ora....I2.inst application 0/5 0/0 ONLINE OFFLINE
ora.JEDI.db application 0/0 0/1 ONLINE OFFLINE
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE OFFLINE
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE OFFLINE
ora....E2.lsnr application 0/5 0/0 ONLINE OFFLINE
ora.node2.gsd application 0/5 0/0 ONLINE OFFLINE
ora.node2.ons application 0/3 0/0 ONLINE OFFLINE
ora.node2.vip application 0/0 0/0 ONLINE OFFLINE
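All of node2's resources are OFFLINE above; a sketch for bringing them back (names taken from this cluster's output):
[oracle@node1 ~]$ srvctl start nodeapps -n node2
[oracle@node1 ~]$ srvctl start asm -n node2
[oracle@node1 ~]$ srvctl start instance -d JEDI -i JEDI2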
[oracle@node1 ~]$ oifcfg getif
eth0 10.0.0.0 global public
eth0 20.0.0.0 global cluster_interconnect
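To reclassify an interface, a minimal sketch (eth1 is a hypothetical second NIC; adjust the subnet to your layout):
[oracle@node1 ~]$ oifcfg setif -global eth1/20.0.0.0:cluster_interconnect
[oracle@node1 ~]$ oifcfg delif -global eth0/20.0.0.0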
[oracle@node1 ~]$ srvctl config database
JEDI
[oracle@node1 ~]$ srvctl config database -d JEDI
node1 JEDI1 /u01/app/oracle/product/11.1.0/db_1
node2 JEDI2 /u01/app/oracle/product/11.1.0/db_1
[oracle@node1 ~]$ srvctl status database -d JEDI
Instance JEDI1 is running on node node1
Instance JEDI2 is not running on node node2
[oracle@node1 ~]$ srvctl config nodeapps -n node1
VIP exists.: /node1-vip/10.0.0.21/255.0.0.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
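To change an existing VIP address, a hedged sketch (run as root with the nodeapps stopped; 10.0.0.99 is a hypothetical new address):
#As root user
# srvctl stop nodeapps -n node1
# srvctl modify nodeapps -n node1 -A 10.0.0.99/255.0.0.0/eth0
# srvctl start nodeapps -n node1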
--Adding a new VIP network
#As root user
# srvctl add network -k 2 -S 192.168.10.0/255.255.255.0/eth0
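Once the second network exists, a VIP can be placed on it; a sketch using 11.2 syntax with a hypothetical address:
# srvctl add vip -n node1 -k 2 -A 192.168.10.21/255.255.255.0/eth0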
STOP CRS SERVICES
./crsctl stop cluster -all
./crsctl stop cluster
START CRS SERVICES
./crsctl start cluster -all
./crsctl start cluster
CHECK CRS SERVICES
./crsctl check cluster -all
./crsctl check cluster
./crsctl stat res -t -init
./crsctl start resource <resource-name1> <resource-name2> ... etc.
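For example, to start one of the OFFLINE resources from the listing above by name:
./crsctl start resource ora.node2.vip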
START HAS
crsctl start has
crsctl enable has -> enable Oracle High Availability Services autostart
crsctl config has -> check if Oracle High Availability Services autostart is enabled/disabled
STOP HAS
crsctl stop has
crsctl disable has -> disable Oracle High Availability Services autostart
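To verify the stack state after these operations:
crsctl check has
crsctl check crs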
CHECK RESOURCE STATUS
crsctl status resource -t
crsctl stop resource -all -c <cluster name>
START DATABASE
srvctl start database -d ORCL
STOP DATABASE
srvctl stop database -d ORCL
STOP JUST RAC instance
srvctl stop instance -d ORCL -i orcl1
START JUST RAC instance
srvctl start instance -d ORCL -i orcl1
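Both start and stop accept -o startup/shutdown options, for example:
srvctl stop instance -d ORCL -i orcl1 -o immediate
srvctl start instance -d ORCL -i orcl1 -o mount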
Adding db/node manually
srvctl add database -d db_name -o <oracle_home> -p <spfile> -a "comma-separated disk groups"
srvctl add instance -d db_name -i instance_name -n node_name
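A filled-in sketch using this cluster's names (the spfile path and disk group names are hypothetical):
srvctl add database -d JEDI -o /u01/app/oracle/product/11.1.0/db_1 -p +DATA/JEDI/spfileJEDI.ora -a "DATA,FRA"
srvctl add instance -d JEDI -i JEDI3 -n node3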
Disable db startup on next reboot
srvctl disable database -d db_name
Show rac configuration
srvctl config database -d db_name