Oracle Maximum Availability Architecture
Ricardo Portilho Proni (ricardo@nervinformatica.com.br)
This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Brazil license.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/br/.
High Availability
• Reliability
• Recoverability
• Timely error detection
• Operational continuity
Characteristics of High Availability
• Business impact analysis
• Cost of downtime
• Recovery Time Objective (RTO)
• Recovery Point Objective (RPO)
• Manageability goal
• Total Cost of Ownership (TCO)
• Return on Investment (ROI)
High Availability Analysis
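RTO and availability targets map directly to a yearly downtime budget. As a quick illustration (an addition, not part of the original slides), the budget for a given availability percentage can be computed:

```shell
# Hypothetical helper (not from the original material): minutes of allowed
# downtime per year for an availability target, using 525600 minutes/year.
downtime_minutes() {
  awk -v a="$1" 'BEGIN { printf "%.1f\n", 525600 * (100 - a) / 100 }'
}

downtime_minutes 99.9    # "three nines": 525.6 minutes (~8.8 hours) per year
downtime_minutes 99.999  # "five nines": 5.3 minutes per year
```

A 99.99% target, for comparison, leaves roughly 52.6 minutes of downtime per year.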
• Tier 1 (Billing, Sales)
• Tier 2 (Purchasing, Inventory)
• Tier 3 (BI, Development)
Systems and High Availability
• Maximum tolerated downtime.
• Maximum tolerated outage frequency.
• Easily measurable costs (lost sales, idle employees, contractual penalties)
• Hard-to-measure costs (lawsuits)
• Non-measurable costs (negative publicity, angry customers)
Costs and High Availability
High Availability Deployment
• Fast-Start Fault Recovery
• Oracle Restart
• Oracle Real Application Clusters and Oracle Clusterware
• Oracle RAC One Node
• Oracle Data Guard
• Oracle GoldenGate / Oracle Streams
• Oracle Flashback Technology
• Oracle Automatic Storage Management
• Fast Recovery Area
• Recovery Manager
• Data Recovery Advisor
• Oracle Secure Backup
• Oracle Security Features
• LogMiner
• Oracle Exadata Storage Server Software (Exadata Cell)
• Oracle Exadata Database Machine
• Oracle Database File System (DBFS)
• Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
• Client Failover
• Automatic Block Repair
• Corruption Prevention, Detection, and Repair
Oracle High Availability Solutions
• Operating system and hardware upgrades -> Oracle RAC
• Oracle Database patches -> Oracle RAC
• Oracle Grid Infrastructure upgrades and patches -> Oracle RAC
• Storage migration -> Oracle ASM
• Migrating to Exadata storage -> Oracle MAA best practices
• Upgrading Exadata storage -> Exadata Patch Manager
• Migrating a single-instance database to Oracle RAC -> Oracle Grid Infrastructure
• Migrating to Oracle ASM -> Oracle Data Guard
• Migrating a single-instance database to Oracle RAC -> Oracle Data Guard
• Patch set and database upgrades -> Oracle Data Guard using SQL Apply
• Oracle interim patches, Oracle Clusterware upgrades and patches, Oracle ASM upgrades, operating system and hardware upgrades -> Oracle Data Guard Standby-First Patch Apply
• Migration across Windows and Linux -> Oracle Data Guard
• Platform migration across same-endian platforms -> Transportable database
• Platform migration across different-endian platforms -> Transportable tablespace
• Patch set and database upgrades, platform migration, rolling upgrades, and when different character sets are required -> Oracle GoldenGate and Oracle Streams
• Application upgrades -> Online Application Maintenance and Upgrades
Planned Downtime
• Site failures -> Oracle Data Guard
• Site failures -> Oracle GoldenGate and Oracle Streams
• Site failures -> Recovery Manager
• Computer failures -> Oracle Real Application Clusters and Oracle Clusterware
• Computer failures -> Oracle RAC One Node
• Computer failures -> Fast-Start Fault Recovery
• Computer failures -> Oracle Data Guard
• Computer failures -> Oracle GoldenGate and Oracle Streams
• Storage failures -> Oracle Automatic Storage Management
• Storage failures -> Oracle Data Guard
• Storage failures -> RMAN with Fast Recovery Area and Oracle Secure Backup
• Storage failures -> Oracle GoldenGate and Oracle Streams
• Data corruption -> Oracle Exadata Storage Server Software (Exadata Cell) and Oracle ASM
• Data corruption -> Corruption Prevention, Detection, and Repair
• Data corruption -> Data Recovery Advisor and RMAN with Fast Recovery Area
• Data corruption -> Oracle Data Guard
• Data corruption -> Oracle GoldenGate and Oracle Streams
• Human errors -> Oracle Security Features
• Human errors -> Oracle Flashback Technology
• Human errors -> LogMiner
• Lost writes -> Oracle Data Guard, RMAN, DB_LOST_WRITE_PROTECT
• Lost writes -> Oracle Data Guard, Oracle Exadata Storage Server Software (Exadata Cell)
• Hangs or slowdowns -> Oracle Database and Oracle Enterprise Manager
Unplanned Downtime
High Availability Overview:
http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e17157/toc.htm
High Availability Best Practices:
http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10803/toc.htm
More Information
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09
Contingency Environment
Oracle Data Guard Physical Standby: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10
Observer Environment
Oracle Client: nerv15
Scenario 1: Oracle RAC + Oracle Data Guard
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09
Contingency Environment
Oracle Database: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10
Scenario 2: Oracle RAC + Oracle GoldenGate
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09
Contingency Environment
Oracle RAC: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10
Observer Environment
NFS: nerv15
Scenario 3: Oracle RAC Extended
Lab 1 – OEL 6 Installation
Hands On!
Lab 1.1: OEL 6 Installation
On machines nerv01, nerv02, and nerv11, install OEL.
- Screen 1: Install or upgrade an existing system
- Screen 2: Skip
- Screen 3: Next
- Screen 4: English (English), Next
- Screen 5: Brazilian ABNT2, Next
- Screen 6: Basic Storage Devices, Next
- Screen 7: Fresh Installation, Next
- Screen 8: nerv01.localdomain, Next
- Screen 9: America/Sao Paulo, Next
- Screen 10: Nerv2014, Nerv2014, Next
- Screen 11: Create Custom Layout, Next
Lab 1.2: OEL 6 Installation
- Screen 12: Create the partitions as below, then Next:
sda1  200 MB      /boot
sda2  100000 MB   /
sda3  20000 MB    /home
sda5  16384 MB    swap
sda6  10000 MB    /var
sda7  10000 MB    /tmp
sda8  remaining space  /u01
- Screen 13: Format
- Screen 14: Write changes to disk
- Screen 15: Next
- Screen 16: Minimal
- Screen 17: Reboot
- Remove the DVD.
Lab 2 – OEL 6 Configuration
Hands On!
On machines nerv01, nerv02, and nerv11, configure the network interfaces.
Lab 2.1 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, update the operating system and install the prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-11gR2-preinstall
# yum -y install oracleasm-support
# yum -y install unzip iscsi-initiator-utils java-1.7.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686
Lab 2.2 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, APPEND TO THE END of the /etc/hosts file:
# Public
192.168.0.101 nerv01.localdomain nerv01
192.168.0.102 nerv02.localdomain nerv02
192.168.0.121 nerv11.localdomain nerv11
# Private
192.168.1.101 nerv01-priv.localdomain nerv01-priv
192.168.1.102 nerv02-priv.localdomain nerv02-priv
192.168.1.121 nerv11-priv.localdomain nerv11-priv
# Virtual
192.168.0.111 nerv01-vip.localdomain nerv01-vip
192.168.0.112 nerv02-vip.localdomain nerv02-vip
192.168.0.131 nerv11-vip.localdomain nerv11-vip
# Storage
192.168.0.201 nerv09.localdomain nerv09
192.168.0.202 nerv10.localdomain nerv10
# Client
192.168.0.2 nerv15.localdomain nerv15
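The per-node entries follow a fixed pattern: the private network (192.168.1.x) reuses the public last octet, and each VIP is the public address plus 10. A loop can therefore generate them; this helper is a convenience sketch added here, not part of the original lab:

```shell
# Sketch (not from the original lab): generate the Public/Private/Virtual
# /etc/hosts lines from the node:last-octet pairs used in this course.
gen_hosts() {
  for pair in 01:101 02:102 11:121; do
    n=${pair%%:*} ip=${pair##*:}
    echo "192.168.0.${ip} nerv${n}.localdomain nerv${n}"
    echo "192.168.1.${ip} nerv${n}-priv.localdomain nerv${n}-priv"
    echo "192.168.0.$((ip + 10)) nerv${n}-vip.localdomain nerv${n}-vip"
  done
}
gen_hosts   # review, then append the output to /etc/hosts on each node
```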
Lab 2.3 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/11.2.0.4/grid
# mkdir -p /u01/app/oracle/product/11.2.0.4/db_1
# mkdir -p /u01/shared_config01
# mkdir -p /u01/shared_config02
# mkdir -p /u01/shared_config03
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle (set the oracle user's password to: Nerv2014)
Lab 2.4 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, change SELinux from "enforcing" to "permissive".
# vi /etc/selinux/config
On machines nerv01, nerv02, and nerv11, disable the firewall.
# chkconfig iptables off
# chkconfig ip6tables off
On machines nerv01, nerv02, and nerv11, disable NTP.
# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot
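The SELinux change can also be done non-interactively instead of with vi. The sed sketch below (an addition, not from the original lab) edits a demo copy of the file; on the lab machines, point it at /etc/selinux/config instead:

```shell
# Sketch: non-interactive equivalent of editing the SELinux config with vi.
# cfg is a throwaway demo copy; on the nodes use cfg=/etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip enforcing -> permissive in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```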
Lab 2.5 – OEL 6 Configuration
On machines nerv01 and nerv02, as the oracle user, APPEND TO THE END of /home/oracle/.bash_profile the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
export GRID_HOME=/u01/app/11.2.0.4/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Lab 2.6 – OEL 6 Configuration
On machine nerv11, as the oracle user, APPEND TO THE END of /home/oracle/.bash_profile the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv11.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0.4/db_1
export GRID_HOME=$ORACLE_BASE/grid/11.2.0.4/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Lab 2.7 – OEL 6 Configuration
Lab 3 – Storage
Hands On!
On machines nerv09 and nerv10, create 3 directories.
# mkdir /shared_config01
# mkdir /shared_config02
# mkdir /shared_config03
On machines nerv09 and nerv10, add to the /etc/exports file:
/shared_config01 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_config02 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_config03 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
On machines nerv09 and nerv10, start the NFS server:
# service nfs start
# chkconfig nfs on
Lab 3.1 – Storage (NFS)
On machines nerv01, nerv02, and nerv11, add the THREE lines below to /etc/fstab.
nerv09:/shared_config01 /u01/shared_config01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config02 /u01/shared_config02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config03 /u01/shared_config03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
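The three fstab lines differ only in the export name (the options themselves come from the lab: hard,nointr keeps I/O retrying through storage hiccups, and actimeo=0,noac disables attribute caching so one node never sees stale metadata for files another node just wrote). As a convenience sketch added here, they can be generated with a loop:

```shell
# Sketch (not from the original lab): emit the three NFS fstab lines.
gen_fstab() {
  opts="rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac"
  for d in shared_config01 shared_config02 shared_config03; do
    echo "nerv09:/${d} /u01/${d} nfs ${opts} 0 0"
  done
}
gen_fstab   # review, then append to /etc/fstab on nerv01, nerv02 and nerv11
```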
Lab 3.2 – Storage (NFS)
On machines nerv01, nerv02, and nerv11, run:
# mount /u01/shared_config01
# mount /u01/shared_config02
# mount /u01/shared_config03
On machine nerv01, run:
# mkdir /u01/shared_config01/rac01
# mkdir /u01/shared_config02/rac01
# mkdir /u01/shared_config03/rac01
# chown -R oracle:oinstall /u01/shared_config01/rac01
# chown -R oracle:oinstall /u01/shared_config02/rac01
# chown -R oracle:oinstall /u01/shared_config03/rac01
On machine nerv11, run:
# mkdir /u01/shared_config01/rac11
# mkdir /u01/shared_config02/rac11
# mkdir /u01/shared_config03/rac11
# chown -R oracle:oinstall /u01/shared_config01/rac11
# chown -R oracle:oinstall /u01/shared_config02/rac11
# chown -R oracle:oinstall /u01/shared_config03/rac11
Lab 3.3 – Storage (NFS)
Lab 3.4 – Storage (ASM)
On machines nerv09 and nerv10, create 4 partitions of 10 GB each, without formatting them.
On machines nerv09 and nerv10, configure the iSCSI server.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
    backing-store /dev/sda6
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
...
# service tgtd start
# chkconfig tgtd on
Lab 3.5 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, start the iSCSI initiator service.
# service iscsid start
# chkconfig iscsid on
On machines nerv01, nerv02, and nerv11, discover and log in to the disks exported by the storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-01 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-02 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-03 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-04 -p 192.168.0.201 -l
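The four login commands differ only in the target suffix, so they can be driven by a loop. This sketch (an addition, not part of the original lab) only prints the commands so they can be reviewed before being run on the nodes:

```shell
# Sketch: emit the four iscsiadm login commands used in this lab.
# Target names and portal address are the ones from the lab above.
iscsi_logins() {
  for t in 01 02 03 04; do
    echo iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-${t} -p 192.168.0.201 -l
  done
}
iscsi_logins          # preview the commands
# iscsi_logins | sh   # on the nodes, actually run the logins
```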
On machines nerv01, nerv02, and nerv11, leave ONLY the new disks in the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
Lab 3.6 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, REBOOT, then check that the disks were configured locally.
# fdisk -l
On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdb
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
# fdisk /dev/sdc
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
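The interactive fdisk dialog can also be scripted by piping the same keystrokes in. This is a sketch added here, not part of the original lab, and it is destructive if pointed at the wrong disk:

```shell
# Sketch: the answer sequence fed to fdisk interactively above
# (n, p, 1, two defaults, w), reproduced with printf.
fdisk_answers() { printf 'n\np\n1\n\n\nw\n'; }

fdisk_answers                      # preview the keystrokes
# fdisk_answers | fdisk /dev/sdb   # on the nodes: same effect as the dialog
```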
Lab 3.7 – Storage (ASM)
On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdd
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
# fdisk /dev/sde
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
On machine nerv02, detect the new partitions.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde
On machines nerv01, nerv02, and nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
oinstall <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status
On machines nerv01 and nerv11, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK00 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sde1
On machine nerv02, detect the disks just created.
# /etc/init.d/oracleasm scandisks
Lab 3.8 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK00
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03
On machines nerv01, nerv02, and nerv11, check the device files.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
brw-rw----. 1 oracle oinstall 8, 49 Mar 3 08:40 DISK02
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK03
Lab 3.9 – Storage (ASM)
Lab 4 – Grid Infrastructure
Hands On!
On machine nerv01, configure passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa
<enter> <enter> <enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ ssh nerv02
[oracle@nerv02 ~]$ ssh-keygen -t rsa
<enter> <enter> <enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ ssh nerv01
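ssh-keygen can also take its answers on the command line, replacing the three <enter> presses above. This sketch is an addition, not part of the original lab; it writes to a temporary path for demonstration, whereas on the nodes the default ~/.ssh/id_rsa applies:

```shell
# Sketch: non-interactive key generation. -N "" supplies the empty
# passphrase, -f the key path, -q suppresses the banner.
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "${keydir}/id_rsa" -q
ls "${keydir}"
```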
On machines nerv01 and nerv11, as the oracle user, adjust the Response File, then unzip and run the Grid Infrastructure installer.
$ unzip -q p13390677_112040_Linux-x86-64_3of7.zip
$ /home/oracle/grid/runInstaller -silent -responseFile /home/oracle/grid_nerv01.rsp
Lab 4.1 – Grid Infrastructure
On machine nerv01, as root, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh
On machine nerv02, as root, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh
On machine nerv11, as root, run the following scripts.
# /u01/oraInventory/orainstRoot.sh
# /u01/app/grid/11.2.0.4/grid/root.sh
Lab 4.2 – Grid Infrastructure
On machine nerv01, update the Cluster Inventory.
$ /u01/app/11.2.0.4/grid/oui/bin/runInstaller -silent -updateNodeList \
  ORACLE_HOME="/u01/app/11.2.0.4/grid" LOCAL_NODE=nerv01 \
  CLUSTER_NODES=nerv01,nerv02 CRS=true
On machine nerv01, create the Grid LISTENER.
$ /u01/app/11.2.0.4/grid/bin/netca \
  -silent -responseFile /u01/app/11.2.0.4/grid/assistants/netca/netca.rsp
On machine nerv11, update the Cluster Inventory.
$ /u01/app/grid/11.2.0.4/grid/oui/bin/runInstaller -silent -updateNodeList \
  ORACLE_HOME="/u01/app/grid/11.2.0.4/grid" LOCAL_NODE=nerv11 \
  CLUSTER_NODES=nerv11 CRS=true
On machine nerv11, create the Grid LISTENER.
$ /u01/app/grid/11.2.0.4/grid/bin/netca \
  -silent -responseFile /u01/app/grid/11.2.0.4/grid/assistants/netca/netca.rsp
Lab 4.3 – Grid Infrastructure
Lab 5 – Oracle Database Software
Hands On!
On machines nerv01 and nerv11, as the oracle user, adjust the Response File, then unzip and run the Oracle Database Software installer.
$ unzip -q p13390677_112040_Linux-x86-64_1of7.zip
$ unzip -q p13390677_112040_Linux-x86-64_2of7.zip
$ /home/oracle/database/runInstaller -silent -responseFile /home/oracle/db_nerv01.rsp
On machine nerv01, as root, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
On machine nerv02, as root, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
On machine nerv11, as root, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
Lab 5.1 – Oracle Database Software
Lab 6 – ASM
Hands On!
On machines nerv01 and nerv11, create the ASM instance with the DATA DISKGROUP.
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/asmca -silent -configureASM -sysAsmPassword Nerv2014 \
  -asmsnmpPassword Nerv2014 -diskGroupName DATA -diskList \
  '/dev/oracleasm/disks/DISK00,/dev/oracleasm/disks/DISK01' -redundancy NORMAL \
  -au_size 1 -compatible.asm '11.2.0.0.0' -compatible.rdbms '11.2.0.0.0' \
  -diskString '/dev/oracleasm/disks/*'
$ $GRID_HOME/bin/crsctl status res -t
Lab 6.1 – ASM
On machine nerv01, create the FRA DISKGROUP and enable it on the other node.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK '/dev/oracleasm/disks/DISK02' NAME DATA_0002, '/dev/oracleasm/disks/DISK03' NAME DATA_0003;
SQL> EXIT;
$ srvctl start diskgroup -g FRA -n nerv02
$ srvctl enable diskgroup -g FRA -n nerv02
$ $GRID_HOME/bin/crsctl status res -t
On machine nerv11, create the FRA DISKGROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK '/dev/oracleasm/disks/DISK02' NAME DATA_0002, '/dev/oracleasm/disks/DISK03' NAME DATA_0003;
SQL> EXIT;
$ $GRID_HOME/bin/crsctl status res -t
Lab 6.2 – ASM
Lab 7 – Oracle Database
Hands On!
On machine nerv01, create the database.
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL1
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName ORCL -sid ORCL \
  -sysPassword Nerv2014 -systemPassword Nerv2014 \
  -storageType ASM -asmsnmpPassword Nerv2014 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv01,nerv02 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 40 \
  -sampleSchema true \
  -emConfiguration NONE \
  -continueOnNonFatalErrors false
Lab 7.1 – Oracle Database
On machine nerv11, create the database.
$ export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName ORCL -sid ORCL \
  -sysPassword Nerv2014 -systemPassword Nerv2014 \
  -storageType ASM -asmsnmpPassword Nerv2014 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv11 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 20 \
  -sampleSchema true \
  -emConfiguration NONE \
  -continueOnNonFatalErrors false
Lab 7.2 – Oracle Database
Lab 8 – RAC + Data Guard
Hands On!
Lab 8.1 – Data Guard
On machines nerv01, nerv02, and nerv11, set the ORACLE_HOME tnsnames.ora as below.
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL)
    )
  )
DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv11.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DR)
    )
  )
Lab 8.2 – Data Guard
On machines nerv01, nerv02, and nerv11, append the lines below to $GRID_HOME/network/admin/listener.ora.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCL_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME = ORCL1)
    )
  )
On machines nerv01, nerv02, and nerv11, test the new LISTENER configuration.
$ export ORACLE_HOME=$GRID_HOME
$ $GRID_HOME/bin/lsnrctl status
$ $GRID_HOME/bin/lsnrctl stop
$ $GRID_HOME/bin/lsnrctl start
$ $GRID_HOME/bin/lsnrctl status
Lab 8.3 – Data Guard
On machine nerv01, enable the Data Guard prerequisites (the Fast Recovery Area size must be set before its destination).
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL1
$ srvctl stop database -d ORCL
$ srvctl start instance -d ORCL -i ORCL1 -o mount
$ sqlplus / AS SYSDBA
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> EXIT;
$ srvctl stop database -d ORCL
$ srvctl start database -d ORCL
On machine nerv01, change the SNAPSHOT CONTROLFILE location.
$ rman target /
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf.f';
Lab 8.4 – Data Guard
On machine nerv01, create a STANDBY CONTROLFILE.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stb.ctl';
On machine nerv01, run a backup of the database and archived logs.
RMAN> BACKUP DATABASE FORMAT '/home/oracle/Backup_Banco_%U.rman';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/home/oracle/Backup_Archives_%U.rman';
On machine nerv01, copy the PASSWORD FILE to machine nerv11.
$ scp $ORACLE_HOME/dbs/orapwORCL1 nerv11:$ORACLE_HOME/dbs/orapwORCL
On machine nerv01, copy the STANDBY CONTROLFILE to machine nerv11.
$ scp /home/oracle/stb.ctl nerv11:/home/oracle/
On machine nerv01, copy the BACKUP to machine nerv11.
$ scp /home/oracle/Backup_*.rman nerv11:/home/oracle/
Lab 8.5 – Data Guard
On machine nerv11, change the DB_UNIQUE_NAME parameter.
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='DR' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;
On machine nerv11, restore the CONTROLFILE.
RMAN> RESTORE CONTROLFILE FROM '/home/oracle/stb.ctl';
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
On machine nerv11, fix the RMAN metadata.
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED BACKUP;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
RMAN> CATALOG START WITH '/home/oracle/Backup';
On machine nerv11, restore the database.
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
Lab 8.6 – Data Guard
On machine nerv11, enable the Data Guard prerequisites (again, size before destination).
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FLASHBACK ON;
On machine nerv01, configure the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 = '+FRA/DR1.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 = '+FRA/DR2.DAT' SCOPE=BOTH;
Test name resolution (ping) between the three machines.
Test the SQL*Net configuration (tnsping) between the three machines.
Test PASSWORD FILE authentication between the three machines.
[oracle@nerv01 ~]$ sqlplus SYS/Nerv2014@DR AS SYSDBA
[oracle@nerv02 ~]$ sqlplus SYS/Nerv2014@DR AS SYSDBA
[oracle@nerv11 ~]$ sqlplus SYS/Nerv2014@PROD AS SYSDBA
On machines nerv01 and nerv11, start the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;
Lab 8.7 – Data Guard
On machine nerv01, create the Data Guard Broker configuration.
$ dgmgrl SYS/Nerv2014@PROD
DGMGRL> CREATE CONFIGURATION 'DRSolution' AS PRIMARY DATABASE IS ORCL CONNECT IDENTIFIER IS PROD;
On machine nerv01, add machine nerv11 to the configuration.
DGMGRL>
On machine nerv01, enable the configuration.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
On the three machines, follow the Alert Log.
Lab 8.8 – Data Guard
On machine nerv11, create STANDBY LOGFILEs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
On machine nerv01, create STANDBY LOGFILEs.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
On machine nerv01, inspect the details of a database.
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'ArchiveLagTarget'=600;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'StandbyFileManagement'=AUTO;
DGMGRL> SHOW DATABASE ORCL 'ArchiveLagTarget';
DGMGRL> SHOW DATABASE ORCL 'StandbyFileManagement';
On machine nerv01, change the Protection Mode.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
On machine nerv01, verify that the Protection Mode was changed.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> SHOW DATABASE VERBOSE DR;
DGMGRL> SHOW INSTANCE VERBOSE 'ORCL1' ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE 'ORCL2' ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE 'ORCL' ON DATABASE DR;
Lab 8.9 – Data Guard
On machine nerv01, run a SWITCHOVER to machine nerv11, watching the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO DR;
On machine nerv11, switch back to machine nerv01, watching the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO ORCL;
Lab 8.10 – Data Guard
Shut down machines nerv01 and nerv02.
Run a FAILOVER to machine nerv11.
$ dgmgrl SYS/Nerv2014@DR
DGMGRL> FAILOVER TO DR;
DGMGRL> SHOW CONFIGURATION;
Power machines nerv01 and nerv02 back on, and on machine nerv11 run the REINSTATE.
DGMGRL> REINSTATE DATABASE ORCL;
On machine nerv11, run the SWITCHOVER.
DGMGRL> SWITCHOVER TO ORCL;
Lab 8.11 – Data Guard
Lab 9 – Fast-Start Failover
Hands On!
Lights out administration
On machine nerv01, configure Fast-Start Failover.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit=600;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverAutoReinstate = TRUE;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = TRUE;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY FastStartFailoverTarget=DR;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;
Lab 9.1: Fast-Start Failover
On machine nerv15, start the observer.
$ ssh dba01@nerv15
$ mkdir /home/dba01/dba01
$ cd /home/dba01/dba01
$ dgmgrl -logfile /home/oracle/dba01/dba01.log SYS/Nerv2014@DR_RAC01
DGMGRL> START OBSERVER;
Shut down machines nerv01 and nerv02, and wait for the FAILOVER.
Power machines nerv01 and nerv02 back on.
Wait for the REINSTATE.
Run the SWITCHOVER.
Lab 9.2: Fast-Start Failover
On machine nerv01, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10
On machine nerv11, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10
On machine nerv01, start the two Services.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLTP
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP
On machine nerv01, generate some Archived Redo Logs and wait for them to replicate to the DR.
On machine nerv01, stop the OLAP Service.
$GRID_HOME/bin/srvctl stop service -d ORCL -s OLAP
On machine nerv11, start the OLAP Service.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP
Lab 9.3: Fast-Start Failover
Lab 9.4: Fast-Start Failover
On machine nerv15, add these two entries to tnsnames.ora, and test the connection again after a new Failover.
OLTP_RAC01 =
  (DESCRIPTION =
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv11)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OLTP))
  )
OLAP_RAC01 =
  (DESCRIPTION =
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv11)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OLAP))
  )
Lab 10 – ACFS
Hands On!
• Mirroring
• Striping
• Replication
• Snapshots
• High Availability
ACFS Advantages
• Recent product (11gR2)
• Complex installation
• Kernel dependency
• Dependency on Grid Infrastructure components
• Not yet supported by UEK R3
Bug ID 16318126
Oracle ASM Cluster File System (ACFS) is currently not supported for use with UEK R3.
http://docs.oracle.com/cd/E37670_01/E51472/E51472.pdf
ACFS Disadvantages
On machine nerv09, create a directory.
# mkdir /shared_ogg
On machine nerv09, add to the /etc/exports file:
/shared_ogg *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
On machine nerv09, restart the NFS server:
# service nfs restart
On machines nerv01 and nerv02, add the line below to /etc/fstab.
nerv09:/shared_ogg /u01/shared_ogg nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
On machine nerv01, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
# mkdir /u01/shared_ogg/rac01
# chown -R oracle:oinstall /u01/shared_ogg/rac01
On machine nerv02, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
Lab 10.1: NFS
Lab 11 – Unidirectional GoldenGate
Hands On!
Lab 11.1: Unidirectional GoldenGate
On machine nerv11, create a new database.
$ export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0.4/db_1
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName BI -sid BI \
  -sysPassword Nerv2014 -systemPassword Nerv2014 \
  -storageType ASM -asmsnmpPassword Nerv2014 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv11 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 20 -sampleSchema true -emConfiguration NONE \
  -continueOnNonFatalErrors false
Na máquina nerv11, coloque o banco BI em modo ARCHIVELOG.export ORACLE_SID=BISQL> SHUTDOWN IMMEDIATE;SQL> STARTUP MOUNT;SQL> ALTER DATABASE ARCHIVELOG;SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;SQL> ALTER DATABASE FORCE LOGGING;SQL> ALTER DATABASE OPEN;
Lab 11.2: Unidirectional Golden Gate

On machine nerv01, install Golden Gate.
$ cd /home/oracle
$ unzip 121200_fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/fbo_ggs_Linux_x64_shiphome/Disk1/
$ ./runInstaller -silent -responseFile /home/oracle/ogg_nerv01.rsp

On machine nerv11, install Golden Gate.
$ cd /home/oracle
$ unzip 121200_fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/fbo_ggs_Linux_x64_shiphome/Disk1/
$ ./runInstaller -silent -responseFile /home/oracle/ogg_nerv11.rsp

On machine nerv01, check that the MANAGER is running.
$ cd /u01/shared_ogg/rac01
$ ./ggsci
GGSCI> info all

On machine nerv11, check that the MANAGER is running.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ ./ggsci
GGSCI> info all
Lab 11.3: Unidirectional Golden Gate

On machine nerv01, enable the Golden Gate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
SQL> ALTER SYSTEM SET RECYCLEBIN=OFF SCOPE=SPFILE;
$ $GRID_HOME/bin/srvctl stop database -d ORCL
$ $GRID_HOME/bin/srvctl start database -d ORCL

On machine nerv01, create the Golden Gate user.
$ cd /u01/shared_ogg/rac01/
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2014 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql OGG <enter>
SQL> @ddl_setup.sql OGG <enter>
SQL> @role_setup.sql OGG <enter>
SQL> @ddl_enable.sql
Lab 11.4: Unidirectional Golden Gate

On machine nerv11, enable the Golden Gate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
SQL> ALTER SYSTEM SET RECYCLEBIN=OFF SCOPE=SPFILE;
$ $GRID_HOME/bin/srvctl stop database -d BI
$ $GRID_HOME/bin/srvctl start database -d BI

On machine nerv11, create the Golden Gate user.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ export ORACLE_SID=BI
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2014 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql OGG <enter>
SQL> @ddl_setup.sql OGG <enter>
SQL> @role_setup.sql OGG <enter>
SQL> @ddl_enable.sql
Lab 11.5: Unidirectional Golden Gate

On machine nerv01, add the EXTRACT process.
GGSCI> add extract ext1, tranlog, THREADS 2, begin now
GGSCI> add exttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt, extract ext1

On machine nerv01, edit the EXTRACT process parameter file.
GGSCI> edit params ext1
extract ext1
userid OGG@ORCL, password Nerv2014
rmthost nerv11, mgrport 7809
rmttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2014
ddl include mapped objname SCOTT.*;
table SCOTT.*;
Lab 11.6: Unidirectional Golden Gate

On machine nerv11, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv11, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2014 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv11, add the REPLICAT process.
GGSCI> add replicat rep1, exttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv11, edit the REPLICAT process parameter file.
GGSCI> edit params rep1
replicat rep1
ASSUMETARGETDEFS
userid OGG@BI, password Nerv2014
discardfile /u01/app/oracle/product/11.2.0.4/ogg/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;
Lab 11.7: Unidirectional Golden Gate

On machines nerv01, nerv02, and nerv11, add the ASM entry to the ORACLE_HOME tnsnames.ora.
ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = +ASM)
    )
  )

On machines nerv01 and nerv11, unlock the SCOTT user.
SQL> ALTER USER SCOTT IDENTIFIED BY TIGER ACCOUNT UNLOCK;
Lab 11.8: Unidirectional Golden Gate

On machine nerv01, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext1
GGSCI> info all

On machine nerv11, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep1
GGSCI> info all

On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/11.2.0.4/ogg/ggserr.log

Test replication of the SCOTT schema's data between machines nerv01 and nerv11.

Test replication of the SCOTT schema's data between machines nerv02 and nerv11.
Lab 12 – Bidirectional Golden Gate
Hands On !
Lab 12.1: Bidirectional Golden Gate

On machine nerv11, check that the MANAGER and the REPLICAT are running.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ ./ggsci
GGSCI> info all

On machine nerv11, add the EXTRACT process.
GGSCI> add extract ext2, tranlog, THREADS 1, begin now
GGSCI> add exttrail /u01/shared_ogg/rac01/dirdat/lt, extract ext2

On machine nerv11, edit the EXTRACT process parameter file.
GGSCI> edit params ext2
extract ext2
userid OGG@BI, password Nerv2014
rmthost nerv01-vip, mgrport 7809
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2014
rmttrail /u01/shared_ogg/rac01/dirdat/lt
ddl include mapped objname SCOTT.*;
table SCOTT.*;
Lab 12.2: Bidirectional Golden Gate

On machine nerv01, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv01, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2014 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv01, add the REPLICAT process.
GGSCI> add replicat rep2, exttrail /u01/shared_ogg/rac01/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv01, edit the REPLICAT process parameter file.
GGSCI> edit params rep2
replicat rep2
ASSUMETARGETDEFS
userid OGG@ORCL, password Nerv2014
discardfile /u01/shared_ogg/rac01/dircrd/rep2_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;
Lab 12.3: Bidirectional Golden Gate

On machine nerv11, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext2
GGSCI> info all

On machine nerv01, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep2
GGSCI> info all

On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/11.2.0.4/ogg/ggserr.log

Test replication of the SCOTT schema's data between machines nerv01 and nerv11.

Test replication of the SCOTT schema's data between machines nerv02 and nerv11.

Test replication of the SCOTT schema's data between machines nerv11 and nerv01.
Lab 13 – Golden Gate High Availability
Hands On !
Lab 13.1: Golden Gate HA
On machine nerv01, create a VIP and a Resource for Golden Gate.
# /u01/app/11.2.0.4/grid/bin/appvipcfg create -network=1 -ip=192.168.0.141 -vipname=rac01-ogg-vip -user=root
# /u01/app/11.2.0.4/grid/bin/crsctl start resource rac01-ogg-vip -n nerv01
# vi /u01/shared_ogg/rac01/ogg_action.sh
# chmod +x /u01/shared_ogg/rac01/ogg_action.sh
# chown oracle:oinstall /u01/shared_ogg/rac01/ogg_action.sh
# /u01/app/11.2.0.4/grid/bin/crsctl add resource ogg -type cluster_resource -attr "ACTION_SCRIPT=/u01/shared_ogg/rac01/ogg_action.sh, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(rac01-ogg-vip,ora.orcl.db) pullup(rac01-ogg-vip)', STOP_DEPENDENCIES='hard(rac01-ogg-vip)'"
# /u01/app/11.2.0.4/grid/bin/crsctl setperm resource rac01-ogg-vip -u user:oracle:r-x
# /u01/app/11.2.0.4/grid/bin/crsctl setperm resource ogg -o oracle
Lab 13.2: Golden Gate HA
On machine nerv01, check and start the Golden Gate Resource.
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/crsctl stop res ogg
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/crsctl start res ogg

Restart machine nerv01, and check that Golden Gate is started on machine nerv02.
Lab 14 – RAC Extended
Hands On !
Lab 14.0: RAC Extended
On machine nerv01, disable replication via Data Guard.
DGMGRL> DISABLE FAST_START FAILOVER;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
DGMGRL> REMOVE CONFIGURATION;
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;

On machine nerv01, disable replication via Golden Gate.
# /u01/app/11.2.0.4/grid/bin/crsctl stop resource ogg
# /u01/app/11.2.0.4/grid/bin/crsctl delete resource ogg
# /u01/app/11.2.0.4/grid/bin/crsctl stop resource rac01-ogg-vip
# /u01/app/11.2.0.4/grid/bin/appvipcfg delete -vipname=rac01-ogg-vip
Lab 14.1: RAC Extended
On machine nerv11, run Labs 1 and 2 again.

On machine nerv10, wipe the contents of the iSCSI disks.
# dd if=/dev/zero of=/dev/sda5 bs=512 count=10000

On machine nerv09, change the iSCSI server to allow machine nerv11 access to the disks.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
    initiator-address 192.168.0.121
</target>
...
# service tgtd restart

Also on machine nerv10, change the iSCSI server to allow machines nerv01 and nerv02 access to the disks.
Lab 14.2: RAC Extended
On machines nerv01, nerv02, and nerv11, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-01 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-02 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-03 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-04 -p 192.168.0.202 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-01 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-02 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-03 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-04 -p 192.168.0.201 -l
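The eight per-target logins above differ only in the target suffix and portal. A sketch of generating them with a loop instead of typing each one; the `login_targets` helper is hypothetical, and `DRY_RUN=1` only prints the commands instead of running iscsiadm:

```shell
# Sketch: generate the repetitive iscsiadm login commands with a loop.
# DRY_RUN=1 prints each command; set DRY_RUN=0 on a real node (as root).
DRY_RUN=1
login_targets() {
  prefix=$1   # e.g. asm11
  portal=$2   # e.g. 192.168.0.202
  for n in 01 02 03 04; do
    cmd="iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.${prefix}-${n} -p ${portal} -l"
    if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd; fi
  done
}
login_targets asm11 192.168.0.202
login_targets asm01 192.168.0.201
```

This keeps the target naming in one place, which matters again in Lab 15 when the `-05` targets are added.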
On machines nerv01, nerv02, and nerv11, add the new disks to the /etc/iscsi/initiatorname.iscsi file.
...
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-04
Lab 14.3: RAC Extended
On machines nerv01, nerv02, and nerv11, check that the disks were configured locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdf (then sdg, sdh, sdi)
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
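The interactive keystrokes above (n, p, 1, two defaults, w) can also be fed to fdisk non-interactively. A sketch under the assumption that this works with the fdisk build on these nodes (verify on one disk first); the `partition_all` helper is hypothetical and only prints the commands here:

```shell
# Sketch: script the interactive fdisk session. On a real node, run the
# printed pipeline as root instead of echoing it.
keys=$(printf 'n\np\n1\n\n\nw\n')   # one fdisk answer per line
partition_all() {
  for disk in /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
    echo "printf '%s\\n' \"\$keys\" | fdisk $disk"
  done
}
partition_all
```

Scripting the keystrokes makes the four-disk partitioning repeatable and removes the risk of a mistyped answer on one of the disks.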
Lab 14.4: RAC Extended
On machines nerv02 and nerv11, detect the new disks.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde
# partprobe /dev/sdf
# partprobe /dev/sdg
# partprobe /dev/sdh
# partprobe /dev/sdi

On machine nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
oinstall <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status
On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK04 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdh1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdi1

On machines nerv02 and nerv11, scan for the newly created disks.
# /etc/init.d/oracleasm scandisks

On machines nerv01, nerv02, and nerv11, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07

On machines nerv01, nerv02, and nerv11, check that the disk devices are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
...
Lab 14.5: RAC Extended
Lab 14.6: RAC Extended
On machine nerv11, add the THREE lines below to the /etc/fstab file.
nerv09:/shared_config01 /u01/shared_config01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config02 /u01/shared_config02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config03 /u01/shared_config03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0

On machine nerv11, run:
# mount /u01/shared_config01
# mount /u01/shared_config02
# mount /u01/shared_config03
On machines nerv01, nerv02, and nerv11, remove the /home/oracle/.ssh directory.
$ rm -rf .ssh

On machine nerv01, set up passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ ssh oracle@nerv11 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
Lab 14.7: RAC Extended
On machine nerv02, set up passwordless SSH.
[oracle@nerv02 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
Lab 14.8: RAC Extended
On machine nerv11, set up passwordless SSH.
[oracle@nerv11 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
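Each of the three nodes repeats the same key-distribution pattern. A minimal sketch of a loop each node could run after its `ssh-keygen`; the `distribute_key` helper is hypothetical, and with `DRY_RUN=1` it only prints the commands instead of invoking ssh:

```shell
# Sketch: the per-host key copies are identical except for the host name,
# so generate them with a loop. DRY_RUN=1 prints; DRY_RUN=0 would execute.
DRY_RUN=1
distribute_key() {
  for host in nerv01 nerv02 nerv11; do
    cmd="cat .ssh/id_rsa.pub | ssh oracle@${host} 'cat >> .ssh/authorized_keys'"
    if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else eval "$cmd"; fi
  done
}
distribute_key
```

Running the real commands still prompts for the remote password once per host; only after all three labs complete is SSH fully passwordless in every direction, which addNode.sh in the next lab requires.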
Lab 14.9: RAC Extended
On machine nerv01, run the Grid installation onto machine nerv11.
$ cd $GRID_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={nerv11}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={nerv11-vip}"

On machine nerv11, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh

On machine nerv01, run the Oracle installation onto machine nerv11.
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={nerv11}"

On machine nerv11, as the root user, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh

On machine nerv01, add the instance.
$ $GRID_HOME/bin/srvctl add instance -d ORCL -i ORCL3 -n nerv11
Lab 14.10: RAC Extended
On machine nerv01, finish adding the node.
SQL> ALTER SYSTEM SET INSTANCE_NUMBER=3 SID='ORCL3' SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;
SQL> ALTER SYSTEM SET THREAD=3 SID='ORCL3' SCOPE=SPFILE;
SQL> CREATE UNDO TABLESPACE UNDOTBS3;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS3 SID='ORCL3' SCOPE=SPFILE;
$ $GRID_HOME/bin/srvctl start instance -d ORCL -i ORCL3
Lab 14.11: RAC Extended
On machine nerv01, prepare for creating the new FAILGROUPs.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 11;
SQL> ALTER DISKGROUP DATA REBALANCE POWER 11;
SQL> ALTER DISKGROUP FRA REBALANCE POWER 11;
Lab 14.12: RAC Extended
On machine nerv01, create the new FAILGROUPs for the DATA DISKGROUP.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK04' NAME DATA_0004;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK05' NAME DATA_0005;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0000;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0001;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK00' NAME DATA_0000;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK01' NAME DATA_0001;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0005;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK05' NAME DATA_0005;

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;
Lab 14.13: RAC Extended
On machine nerv01, create the new FAILGROUPs for the FRA DISKGROUP.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK06' NAME DATA_0006;
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK07' NAME DATA_0007;
SQL> ALTER DISKGROUP FRA DROP DISK DATA_0002;
SQL> ALTER DISKGROUP FRA DROP DISK DATA_0003;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK02' NAME DATA_0002;
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK03' NAME DATA_0003;
SQL> ALTER DISKGROUP FRA DROP DISK DATA_0007;
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK07' NAME DATA_0007;

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;
Lab 14.14: RAC Extended
On machine nerv01, select the preferred FAILGROUPs for reads.
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM1';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM2';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPB' SCOPE=BOTH SID='+ASM3';
Lab 14.15: RAC Extended
Lab 15 – RAC Extended Quorum
Hands On !
On machine nerv15, create a directory.
# mkdir /shared_config

On machine nerv15, add to the /etc/exports file:
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machine nerv15, start the NFS server:
# service nfs start
# chkconfig nfs on
Lab 15.1: RAC Extended Quorum
Lab 15.2: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, add the line below to the /etc/fstab file.
nerv15:/shared_config /u01/shared_config15 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0

On machines nerv01, nerv02, and nerv11, run:
# mkdir /u01/shared_config15
# mount /u01/shared_config15

On machine nerv01, run:
# mkdir /u01/shared_config15/rac01
# chown -R oracle:oinstall /u01/shared_config15/rac01
Lab 15.3: RAC Extended Quorum
On machines nerv09 and nerv10, create a 1 GB partition, without formatting it.

On machines nerv09 and nerv10, add the disk to the iSCSI server.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-05>
    backing-store /dev/sda21
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
    initiator-address 192.168.0.121
...
# service tgtd restart
Lab 15.4: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-05 -p 192.168.0.201 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-05 -p 192.168.0.202 -l

On machines nerv01, nerv02, and nerv11, add the new disks to the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-05
Lab 15.5: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, check that the disks were configured locally.
# fdisk -l

On machine nerv01, partition the new disks.
# fdisk /dev/sdj
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>

# fdisk /dev/sdk
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
On machines nerv02 and nerv11, detect the new disks.
# partprobe /dev/sdj
# partprobe /dev/sdk

On machine nerv01, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK09 /dev/sdj1
# /etc/init.d/oracleasm createdisk DISK10 /dev/sdk1

On machines nerv02 and nerv11, scan for the newly created disks.
# /etc/init.d/oracleasm scandisks
Lab 15.6: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK09
# /etc/init.d/oracleasm querydisk -v -p DISK10

On machines nerv01, nerv02, and nerv11, check that the disk devices are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
brw-rw----. 1 oracle oinstall 8, 49 Mar 3 08:40 DISK02
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK03
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK05
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK06
brw-rw----. 1 oracle oinstall 8, 49 Mar 3 08:40 DISK07
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK08
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK09
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK10
Lab 15.7: RAC Extended Quorum
Lab 15.8: RAC Extended Quorum
On machine nerv01, create the DISKGROUP for the Voting Disk.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> CREATE DISKGROUP CONFIG NORMAL REDUNDANCY
  FAILGROUP FG1 DISK '/dev/oracleasm/disks/DISK09' NAME DATA0009
  FAILGROUP FG2 DISK '/dev/oracleasm/disks/DISK10' NAME DATA0010
  ATTRIBUTE 'compatible.asm' = '11.2.0.0';
On machine nerv01, create a file for the Voting Disk.
# dd if=/dev/zero of=/u01/shared_config15/rac01/asm01 bs=10M count=58
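The dd above allocates exactly bs × count bytes: 10 MiB × 58 = 580 MiB (608174080 bytes). A minimal sketch of the same sizing arithmetic at a throwaway scale (1 KiB × 58 = 59392 bytes), so the calculation can be checked without touching the shared NFS export:

```shell
# Demonstrate dd's bs * count sizing on a temporary file.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=58 2>/dev/null
size=$(wc -c < "$f")       # expected: 1024 * 58 = 59392
echo "$size bytes"
rm -f "$f"
```

The quorum file only needs to be large enough for a Voting Disk; the 580 MiB here comfortably exceeds that minimum.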
On machine nerv01, change the ownership of the file.
# chown -R oracle:oinstall /u01/shared_config15/rac01/

On machine nerv01, change the disk discovery path.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET asm_diskstring='/dev/oracleasm/disks/*', '/u01/shared_config15/rac01/*' SCOPE=BOTH SID='*';
Lab 15.9: RAC Extended Quorum
On machine nerv01, add the QUORUM FAILGROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER DISKGROUP CONFIG ADD QUORUM FAILGROUP FG3 DISK '/u01/shared_config15/rac01/asm01';

On machine nerv01, enable the new DISKGROUP on the other machines.
$ $GRID_HOME/bin/srvctl start diskgroup -g CONFIG -n nerv02
$ $GRID_HOME/bin/srvctl enable diskgroup -g CONFIG -n nerv02
$ $GRID_HOME/bin/srvctl start diskgroup -g CONFIG -n nerv11
$ $GRID_HOME/bin/srvctl enable diskgroup -g CONFIG -n nerv11
Lab 15.10: RAC Extended Quorum

On machine nerv01, move the OCR to the new DISKGROUP.
# /u01/app/11.2.0.4/grid/bin/ocrcheck
# /u01/app/11.2.0.4/grid/bin/ocrconfig -delete /u01/shared_config03/rac01/ocr3
# /u01/app/11.2.0.4/grid/bin/ocrconfig -delete /u01/shared_config02/rac01/ocr2
# /u01/app/11.2.0.4/grid/bin/ocrconfig -add +CONFIG
# /u01/app/11.2.0.4/grid/bin/ocrcheck
# /u01/app/11.2.0.4/grid/bin/ocrconfig -delete /u01/shared_config01/rac01/ocr1
# /u01/app/11.2.0.4/grid/bin/ocrcheck

On machine nerv01, move the Voting Disk to the new DISKGROUP.
# /u01/app/11.2.0.4/grid/bin/crsctl query css votedisk
# /u01/app/11.2.0.4/grid/bin/crsctl replace votedisk +CONFIG
# /u01/app/11.2.0.4/grid/bin/crsctl query css votedisk
Lab 15.11: RAC Extended Quorum
On machine nerv01, configure the allowed DOWNTIME window.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '11.2';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '11.2';
SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.asm' = '11.2';

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '11.2';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '11.2';
SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.rdbms' = '11.2';

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'disk_repair_time' = '30m';

Shut down the Production Storage, and test that the Production and DR sites keep working.
SQL> SELECT * FROM V$ASM_OPERATION;