I. Download and Install VirtualBox
1. Go to https://www.virtualbox.org/wiki/Downloads
2. Click on the VirtualBox 5.1.26 for Windows hosts x86/amd64 link to download it into the C:\DBA\Introductions folder
3. Double-click on the file (VirtualBox-5.1.26-117224-Win.exe) you just downloaded. The Welcome to… screen shown below will appear.
4. Click the Next button. The Custom Setup screen shown below will appear
5. Click the Next button to go to the Custom Setup screen shown below
6. Click on the Next button to go to the Warning: Network Interfaces screen shown below
7. Click Yes to proceed to the Ready to Install screen shown below
8. Click Install to start the installation of Virtual Box. The screen shown below will appear
9. If you have an Internet Security software program - McAfee, Norton, etc. - it might block the installation. If you see a message asking whether you want the installation to proceed, click Yes to allow the installation to continue.
10. Click the Finish button shown below once the installation completes successfully
11. The Welcome to VirtualBox! screen shown below will appear
Hurray!!!! You have just successfully installed VirtualBox!
II. Create a Virtual Machine in VirtualBox
1. Open VirtualBox and click on the New button on the top left corner of the screen shown below
2. Type the name – Oracle Linux – of the new VM. The Type and Version of the operating system are automatically selected as shown below
3. Click Next and set the Memory Size as shown below. Keep in mind that you can always adjust this setting after creating the virtual machine (VM)
4. Click Next and check the Create a virtual hard drive now radio button shown below.
5. Click the Create button and check the VDI (Virtual Disk Image) radio button shown below.
6. Click Next and check the Dynamically allocated radio button as shown below
7. Click Next and provide a Name and specify a Size (180 GB) for the disk. Because the allocation is dynamic, I have no qualms about increasing the size of the disk a little; the size is something you cannot adjust later on.
8. Click Create and you will see the definition of the Virtual Machine (VM) below.
Hurray!!!! You have just created a Virtual Machine/Host/Server.
III. Download and Install WinSCP
WinSCP is an open source free SFTP client, FTP client, WebDAV client and SCP client for Windows. Its main function is file transfer between a local and a remote computer. Beyond this, WinSCP offers scripting and basic file manager functionality.
12. Go to http://winscp.net/eng/download.php
13. Click on the blue [Download WinSCP] link as shown below
14. Click on the Installer link shown below
15. Download the winscp5104RC-setup.exe executable into any folder, preferably into C:\DBA\Introductions
16. Once the download completes, go to the folder containing the setup file you just downloaded
17. Double-click on the file to start the installation
18. Click Yes when the message “Do you want to allow this app to make changes to your PC?” appears. The screen below will appear
19. Click OK to accept the default language – English. The Welcome to the WinSCP Setup Wizard screen shown below will appear
20. Click the Next button. The License Agreement screen below will appear
21. Click the Accept button. The Setup type screen below will appear
22. Click the Next button. The Ready to Install screen shown below will appear
23. Click Install to start the installation of WinSCP. The screen shown below will appear once the installation completes successfully.
24. Click Finish to complete the installation. The Login screen below will appear
25. Click Close to exit the screen.
Hurray!!! You have successfully installed WinSCP!
IV. Download and Install TeamViewer
TeamViewer is a versatile software tool used to remotely access and control a computer over the internet. It is also used to manage online audio and video meetings.
Please follow the instructions below to download and install TeamViewer
26. Create a folder on your C drive called DBA
27. Go to the DBA folder and create another folder called Introductions
28. Go to https://www.teamviewer.com/en/index.aspx
29. Click on the green Download Now button to start the download. You can either choose to download the software into C:\DBA\Introductions or allow it to go to the default Downloads folder.
30. Once the download is complete, go to the location it was downloaded to. In my case, the software was downloaded into the Downloads folder as shown below
31. Double-click on the blue TeamViewer_Setup_en.exe icon shown above to start the installation. The screen below will appear
32. Check the Basic Installation and Personal/Non-commercial use options as shown above
33. Click the Accept Finish button to start the installation.
34. Click Yes if you are prompted by “Do you want to allow this app to make changes to your PC?”
35. The installation will proceed quickly as shown by the screen below
36. Once the installation completes, the screen below will appear
That’s all there is to it! You have successfully installed TeamViewer!
ORACLE ENTERPRISE LINUX SERVER INSTALLATION
1. Register/Login to edelivery.oracle.com
2. Search for Oracle Linux as seen below:
3. Select/click on Oracle Linux 7.0.0.0.0 to add it to the Cart as shown below:
4. Click the Select Software link in 3 (see the yellow mark on the right of the screen above) to get the screen below:
5. Scroll to the right to see the size of the file and click Continue at the bottom right
6. Accept the License Agreement and click Continue
7. Click Download
***NOTE***: For first-time installations, the Akamai download app will need to be run from the edelivery.oracle.com site
8. When prompted by the “Browse For Folder” box, navigate to the C:\DBA\Introductions folder as below:
9. Click Ok when done
10. After you click Ok in 9 above, Akamai will automatically start the “1 of 4” download. DON’T close the Akamai window until the download completes.
11. Make sure the screen below comes up, to validate a successful download via Akamai
12. Navigate to your C:\DBA\Introductions folder to view the downloaded files
14. To kick off the OEL7 installation, go to your highlighted VM
15. Click Settings above > Click Storage > Right-click “Controller-IDE” > Click Ok as below
16. Click “Add Optical Drive”
17. Select “Choose disk” > Navigate to C:\DBA\Introductions
18. Select the V46135-01.iso file
19. Click Open
20. The screen should come up as below:
21. Click Ok
22. Click Start (on top of the screen) to get the screen below:
Click SOFTWARE SELECTION (select boxes as shown to the right below) >
Click SYSTEM above
Click "DONE" at the top LEFT above
Click NETWORK & HOSTNAME > Click the OFF button to turn Networking ON
Rename the Hostname to yourname.amag.com > click Configure
For Method, select MANUAL > input your Windows IP from Start > cmd > ipconfig (Wireless/LAN IP)
Click Begin Installation
Enter the root password and create the user oracle
The server REBOOTs in about 15 mins on a 32GB machine. It might take up to 45 mins on a machine with less than 32GB of RAM
Click License and Accept>Done
Click Finish Configuration
Click Forward
Select No, I prefer to register at a later time
Login with your password created:
Login to a terminal and test a ping to www.yahoo.com to see if there is an internet connection
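A minimal sketch of that check from the VM's terminal (the -c 3 flag just limits the ping to three attempts; press Ctrl+C instead if you omit it):
# ping -c 3 www.yahoo.com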
Output showing replies means there is an internet connection. Good!!!
HURRAY!!!...Successfully installed OEL7. Relax, take a break, go for a Walk!!!
Instructions for installing Oracle 11.2.0.4.
(Continued from OEL7 successful install document above)
The OS configuration is executed as root.
Login to your server as root & add the hostname and IP address to the /etc/hosts file by entering the command vi /etc/hosts as shown below:
1. Add the name of the Linux server to the /etc/hosts file as shown below. The file must contain a fully qualified name for the server.
Login as root and do: vi /etc/hosts
<IP-address> <fully-qualified-machine-name> <machine-name>
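For example, a single /etc/hosts entry might look like the line below (the IP address and hostname are placeholders; use your own values, e.g. the Windows IP from ipconfig and the yourname.amag.com hostname set during the OEL7 install):
192.168.1.50   yourname.amag.com   yourname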
2. Logon to your Linux server as root
3. As the root user, do cd /etc/yum.repos.d to go to that directory as shown below
4. Check that the file – public-yum-ol7.repo – which configures the repository locations is in the directory by listing the directory contents (ls). Proceed to the next step once you confirm the file is in the directory
The screen below also appears
And this one, too
6. The yum installation logs messages about kernel changes in the file /var/log/oracle-rdbms-server-11gR2-preinstall/results/orakernel.log and it makes backups of current system settings in the directory /var/log/oracle-rdbms-server-11gR2-preinstall/backup
7. Run the command below if you plan to use the "oracle-validated" package to perform all your prerequisite setup
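A minimal sketch of that command on OEL7, assuming you use the 11gR2 preinstall package whose log directory is referenced in step 6 above (run as root):
# yum install -y oracle-rdbms-server-11gR2-preinstall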
8. All necessary prerequisites will be performed automatically. It is probably worth doing a full update as well, but this is not strictly necessary. Run the yum update command as shown below [For now, skip the “yum update” command and proceed to 9 below]
9. Run the commands below to add the following required groups
10. Run the command below to create and add the oracle user to the various groups
11. Run the command below to change the password for the oracle user
12. Do vi /etc/pam.d/login to add the following highlighted lines if they do not already exist.
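Since the highlighted screenshot is not reproduced here, the line typically added to /etc/pam.d/login for Oracle prerequisite setups is assumed to be:
session    required     pam_limits.so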
13. Add the following kernel parameters to the /etc/sysctl.conf file. You will need to do vi /etc/sysctl.conf to start.
kernel.shmmni = 4096
kernel.shmmax = 4398046511104
kernel.shmall = 1073741824
kernel.sem = 250 32000 100 128
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
14. Apply the kernel parameters by typing sysctl -p as shown below:
15. Do vi /etc/security/limits.conf to add the following lines to set shell limits for user oracle
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 50000000
oracle hard memlock 50000000
The following two installation files were downloaded from Oracle’s website (www.support.oracle.com). Files should now reside in your C:\DBA\Introductions folder
i. p13390677_112040_Linux-x86-64_1of7.zip
ii. p13390677_112040_Linux-x86-64_2of7.zip
1. As root user, create groups and users that will be associated with the database as below:
root@www.kida1.com:/home/oracle$ groupadd -g 54321 oinstall
root@www.kida1.com:/home/oracle$ groupadd -g 54322 dba
root@www.kida1.com:/home/oracle$ groupadd -g 54323 oper
root@www.kida1.com:/home/oracle$ groupadd -g 54324 backupdba
root@www.kida1.com:/home/oracle$ groupadd -g 54325 dgdba
root@www.kida1.com:/home/oracle$ groupadd -g 54326 kmdba
root@www.kida1.com:/home/oracle$ groupadd -g 54327 asmdba
root@www.kida1.com:/home/oracle$ groupadd -g 54328 asmoper
root@www.kida1.com:/home/oracle$ groupadd -g 54329 asmadmin
root@www.kida1.com:/home/oracle$ /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba oracle
root@www.kida1.com:/home/oracle$
[root@www ~]# useradd grid
[root@www ~]# usermod -u 54322 -g oinstall -G dba grid
[root@www ~]# chown -R grid:oinstall /u01/app/11.2.0/
[root@www ~]# chmod -R 775 /u01
[root@www ~]# chown oracle:oinstall /u01/app/oracle
[root@www ~]# passwd grid
Changing password for user grid.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully
2. As the root user, create the directory structures that will be used as the base ($ORACLE_BASE) and home ($ORACLE_HOME) locations for the Oracle 11gR2 software binaries as shown below
mkdir -p /u01
mkdir -p /u01/app/oracle
mkdir -p /u01/app/11.2.0/grid
usermod -G dba,vboxsf oracle
3. Change the ownership and permissions of the directories you just created(above) as shown below
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
chown -R grid:oinstall /u01/app/11.2.0/
chmod 775 /u01/app/11.2.0/
4. Login as oracle and then update the ~/.bash_profile settings as below:
5. Add the entries below to the .bash_profile file for the oracle user
umask 022
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4
export AGENT_HOME=/u01/app/oracle/oem_agent/core/12.1.0.5.0
export ORACLE_SID=amadb
export ORACLE_DB=amadb
export ORACLE_UNQNAME=amadb
export ORACLE_HOME=$GRID_HOME
export PATH=$HOME:/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$GRID_HOME/bin:$ORACLE_BASE/scripts:$AGENT_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export TMOUT=0
alias oem='cd /u01/app/oracle/oem_agent/core/12.1.0.5.0/bin'
alias goasm='. $HOME/.goasm'
alias godb=' $HOME/.godb'
alias pfile='cd $ORACLE_HOME/dbs'
alias home='cd /home/oracle/.*'
alias sql='sqlplus "/ as sysdba"'
alias cdoh='cd $ORACLE_HOME'
alias cdob='cd $ORACLE_BASE'
alias tns='cd $ORACLE_HOME/network/admin'
alias env='env | grep oracle'
export PS1='\u@\H:\w$'
MAIL=/usr/mail/${LOGNAME:?}
See Sample ~/.bash_profile screenshot below
6. vi the /etc/selinux/config file as root and set the SELINUX flag to permissive as shown below:
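A minimal sketch of that change (edit the file as root; run setenforce 0 so it also takes effect in the current session):
# vi /etc/selinux/config      --> set SELINUX=permissive
# setenforce 0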
7. If you have the Linux firewall enabled, you will need to disable or configure it. The following is an example of disabling the firewall.
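For example, on OEL7 the firewalld service can be stopped and disabled as root (a sketch; configure rules instead of disabling it if your environment requires a firewall):
# systemctl stop firewalld
# systemctl disable firewalld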
8. In Oracle Enterprise Linux 7, /tmp data is stored on tmpfs, which consumes memory and is too small. Run the systemctl mask tmp.mount command to revert it back to disk storage and reboot the machine as shown below
9. Reboot the server as shown below so that the changes you made above will take effect
1. Create the directory /home/software/11gR2 as the root user and give the oracle user ownership/permissions
#mkdir -p /home/software/11gR2
#chown oracle:oinstall /home/software/11gR2
#chmod 775 /home/software/11gR2
2. Transfer – you will need to use WinSCP – the two zipped files to the /home/software/11gR2 directory on your Linux server.
3. Unzip the first zip file as shown below
4. Unzip the second zipped file as shown below
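A minimal sketch of steps 3 and 4, assuming the two files were transferred with the names listed earlier:
$ cd /home/software/11gR2
$ unzip p13390677_112040_Linux-x86-64_1of7.zip
$ unzip p13390677_112040_Linux-x86-64_2of7.zip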
5. Once the extraction is complete, type clear and then ls -ltra to see the contents of the directory as shown below
6. Go to the database/stage/cvu/cv/admin directory as shown below
7. Make a copy of the cvu_config file called cvu_config_old as shown below
8. Edit the cvu_config file and change the following line:
CV_ASSUME_DISTID=OEL4 to CV_ASSUME_DISTID=OEL6
9. Save(:wq!) the file
10. Go (cd) to the database directory and kick off the installation script (runInstaller) as shown below
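A minimal sketch of steps 7-10, run as the oracle user (the paths assume the zip files were extracted under /home/software/11gR2):
$ cd /home/software/11gR2/database/stage/cvu/cv/admin
$ cp cvu_config cvu_config_old
$ vi cvu_config          (change CV_ASSUME_DISTID=OEL4 to CV_ASSUME_DISTID=OEL6, then :wq!)
$ cd /home/software/11gR2/database
$ ./runInstaller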
11. As root, do # vi /etc/ssh/sshd_config > shift-? X11Forwarding
12. If X11Forwarding=no => change it to yes
13. Add under X11Forwarding yes >AllowX11Forwarding yes
14. Save (:wq!) > as root, do # ssh -X oracle@IP > echo $DISPLAY
15. As root, run xclock (it should display a clock)
16. Run the commands below to install xclock
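A minimal sketch of the install command, assuming xclock comes from the xorg-x11-apps package on OEL7 (adjust the package name if your repository differs):
# yum install xorg-x11-apps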
17. The installation of xclock continues as shown below
18. Type y to accept and continue with the installation
19. Go to the location of the oracle software and kick off the installation as shown below
20. Uncheck the I wish to receive security updates via My Oracle Support option as shown below
21. The screen below will appear
22. Click Yes. The screen below will appear
23. Check the Skip software updates option
24. Click Next. The screen below will appear
25. Check the Create and configure a database option
26. Click Next. The screen below will appear
27. Check the Server Class option
28. Click Next. The screen below will appear
29. Check the Single instance database installation option
30. Click Next. The screen below will appear
31. Check the Typical Install option
32. Click Next. The screen below will appear
33. Most of the information shown above will be pre-populated by default.
34. Enter amadb as your Global database name
35. Enter a password you will remember in the Administrative password box. Keep in mind that you can always reuse passwords
36. Click Next. The screen below will appear
37. Leave the default selection. Basically, leave everything as is on the screen.
38. Click the Next button. The screen below will appear
39. Click the Install button. The installation will proceed as shown below
40. If you get the error message shown below, proceed to step #41.
41. vi /u01/app/oracle/product/11.2.0.4/sysman/lib/ins_emagent.mk as shown below
43. Edit the ins_emagent.mk file and make the following change:
Change
$(SYSMANBIN)emdctl:
    $(MK_EMAGENT_NMECTL)
to
$(SYSMANBIN)emdctl:
    $(MK_EMAGENT_NMECTL) -lnnz11
44. Click the Retry button. The screen below will appear
45. Once the installation completes, the following screen will appear
46. Click OK
47. Open another session as the root user
48. Go (cd) to each of the locations shown in the screenshot and run the referenced script as shown below
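A minimal sketch of the two scripts the installer typically prompts for at this point (the paths below assume the default inventory location and the ORACLE_HOME used earlier in this guide; confirm them against the pop-up window):
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/oracle/product/11.2.0.4/root.sh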
49. Go back to your oracle installation and click the OK button as shown below. The window below appears
50. Click Close. The screens below might or might not appear.
51. Connect to your database as shown below
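A minimal sketch of that connection test, run as the oracle user with the .bash_profile settings from earlier (ORACLE_SID=amadb) in effect:
$ sqlplus / as sysdba
SQL> select name, open_mode from v$database;
SQL> select instance_name, status from v$instance;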
CONGRATULATIONS ON A SUCCESSFUL INSTALLATION OF ORACLE DATABASE!
RELAX, SMILE and TAKE a WALK…!!!
**NOTE the link to your OEM web console** https://www.kida1.com:1158/em
STANDBY DATAGUARD Creation
Oracle Enterprise Linux 6.5
INTRODUCTION
Database Install
Database Login: How-To (. oraenv > lsnrctl start)
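A minimal sketch of that login flow from the oracle OS account (assuming the amadb database created above):
$ . oraenv               (enter amadb when prompted for ORACLE_SID)
$ lsnrctl start
$ sqlplus / as sysdba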
-Proceed to installation
*RAC is part of 11g install
*ORACLE DICTIONARY(tablespaces) for user tablespaces: dba_tablespaces
*ORACLE DICTIONARY(views) for user views: dba_views
*ORACLE DICTIONARY(instance) for database instance: V$instance
*ORACLE DICTIONARY(database) for database: V$database
*Displays the current date: Select TO_CHAR(Current_date,'dd-mon-yyyy hh24:mi:ss') from dual;
*Displays the current date: Select TO_CHAR(SYSDATE,'dd-mon-yyyy hh24:mi:ss') from dual;
*User table: dba_users (e.g. SYS, SYSTEM, etc)
*database table:dba_tables
*database indexes:dba_indexes
*database parameters: V$Parameter
database sessions: V$session
database processes: V$process
*database instance:V$instance
*database architecture/structure: V$database (select name from V$database). To know the SERVER NAME on which your database resides, do select HOST_NAME from V$instance
Note: The V$session, V$process, V$... information can also be viewed by:
*SHOW PARAMETER sessions
*Show parameter processes
*Show parameter instances
*Show parameter controlfile, show parameter init.ora,show parameter pfile,spfile
*show parameter datafile, show parameter redolog, show parameter archivedlog
%%%%%%%%%%%%%%
So the minimum environment variables must be set explicitly or in the oraenv.sh script or in your profile.
ORACLE_HOME=
ORACLE_SID=
export PATH=$PATH:$ORACLE_HOME/bin ---bin is where your sqlplus executable is.
Below is just a small example: suppose my instance name is amadb and the database name is also amadb; then we have to specify the following parameters in the pfile:
$ cd $ORACLE_HOME/dbs ----- default location for pfile/spfile
$ export ORACLE_SID=amadb
$ vi init$ORACLE_SID.ora --- opens the initamadb.ora file, since we set ORACLE_SID=amadb in the step above
instance_name=amadb
db_name=amadb
: wq!
I am not specifying other parameters, since you only needed clarification on the instance name and db name.
Now, you can connect to the db :
$ sqlplus / as sysdba
SQL> startup nomount ------- starts the instance in nomount state; you can check the instance parameters as well
SQL> show parameter instance_name
SQL> show parameter db_name
Also you can check the view :
SQL> select instance_name, status from v$instance;
The error message ORA-12162 "TNS:net service name is incorrectly specified" is very misleading. It suggests that there is a problem with the tnsnames.ora file contents, but in reality ORA-12162 results from improperly setting your ORACLE_SID value.
To fix this error, set your ORACLE_SID. On Windows:
set ORACLE_SID=amadb
In Linux, these commands set ORACLE_HOME and ORACLE_SID as follows:
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4; export ORACLE_HOME
ORACLE_SID=amadb; export ORACLE_SID
ESTABLISHED FACTS prior to using YOUR new DATABASE
***NOTE***
A database consists of objects, e.g. Tables, etc. Most of the data in a database is arranged in the form of columns and rows (just like in EXCEL)
*After installation, the database is empty. Adding the V$DATAFILES, V$CONTROLFILES, V$REDOLOGs, init.ora and archived logs makes the database come ALIVE
*A newly created user (user/pw, grant resource, tablespace temp, default tablespace) is just EMPTY. Adding objects (table, trigger, etc) to the user's tablespace makes the user become a SCHEMA.
Name of Database=AMADB/Kenxmxra
Note: Server/Node/OS(unix),machine,host,client are all Synonyms
*Arithmetic Operators (+,-,*,/)
*Column Alias (empno AS Employee_number)
*Concatenation: the symbol is || e.g. select 'name='||ename as employee_name from scott.emp;
*Literal
*If a row contains a Column with no data in it, then it is said to be NULL
*If NULL is part of another expression, the result will ALWAYS be NULL (e.g. NULL+2)
*Issues with NULL values in an expression can be solved by NVL, which puts a value where NULL appears. NVL can be used with DATE, character and number datatypes
-NVL takes 2 parameters (the column you're checking for NULL and the value to return if the first parameter is NULL) e.g. select ename, sal*12+NVL(comm,0) as total_remuneration from scott.emp;
*DISTINCT is used to prevent DUPLICATE rows from being selected in a query e.g.
select DISTINCT deptno from scott.emp;
*Ordering/Sorting Data e.g. order by 1 is used to sort rows in alphabetical order.
*Row Restriction (i.e. Where clause)
*like '%s%'; ename of exactly 4 characters: like '____'; (i.e. 4 underscore signs, no spaces)
*Not equal to (<>,!=), Not greater than (!>), is not Null, not in, not between, not like
*AT SQLPLUS>/ (is EXECUTE previous query command)
*AT SQLPLUS>ed (edit - opens running query for editing)
ORACLE_HOME
SQLPLUS is one of the many TOOLS in ORACLE_HOME (C:\....\product\...) used to connect REMOTEly to the DATABASE. TOAD is one of the tools used with ORACLE as well. To see commands in SQLPLUS, do SHOW ALL or HELP > TOPIC
LSNRCTL is needed when you have many DATABASES that you want to connect to remotely (e.g. DB1,DB2,DB3,DB4,DB5,DB6,DB7,DB8), e.g. @DB3…etc. (Note: LSNRCTL is not needed when connecting DIRECTLY (locally) to a database. It is only for REMOTE connections that LSNRCTL is needed)
From the Cmd Line (Windows/Unix): Set ORACLE_SID=AMADB (Windows) / export ORACLE_SID=AMADB (UNIX) means that you (the DBA) want the environment variables for the AMADB database to be set for you to use. If you have many databases, e.g. DB1,DB2,DB3,DB4,DB5,DB6,DB7,DB8, etc, then you will need to connect to each environment by calling its variables (i.e. Set ORACLE_SID=DB4 calls the DB4 variables for you to connect).
ORATAB (/etc/oratab): If there are many databases, then do . oraenv > select your SID
RMAN: At the Command Line, to get to the RMAN environment, do SET ORACLE_SID=Ken (Ken's env) > RMAN > Startup
Note: BACKUP this database before something goes wrong again.
*DUAL: This is a dummy table which spits out anything you type in (e.g. Select 'my name is KenChando' from DUAL;)
-Select Months_between('07-23-2022',SYSDATE) from Dual; => calculates 07-23-2022 minus the current date (system date on the PC), in months
-Select Next_day (Sysdate,'Sun') from dual;=>next Sunday's date from today.
-Select Add_Months(Sysdate,12) from dual;=> 12 is added to current system date/time from today.
-Select Last_day (Sysdate) from dual;=> displays last day of the month from the current date of your (pc/database/server).
-Select TO_CHAR (Sysdate) "Date of Today" from dual; => displays the current Sysdate as a character value in a column aliased "Date of Today".
-select MAX(Sal) As HIGHEST_SALARY from scott.emp;
-select MIN(Sal) As LOWEST_SALARY from scott.emp;
-select AVG(Sal) As Average_Monthly_SALARY from scott.emp;
-select SUM(Sal) As Total_Monthly_SALARY from scott.emp;
Modifying Data & the Database (Data Manipulation Language (DML)): INSERT, UPDATE, DELETE, Transaction Processing (i. Commit, ii. Rollback, iii. Savepoint)
Using Data Definition Language (DDL) Statements (Tables, Indexes, Synonyms, Privileges, Views, Sequences)
Row & Group Functions
Row functions: Character functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions001.htm
Single-Row Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions002.htm
Aggregate Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions003.htm
Analytic Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm
Object Reference Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions005.htm
Model Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions006.htm
OLAP Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions007.htm
Data Cartridge Functions: http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions008.htm
Querying from more than one table
JOINS (Equijoin, Cartesian, Outer, Self, etc.): See SQL for Beginners (Pg. 105)
Sub Queries
Set Operators (MINUS, UNION, etc)
*To check how many OBJECTS (# of tables, indexes, triggers, sequences, functions, procedures, etc) are in a database, query the DBA_OBJECTS view (Select * from DBA_OBJECTS).
*To check the VIEWs defined in a database, query the DBA_VIEWS view (Select * from DBA_VIEWS).
We will cover the different types of SQL statements today, July 29, 2014.
Types of SQL Statements
RE: http://docs.oracle.com/database/121/SQLRF/statements_1001.htm
There are five types of SQL statements, namely:
Data Definition Language (DDL) Statements
RE: http://docs.oracle.com/cd/E11882_01/appdev.112/e10766/tdddg_objects.htm
Data Manipulation Language (DML) Statements
RE: http://docs.oracle.com/cd/E11882_01/appdev.112/e10766/tdddg_dml.htm
Transaction Control Statements
COMMIT
ROLLBACK
SAVEPOINT
SET TRANSACTION
SET CONSTRAINT
Session Control Statements
ALTER SESSION
SET ROLE
System Control Statement
ALTER SYSTEM
Please read the materials in the links included in these notes and ask as many questions during class as possible. Keep in mind that you run a script file – a file containing a bunch of commands – as shown below.
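A minimal sketch of running a script file from SQL*Plus (the file name and path are hypothetical examples):
SQL> @/home/oracle/scripts/my_script.sql
or, straight from the OS prompt:
$ sqlplus scott/tiger @/home/oracle/scripts/my_script.sql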
To find out which user (SYSTEM, DBA, etc) has which kind of system privileges, check the DBA_SYS_PRIVS data dictionary view
*Select * from DBA_SYS_PRIVS; (Question: Does DBA have the right to drop a table?)
*To see all the DISTINCT privileges (about 100 of them) in DBA_SYS_PRIVS, do Select DISTINCT privilege from DBA_SYS_PRIVS; (e.g. Drop Table)
*To see ALL the people (grantees) who have been granted privileges in ORACLE, do Select DISTINCT grantee from DBA_SYS_PRIVS; (e.g. DBA)
*To see whether (YES/NO) a privilege was granted to a grantee with the admin option, do Select DISTINCT admin_option from DBA_SYS_PRIVS; (Yes)
NOTE: ALWAYS ask yourself what you want to do in terms of privileges (You can first see them all, then...)
For Example, do I want to:
* ALTER a SYSTEM
*ALTER SESSION (occurs when you're switching from one CONTAINER to another or from one DATABASE session to another (e.g. while working on the ORCL database, you switch from ORCL to AMADB))
*ALTER a Database
*ALTER a TABLE,TRIGGER,INDEX,SEQUENCE,
*DROP a DATABASE,TABLE(objects),
*AUDIT a SYSTEM
See More HERE
Do I have the right permissions (privileges) to carry out the above tasks? If not, then they need to be GRANTED by someone with higher privileges, e.g. a DBA. If a user already exists in the database, then you can only INCREASE their permissions, NOT create a NEW USER with the Create user command.
NOTE: I have just ONE DATABASE = AMADB and many sessions which I can open from command lines (up to 248), and many processes could be running at the same time (up to 150). When a session makes an update, you need to hit commit to confirm the change if it is DML
SEE how to UPDATE an existing TABLE (single update, bulk update, etc)
*Show parameter processes, Show parameter sessions (displays the PROCESSES and SESSIONS initialization parameters for the database). Don't confuse the DATABASE with SESSIONs, PROCESSES or INSTANCES
PLAYING WITH TABLES: Create your own Table (CHANDO) and then grant permissions, update, delete, insert, alter. Search for the table with Select * from dba_tables where table_name='CHANDO';
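A minimal sketch of that exercise (run as a suitably privileged user; the columns and the grantee scott are made up for illustration):
CREATE TABLE CHANDO (id NUMBER, name VARCHAR2(30));
INSERT INTO CHANDO VALUES (1, 'first row');
UPDATE CHANDO SET name = 'updated row' WHERE id = 1;
COMMIT;
GRANT SELECT, INSERT, UPDATE, DELETE ON CHANDO TO scott;
SELECT * FROM dba_tables WHERE table_name = 'CHANDO';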
Note: When you CREATE any TABLE (e.g. CHANDO), it is recorded by default in the DBA_TABLES dictionary view (i.e. all tables get a default TABLESPACE after creation unless specified otherwise)
When you create a new TRIGGER, it is recorded in the DBA_TRIGGERS dictionary view. When you create a new user, it is recorded in DBA_USERS; a new DATABASE appears in V$DATABASE
KEN, you've created your own table (CHANDO). Create your own TRIGGER, PROCEDURE, FUNCTION, and grant the necessary permissions on your objects to other users.
WHERE UPPER(column, e.g. account_status) LIKE '%LOCK%' to check account_status => whether it's open, or expired and locked. Hence use DISTINCT in your select statement.
I. Querying More Than One Table
1. Joins: a table join means you join 2 (or several) tables, then compare rows using conditions (e.g. =, >, ON clause, <, null, etc.)
RE: http://dwhlaureate.blogspot.com/2012/08/joins-in-oracle.html
Also: http://docs.oracle.com/cd/E11882_01/server.112/e26088/queries006.htm
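A small sketch against the scott schema used throughout these notes (an equijoin and a left outer join; the column names are the standard scott.emp/dept ones):
-- equijoin: employees with their department names
SELECT e.ename, d.dname
FROM scott.emp e, scott.dept d
WHERE e.deptno = d.deptno;
-- outer join: also list departments that have no employees
SELECT d.dname, e.ename
FROM scott.dept d LEFT OUTER JOIN scott.emp e ON (d.deptno = e.deptno);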
Profile Create Syntax:CP-SEs-CPu-CPu-Connect-Idle-logicalses-logicalreads-comp-privateSGA-Failedloginattempts-pwlife-pwreuse-pwgrace-pwverify
dba_profiles; dba_roles
Privileges, Roles and Profiles
*The Default Profile sets all RESOURCE limits to UNLIMITED (i.e. users have no limits on the resources (e.g. sessions, CPU, etc) they can use)
*ALWAYS describe your table/view to see its STRUCTURE; that way you can create profiles, roles and privileges following the right syntax, as in select * from dba_profiles (dba_roles, dba_sys_privs)
PROFILE CREATION by Ken CHANDO. Do check it from dba_Profiles;
GRANTING ROLES and PRIVILEGES
REVOKING Roles and Privileges
Table names and Dictionary View
TO SEE ALL PRIVILEGES GRANTED THAT CONTAIN "CR"
*SEE ALL SYSTEM PRIVILEGES GRANTED THAT CONTAIN "CR" (e.g. CREATE ...)
SELECT * FROM DBA_SYS_PRIVS
WHERE PRIVILEGE LIKE '%CR%'
ORDER BY 1;
PROFILE CREATION (syntax mnemonic: CP-SEs-CPu-CPu-Connect-Idle-logicalses-logicalreads-comp-privateSGA-Failedloginattempts-pwlife-pwreuse-pwgrace-pwverify; see DBA_PROFILES)
CREATE PROFILE C##PROCESS LIMIT
SESSIONS_PER_USER UNLIMITED
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL UNLIMITED
CONNECT_TIME UNLIMITED
IDLE_TIME UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
COMPOSITE_LIMIT UNLIMITED
PRIVATE_SGA UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_LOCK_TIME 1
PASSWORD_GRACE_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION NULL;
Question: Ken created the TELECOM table. Can he EXPORT or IMPORT it?
ALTER table (RENAME COLUMNS)
CREATING TABLESPACES
*In Oracle 12c, a SINGLE Container Database (CDB) is allowed to host multiple separate Pluggable Databases (PDBs)
*Management of tablespaces in a CDB is no different from that in a non-CDB database. Provided you're logged in as a privileged user and pointing to the root container, the usual commands are all available.
CONN /AS SYSDBA
SQL>Show CON_NAME
CON_NAME
------------------------------
CDB$ROOT
*Managing Control Files: (V$CONTROLFILE, V$DATABASE, V$PARAMETER, V$CONTROLFILE_RECORD_SECTION)
-A CONTROL FILE is a SMALL BINARY file that records the PHYSICAL structure of the database.
If you (the DBA) want to back up your control file to trace, do:
-Select name from V$Controlfile; > Alter Database BACKUP Controlfile to TRACE AS 'C:\DBA_WORK\NEW_CF.sql';
*Managing REDO LOG :(V$LOGFILE,V$LOG,V$LOG_HISTORY)
-A REDO Log: Consists of 2 or more Pre-allocated files that store CHANGES made to the database as they occur.
*ARCHIVED Redo Logs :(V$ARCHIVED_LOG,V$BACKUP_REDOLOG,V$ARCHIVED_PROCESSES)
-The archived redo log destination holds filled redo log groups. The process of turning REDO LOG files into ARCHIVED logs is called ARCHIVING, and this happens only when the database is running in ARCHIVELOG mode. Archive logging can be MANUAL or AUTOMATIC
Note: In ARCHIVELOG mode, a filled redo log group cannot be REUSED by LGWR (the log writer process) until it has been archived. The background process (ARCn) automates ARCHIVING operations when automatic archiving is enabled
*Managing TABLESPACEs(DBA_TABLESPACES,V$TABLESPACE,DBA_DATA_FILES,DBA_USERS)
TABLESPACE:
For this to exist, there must be a Database>Files(controlfile,datafile,redo logs)
*The first tablespace created in a database is the SYSTEM tablespace (i.e. it contains the data dictionary and basic information about the database server). SYSTEM is also a user; a user plus the objects added to it becomes a SCHEMA (i.e. USER + objects = schema)
Differentiate: User > Tablespace > SCHEMA (e.g. SYSTEM, SYSAUX)
Manipulating TABLESPACES:
Create Tablespace or Create Temporary Tablespace (the user needs the right privilege).
After you create a tablespace, you might want to alter it: do Alter TABLESPACE or Alter DATABASE (this also needs the right privilege). CREATE UNDO TABLESPACE creates a tablespace designed to contain UNDO records; e.g. after row 10 is created and then deleted, its before-image can be found in the UNDO tablespace (just like the Recycle Bin), and you (the DBA) can decide to restore the deleted data by using the ROLLBACK command (before it is committed)
-Locally managed Tablespaces Vs Dictionary-managed tablespaces
*Managing DATAFILES and TEMPFILES(V$DATAFILE,DBA_DATA_FILE and V$TEMPFILE)
-Datafiles are PHYSICAL files of the operating system that store the data of all logical structures in the database. They need to be EXPLICITLY created for each tablespace.
*Managing UNDO(V$UNDOSTAT,V$ROLLSTAT,V$TRANSACTION,DBA_UNDO_EXTENTS,DBA_HIST_UNDOSTAT)
-Oracle allows ROLLBACK of data (objects, etc) if it hasn't been COMMITted yet.
-The DBA determines how long to retain UNDO (deleted) data, e.g. for 30 mins, 72 hours, or 1 week, by:
UNDO_RETENTION
To set the minimum UNDO retention period: set UNDO_RETENTION in the init parameter file.
SQL>ALTER SYSTEM Set UNDO_RETENTION = 1800 SCOPE=BOTH;
To verify: Show parameter UNDO;
*ORACLE MANAGED FILES(OMF):
OMF works well with a Logical Volume Manager (LVM): the DBA needs to specify ONLY the file system directories, and the database AUTOMATICALLY creates, names and manages the files at the database object level. OMF creates and deletes files for the following:
TABLESPACES, REDO log files, CONTROL files, archived logs, block change tracking files, FLASHBACK logs, RMAN backups
-OMF doesn't affect the creation or naming of ADMINISTRATIVE files such as: TRACE files, audit files, alert logs, core files.
*DESCENDING ORDER (Sorting) e.g. Select * from DBA_ROLES order by Role DESC;
To know all ACCOUNTs that are Open in your database,do:
select username,account_status,profile,default_tablespace,temporary_tablespace from dba_users
where account_status='OPEN';
Use CREATE CONTROLFILE to create a new control file and to rename a database whose control file is CORRUPT or can't be accessed.
An alternative to the CREATE CONTROLFILE statement is ALTER DATABASE BACKUP CONTROLFILE TO TRACE, which generates a SQL script in the trace file to re-create the controlfile. If your database contains any read-only or temporary tablespaces, then that SQL script will also contain all the necessary SQL statements to add those files back into the database. Please refer to the ALTER DATABASE "BACKUP CONTROLFILE Clause" for information creating a script based on an existing database controlfile.
To create a control file, you must have the SYSDBA system privilege.
The database must not be mounted by any instance. After successfully creating the control file, Oracle mounts the database in the mode specified by the CLUSTER_DATABASE parameter. The DBA must then perform media recovery before opening the database. If you are using the database with Real Application Clusters, you must then shut down and remount the database in SHARED mode (by setting the value of the CLUSTER_DATABASE initialization parameter to TRUE) before other instances can start up.
DBMS_Utility contains a list of all Dictionary Views
INSTANCES
The SGA contains memory buffers that are allocated each time an instance is started.
Note: The DATABASE needs to be OPENED for an instance to manipulate it.
Instance vs Database differences
Listener
TNSnames.ora
Unlocking Accounts and Resetting Oracle passwords
Control File Purpose or Role
DATABASE LINKS (Open database)
DIFFERENCEs between DATABASE and INSTANCE in ORACLE
To KNOW where each USER's default tablespace and temp tablespace is located: select username,account_status,profile,default_tablespace,temporary_tablespace from dba_users
ALL accounts in a database (status, users e.g. SYS, SYSTEM) are in DBA_USERS;
Topic
1. Oracle Enterprise Manager (OEM) Grid Control
Ø Managing the Oracle Database Instance
i. Start and stop the Oracle database and its components through OEM
ii. Use Oracle Enterprise Manager to monitor and manage your database
iii. Describe database shutdown options
2. FLASHBACK TABLE:http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9012.htm
3. FLASHBACK DATABASE: http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsynta023.htm
http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm
system/"Autumn2013Love!"
To view the LIST of SCHEMAs in a DATABASE, do desc DBA_OBJECTS > select…
*LOOKING at OBJECTS types in USER_OBJECTs ( in a database)
select DISTINCT object_name,object_type from User_objects
where object_name like 'S%'
order by object_name
*To VIEW ALL CATALOGS in a DATABASE, do SELECT * from CAT;
BACKUP and RECOVERY
*A backup is a copy of data. Backups are divided into PHYSICAL and LOGICAL backups.
-A physical backup is a copy of the PHYSICAL DATABASE files and can be done with either Recovery Manager (RMAN) or operating system utilities.
-A logical backup contains LOGICAL DATA, e.g. TABLES and stored PROCEDURES, extracted with an Oracle utility and stored in a BINARY file.
*There are 2 ways to perform Oracle BACKUP and RECOVERY: Recovery Manager (RMAN) and user-managed backup and recovery.
Recovery Manager (RMAN) is an Oracle utility that can back up, restore, and recover database files. It is a feature of the Oracle database server and does not require separate installation.
You can also use operating system commands for backups and SQL*Plus for recovery. This method, also called user-managed backup and recovery, is fully supported by Oracle, although use of RMAN is highly recommended because it is more robust and greatly simplifies administration.
*Whether RMAN or user-managed backup is used, you can supplement your physical backups with logical backups of SCHEMA objects made using the EXPORT utility. The utility writes data from an Oracle database to BINARY operating system files. You can later use IMPORT to restore this data into a database.
Using the Oracle Data Pump API
Data Pump Export: http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm
Data Pump Import: http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_import.htm
Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.
Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.Import can also be used to load a target database directly from a source database with no intervening dump files. This is known as a network import.
References
http://docs.oracle.com/cd/B19306_01/server.102/b14220/backrec.htm
Oracle Database Backup & Recovery Guide: http://docs.oracle.com/cd/E11882_01/backup.112/e10642.pdf
Data Pump Export Modes
Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:
Full Export Mode
Schema Mode
Table Mode
Tablespace Mode
Transportable Tablespace Mode
Note:
A number of system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.
Invoking Data Pump Export
The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.
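A minimal sketch of a schema-mode export (the credentials, directory object and file names are placeholders for illustration; the directory object must already exist and be writable by the database):
expdp system/password DIRECTORY=BACKUP_FILES DUMPFILE=scott_%U.dmp LOGFILE=scott_exp.log SCHEMAS=scott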
Note:
Do not invoke Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
The following sections contain more information about invoking Export:
Data Pump Export Interfaces
Data Pump Export Modes
Network Considerations
Note:
It is not possible to start or restart Data Pump jobs on one instance in an Oracle Real Application Clusters (RAC) environment if there are Data Pump jobs currently running on other instances in the Oracle RAC environment.
Data Pump IMPORT:
Note:
Although Data Pump Import (impdp) functionality is similar to that of the original Import utility (imp), they are completely separate utilities and their files are not compatible. See Chapter 20, "Original Export and Import" for a description of the original Import utility.
Invoking Data Pump Import
The Data Pump Import utility is invoked using the impdp command. The characteristics of the import operation are determined by the import parameters you specify. These parameters can be specified either on the command line or in a parameter file.
Note:
Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
Note:
Be aware that if you are performing a Data Pump Import into a table or tablespace created with the NOLOGGING clause enabled, a redo log file may still be generated. The redo that is generated in such a case is generally for maintenance of the master table or related to underlying recursive space transactions, data dictionary changes, and index maintenance for indices on the table that require logging.
The following sections contain more information about invoking Import:
Data Pump Import Interfaces
Data Pump Import Modes
Network Considerations
When a DATA PUMP job is created, check the jobs and sessions at:
*DBA_DATAPUMP_JOBS
*DBA_DATAPUMP_SESSIONS
*V$SESSION_LONGOPS
TABLESPACE
CREATE [BIGFILE | SMALLFILE] [UNDO | TEMPORARY | plain user] TABLESPACE CHANDO
DATAFILE/TEMPFILE 'temp01.dbf' / 'bigtbs01.dat' / 'undotbs_01.f' SIZE 5M AUTOEXTEND ON;
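A concrete sketch of two common cases (the file paths and sizes are illustrative, assuming the amadb database created earlier):
CREATE TABLESPACE chando
  DATAFILE '/u01/app/oracle/oradata/amadb/chando01.dbf' SIZE 100M AUTOEXTEND ON NEXT 50M MAXSIZE 2G;
CREATE TEMPORARY TABLESPACE chando_temp
  TEMPFILE '/u01/app/oracle/oradata/amadb/chando_temp01.dbf' SIZE 100M AUTOEXTEND ON;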
SELECTING DATA from 2+ tables (you need to add a JOIN clause)
Example 3-9 Selecting Data From Multiple Tables With the SQL JOIN USING Syntax
-- the following SELECT statement retrieves data from two tables
-- that have a corresponding column (department_id)
-- note that the employees table has been aliased to e and departments to d
SELECT e.employee_id, e.last_name, e.first_name, e.manager_id, department_id,
       d.department_name, d.manager_id
FROM employees e JOIN departments d USING (department_id);

-- the following SELECT retrieves data from three tables
-- two tables have the corresponding column (department_id) and
-- two tables have the corresponding column (location_id)
SELECT e.employee_id, e.last_name, e.first_name, e.manager_id, department_id,
       d.department_name, d.manager_id, location_id, l.country_id
FROM employees e JOIN departments d USING (department_id)
     JOIN locations l USING (location_id);
CREATING ALIASes for TABLES, e.g. below (you're qualifying the columns with the alias letters 't.' and 's.'; test (a subset) = user_tables)
CREATE TABLE test AS
SELECT t.table_name, t.tablespace_name, s.extent_management
FROM user_tables t, user_tablespaces s
WHERE t.tablespace_name = s.tablespace_name
AND 1=2;
FLASHBACK
*FLASHBACK TABLE: Use FLASHBACK TABLE statement to restore a previous state of a table in the event of a human or application error (e.g. deleted,added new records,etc)
Note:
Oracle strongly recommends that you run your database in automatic undo mode by leaving the UNDO_MANAGEMENT initialization parameter set to AUTO, which is the default. In addition, set the UNDO_RETENTION initialization parameter to an interval large enough to include the oldest data you anticipate needing. For more information refer to the documentation on the UNDO_MANAGEMENT and UNDO_RETENTION initialization parameters.
Note:
* Remember to: Set UNDO_RETENTION = 1800;(for e.g.)
*Before issuing the FLASHBACK TABLE statement, first record the SCN # of the table you want to flash back. This is needed in case you want to FLASHBACK again to the previous state of the table you just changed.
FLASHBACK DATABASE for information on reverting the entire database to an earlier version
the flashback_query_clause of SELECT for information on retrieving past data from a table
Oracle Database Backup and Recovery User's Guide for additional information on using the FLASHBACK TABLE statement
*To use the FLASHBACK TABLE statement, you MUST have the right privilege to do so.
-To flash back a table to an earlier SCN or timestamp, you must have either the FLASHBACK object privilege on the table or the FLASHBACK ANY TABLE system privilege. In addition, you must have the SELECT, INSERT, DELETE, and ALTER object privileges on the table.
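A minimal sketch of the statement (row movement must be enabled on the table first; the table and the SCN/timestamp values are placeholders):
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE scott.emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
-- or, using an SCN recorded beforehand:
-- FLASHBACK TABLE scott.emp TO SCN 1234567;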
ORACLE 12c
CREATE TABLE JOSEPHINE.EMP AS SELECT * FROM SCOTT.EMP; (CREATES an EXACT REPLICA OF the SCOTT.EMP TABLE AND NAMES IT JOSEPHINE.EMP, WHERE SCOTT=SCHEMA, JOSEPHINE=SCHEMA)
Once you create a table, a ROLE should be created (e.g. Create ROLE FINANCE) > Grant privileges to that ROLE, e.g. grant SELECT ON JOSEPHINE.EMP to FINANCE (meaning, when Josephine logs in, she can only do SELECT on that table. She will not be able to DROP, INSERT or do anything else apart from SELECT on the table provided to her.)
*The CONNECT role should be granted first, before SELECT/any other role is provided to a user. This is because before EXERCISING any ROLE, a user MUST first CONNECT to the database. (Only SYS and SYSTEM have this CONNECT role by default.)
*DBA_USERS (anything DBA_*** can only be queried by the SYSTEM or SYS users). Anyone else will need to be granted the privilege (assigned a ROLE) to query DBA_*** objects. Anything ALL_*** (e.g. ALL_TABLES) can be viewed by all SCHEMAs (users) regardless of extra rights; those views can be seen by ALL (anyone) who logs into the database.
*When you're logged in as a PARTICULAR schema (e.g. C##SAMUEL), you don't need to specify C##SAMUEL.EMP (it's optional), but if you're logged in as SCOTT and you want to query something in the C##SAMUEL schema, then you need to reference it as C##SAMUEL.EMP (i.e. the Employee table under the C##SAMUEL schema/profile). You need the required privilege to do that, otherwise it's not going to let you make the changes - just like in WINDOWS where kchando can't change another OWNER's folders unless rights are granted to kchando to do so.
*To allow C##SAMUEL to SELECT from any DICTIONARY view in your DATABASE, you need to grant SELECT ANY DICTIONARY to C##SAMUEL
*To see (as SYS/SYSTEM) all tables that belong to a particular user who is logged in to the database, check DBA_TABLES. A user who is currently logged in can see the tables s/he has created under her own profile/schema (via USER_TABLES), but users without the right privilege will not be able to see the tables listed in DBA_TABLES.
*FASTEST WAY TO CREATE A TABLE: CREATE TABLE MIKE.EMP AS Select * from Scott.Emp; > Create Role ACCOUNTING; > Grant SELECT on MIKE.EMP to ACCOUNTING (You can decide to rename the columns so they are not the same as the original table as you create it)
*TIME WAS 11:37(E) AND 11:42(G) 11:47(E). To return to the old time, do a FLASHBACK (from V$DATABASE): FLASHBACK (to the exact time) lets you see what was initially in DUAL before the UPDATEs (changes) were made
*USER CREATION SCRIPT (FROM the C##SAMUEL USER/PROFILE/SCHEMA LOGIN TO the DATABASE)
CREATE USER C##IVR
IDENTIFIED BY "Ken4mira"
DEFAULT TABLESPACE IVR_DATA
TEMPORARY TABLESPACE TEMP
PROFILE DEFAULT
ACCOUNT UNLOCK;
=====
*GRANTING ROLES TO C##IVR
=====
GRANT CONNECT,CREATE TABLE,CREATE SEQUENCE,CREATE PROCEDURE TO C##IVR;
===
*CREATE TABLESPACE
=====
CREATE BIGFILE TABLESPACE IVR_DATA DATAFILE (FOR INDEX: IVR_INDEX)
'C:\ORACLE_DATA\NSIYEP\NSIYEP\telecomm_app_dat_01.dbf' SIZE 100M AUTOEXTEND ON NEXT 50M MAXSIZE 10G
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
=====
*PROFILE CREATION
=====
CREATE PROFILE C##PROCESS LIMIT (OR CREATE PROFILE IVR_PROCESS)
SESSIONS_PER_USER UNLIMITED
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL UNLIMITED
CONNECT_TIME UNLIMITED
IDLE_TIME UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
COMPOSITE_LIMIT UNLIMITED
PRIVATE_SGA UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_LOCK_TIME 1
PASSWORD_GRACE_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION NULL;
==
*ALTER USER'S QUOTA
===
ALTER USER IVR(OR C##IVR) QUOTA UNLIMITED ON USERS;
==
*ALTER USER'S TABLESPACE
==
GRANT UNLIMITED TABLESPACE TO IVR;
RMAN
rman target / (rman target / means connect to the local target database using operating system authentication)
MODIFY PARAMETERS
Modifying PARAMETER file
*TO IMPORT/EXPORT multiple files
To IMPORT files f01,f02,f03,f04,f05.dmp, you can do each by itself or use the wildcard mask %U.dmp
For example, in your IMPORT parameter file:
*DUMPFILE=141001_FB4_%U.dmp (i.e. for ...FB4_01, ...FB4_02, etc = ...FB4_%U.dmp (for all))
*REMAP_TABLESPACE=FB4_DATA2:FB4_DATA (meaning that during the import, anything referencing FB4_DATA2, which is not in my database (where the import is done), should be FORWARDED (remapped) to FB4_DATA (which I have in my database))
*REMAP_SCHEMA=FB4DBA:FB4 (i.e. during the IMPORT of FB4DBA into the database, there is no schema called FB4 currently in my database. So once FB4DBA is imported, point ALL entries to FB4 (this will automatically create the FB4 schema as well).)
EXCLUDE ... clause in the parameter file: SCHEMAS=.....
SCHEMAS=FB4 EXCLUDE=TABLE:"IN('FB4.X','FB4.Y')" (where FB4=schema, X=table; FB4=schema, Y=table)
*GRANT statements FAILING during import... => the person who is being granted those privileges/roles on tables (insert, select, etc) doesn't exist in my current database (i.e. where I'm importing the .dmp (dump) files to). This is OK (don't panic)
*WHENEVER YOU WANT TO IMPORT files, make sure you have the DUMP files (.dmp) moved to the directory/folder you specified in your database (i.e. create directory BACKUP_FILES AS 'C:\DP_EXPORT....' > Grant.... e.g. BACKUP_FILES). Otherwise, a "dump file not found" error will come up.
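A minimal sketch of a parameter file built from the options above (the user, directory object and file names are assumptions for illustration). Contents of a hypothetical imp_fb4.par:
DIRECTORY=BACKUP_FILES
DUMPFILE=141001_FB4_%U.dmp
LOGFILE=imp_fb4.log
REMAP_SCHEMA=FB4DBA:FB4
REMAP_TABLESPACE=FB4_DATA2:FB4_DATA
Run it with:
impdp system/password PARFILE=imp_fb4.par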
DBCA/DBUA
*FROM WEEK XIV
1. Open DBCA (> pin it to the Taskbar) > Create a new DATABASE called RMANCAT
Note: Creating the RMANCAT database adds to your BIGIDY database (i.e. you now have 2 databases. To connect to the right one, do SET ORACLE_SID=RMANCAT (or BIGIDY)) > echo $ORACLE_SID to confirm the right database is chosen.
Note:
*If a target database is not registered in the recovery catalog, then RMAN cannot use the catalog to store metadata for operations on this database
See more here https://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcatdb.htm#BRADV89652
Note: SETTING UP RMAN CATALOG for the very first time in an ENTERPRISE
STEPS for RMAN Recovery_Catalog creation
A.
1. Create a user in a TABLESPACE (e.g. rman in amadbts TBSP)
2. Grant privilege to user (rman) above as owner to recovery catalog
[i.e. grant recovery_catalog_owner to rman;]
B.
1. Login to RMAN and do [Create Catalog;]
*You can specify the TBSP that you want the rman catalog to be created in
[i.e. create Catalog tablespace amadbtbs;]
*You can check the results via sqlplus by [Select Table_name from User_tables;]
2. Register DATABASE in the Rman recovery Catalog you just created above
[I.e. Login to your target database (username/passwd@rmancat) then do: REGISTER DATABASE;]
3. Verify that DATABASE registration to RMAN recovery catalog was successful by;
REPORT SCHEMA;
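A minimal end-to-end sketch of steps A and B above (the passwords, the amadbts tablespace and the rmancat connect string are modeled on these notes and are placeholders):
$ sqlplus / as sysdba
SQL> create user rman identified by rman default tablespace amadbts quota unlimited on amadbts;
SQL> grant recovery_catalog_owner to rman;
SQL> exit
$ rman catalog rman/rman@rmancat
RMAN> create catalog tablespace amadbts;
RMAN> exit
$ rman target / catalog rman/rman@rmancat
RMAN> register database;
RMAN> report schema;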
*RMAN: Used to take a Physical backup of a database.
*A database is a collection of files: DATAFILES (V$DATAFILE), REDO LOG files (V$LOGFILE), CONTROL files (V$CONTROLFILE), PARAMETER file (V$PARAMETER)
*DIFFERENCES between HOT BACKUP and COLD BACKUP
-Hot backup: requires ARCHIVELOG mode; you back up the db while it is up and running; allows you to do a point-in-time recovery (PITR)
-Cold backup: ARCHIVELOG mode either/or; you back up the db while it is shutdown/closed; does not allow a point-in-time recovery - only the last backup
Connection to RMAN:
rman TARGET SYS/Ken4mira
*SOFTWARE DEVELOPMENT LIFECYCLE (SDLC)
Analyze->Design->Develop->Test->Implement->(Back to Analyze )
*Incremental Backup Strategy
-Level 0 (Full Backup)[Every Sunday e.g]
-Level 1 (Only BLOCKS changed since Last FULL BACKUP)(Every other day)[alter database enable block change tracking]
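A minimal sketch of that strategy (block change tracking is enabled once from SQL*Plus - the tracking file path is a placeholder - and the backups are taken from RMAN connected to the target database):
SQL> alter database enable block change tracking using file '/u01/app/oracle/bct.f';
RMAN> backup incremental level 0 database plus archivelog;
RMAN> backup incremental level 1 database plus archivelog;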
*TO FREE MORE SPACE during EXP/IMP, go to:
RMAN>Delete Archivelog All;
*INSTANCE
*CLONE
Analogy: Resume-Cold Backup
*Cold backup of a resume => exit/close the open resume > copy it to a new location > open it (a copy taken at 4:59 pm means 4:59 pm, no changes after that)
*If you open the closed resume (startup) => a single instance. (Double-clicking the resume to open it twice = two (2) instances of the resume opened)
ANALOGY RESUME: INSTANCE (2+ copies of the same resume opened on the same machine (server/PC))
*Only 1 instance of a database (resume) can be opened on a SINGLE machine (server/PC)
*If multiple (e.g. 4+) instances of the same DATABASE are to be opened (one per machine/server), then RAC comes in.
ANALOGY:RESUME-CLONE
*DATABASE Name=BIGIDY => CLONE = rename the BIGIDY database to something else, e.g. a MIKE database, without changing its contents
TOAD tool
*TOAD: a tool used to access a relational database environment and run queries in it.
ISSUES:
*JOIN issues
*FILTER issues(Where...)
*Not EQUAL to operator
*Functions
*UNIONs
*SEARCHES (DBAs: learn how to search optimally; this greatly enhances your database's performance.)
*INDEXING (DBAs: optimize your indexing for good database performance)
*Execution of QUERIES (DBAs: design a good strategy for query execution for optimal performance of your database; a quick execution-plan check is sketched below)
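For the indexing and query-execution points above, a quick way to check how Oracle will execute a statement (EXPLAIN PLAN and DBMS_XPLAN are standard; the table, column and index names are made-up examples):
SQL> CREATE INDEX emp_last_name_idx ON emp(last_name);
SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE last_name = 'SMITH';
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);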
*DATAGUARD:
GOLDEN GATE
ASM
OCI
Performance Tuning
*Which hourly rate are you looking at?
*What is your hourly rate?
DISASTER RECOVERY (Week X_exercise_Disaster_Recovery)
*NOTE: STEPS involved in STARTUP
STARTUP=(startup nomount>alter database mount>alter database open)
ARCHIVELOG MODE=(startup mount>alter database archivelog>alter database open)
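A minimal SQL*Plus sketch of both sequences (standard commands):
SQL> STARTUP NOMOUNT
SQL> -- instance started; control files not yet read
SQL> ALTER DATABASE MOUNT;
SQL> -- control files read; datafiles not yet open
SQL> ALTER DATABASE OPEN;
SQL> -- enabling ARCHIVELOG mode requires the database to be mounted but not open:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;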
*After DATABASE RESTORATION/RECOVERY
1. SHUTDOWN IMMEDIATE > STARTUP > SELECT status FROM v$instance; / SELECT name FROM v$database;
Note: Even if you only took a LEVEL 1 backup (hot backup/inconsistent backup) and you want to RESTORE a database after a disaster, Oracle will go all the way back to the LEVEL 0 backup: Recovery > Restore > (example: backups Mon...Thurs @ 1:00 pm, disaster point-in-time 10:00 pm Thursday) = archived log files covering 1:00 pm to 9:59 pm are applied to bring the database to the point in time (DPITR).
DATAGUARD LECTURES
ADVANTAGES of DATAGUARD
*It helps guard/protect your DATA in case of FAILURE (e.g. production server failure). [It takes your data and puts it ELSEWHERE, making it available for FAILOVER in case of failure]
*Data Guard validates the LOG RECORDS before applying them, preventing the application of any log corruption.
*Data Guard facilitates/enables GEOGRAPHICALLY dispersed sites...
*Data Guard has flexible CONFIGURATION options for the protection level
*Data Guard enables REPORTING and BACKUPs to be diverted to the STANDBY
*Data Guard enables AUTOMATIC RESYNC of a failed primary
*Enables SWITCHOVER for MAINTENANCE (i.e. can make the PRIMARY site the STANDBY, or the STANDBY the PRIMARY, e.g. for backup or maintenance reasons)
DATAGUARD: DR site: has only SELECT permissions (i.e. users can only READ data from tables but can't modify them (insert, delete, etc.)).
-PRODUCTION and STANDBY databases are ALWAYS in SYNC with EACH other.
KINDS of DATAGUARDs
1. LOGICAL DATAGUARD (logical standby databases: SQL Apply; the SQL statements, e.g. DDL such as CREATE, ALTER, DROP, are re-executed)
2. PHYSICAL DATAGUARD (physical standby databases: Redo Apply; data changes, e.g. INSERT, DELETE and other DML, are applied block-for-block)
UNIX COMMANDS
Vi Modes (arrow keys work in normal mode):
*Normal (:q! => quit, forced with !) or Esc
*Insert/Append (start inserting where you are => hit i = insert mode)
*ESC (takes you out of the current mode, e.g. insert, append)
*i > a (i = insert at the cursor; a = start appending after the cursor)
*Shift+A: takes me right to the END of the line
*r (replace): r = replace 1 character; Shift+R (continuous replace mode) > Esc (back out)
*Delete: the x key deletes the character the cursor is on; delete a whole line => hit dd (d twice)
*Shift+J: joins the next line onto the current one > ESC back to normal mode
*Shift+ZZ (saves and exits)
*SEARCH through a file: / (forward slash) = search forward, ? = search backward (/.css looks for .css) > n (next match) or Shift+N (previous)
*Copy-paste: an entire line: press yy (yank) > p (put = paste); or a selection: v (visual mode) > move the cursor (arrows) > y > p
*Dealing with WINDOWS (normal mode > split the window, e.g. :sp usage opens a 2nd window > Ctrl+w > up/down arrow to move between top/bottom > all normal commands apply (e.g. search with /filename) > :q (exit), :wq (save and exit))
FINAL PROJECT
MY LAST DBA_PROJECT:
My last project encompassed both a database and an application upgrade. MAXIMO, an IBM-owned application used for supply chain management, was being upgraded from v6.3.1 to 7.5, and all its Oracle databases were being upgraded from v10.2.0.5 to 11.2.0.3. We had four environments: Development, Test, QA and Production. Development and Test were both single-instance databases while QA and Production were two-node RAC. The production database was configured with a single-instance physical standby database.
The database upgrade was carried out as follows:
We created empty/shell 11.2.0.3 databases and simply migrated (using Data Pump Export/Import) all the users and application schemas from the legacy databases to the new ones. We could afford to use Data Pump because the legacy databases were < 350 GB.
All our databases were running on either AIX 6.1 or Red Hat Linux 2.1.
We used OEM heavily for job scheduling (RMAN backups and other custom jobs), routine database tasks like tablespace resizing and unlocking accounts, and monitoring.
------------------------------------------------------------------------------------------------------------------------------------------------------------------
*Job Description:
Application=APPS=software (200+ of them), e.g. a billing software/app
Finance APP (line of business)
Mobile Work Hand-held Device APP (Duke Energy)
Power Outage Delivery APP (Duke)
Outage Management System (Duke)
*Each DATABASE is tied to an APPLICATION
Monitoring tool: OEM (log in to OEM) or Grid Control. If a backup fails..., or a database is shut down without a blackout, OEM alerts you
*If you log in to OEM, you can see all the servers in the company and all the databases in the company
Logical design: creating the columns (a description of an entity)
Physical design: once you actually specify the columns and datatypes of that entity, e.g. CREATE TABLE (script), indexes
*Memory Max Targe....
Ability to perform Backup and Recovery Tasks: IMPDP/EXPDP -logical, Physical backup: RMAN
Backup Strategy: for PDTN (production): incremental strategy (Sundays: Level 0, Mon-Sat: Level 1)
*Apply a patch: a bug causes a malfunction with an Oracle application. Oracle fixes it and sends you the patch. The tool used to apply an Oracle patch is OPATCH. [Yes, I use OPatch all the time]
*Upgrade: In my last company, we upgraded from 10.2.0.5 to 11.2.0.3. The strategy used was to migrate to 11g: export from 10.2 and import into 11.2.0.3
*A developer requests the DBA to create a user called PAUL > Schema.... (e.g. has 1000 tables) = the application schema, the software that makes the database work (PASSP01)
*QUESTION: What's your experience with SHELL Script?
On a scale of 0-7, I can give myself a 7. I use it to get the job done. I'm not saying I'm an expert.
***I won't consider myself an EXPERT in shell scripting, but I can write a shell script to get the job done***
*In my previous environment, I worked with a 2-node RAC and ASM
*I've been in the IT space for more than 6 years.
**6+ months contract: 1099/W-2: $60/hour: I'm open to NEGOTIATE based on the complexity of the TASK
**Availability: Immediately
Note: When you create a NEW USER, you must grant the CONNECT role or the CREATE SESSION privilege for the user to be able to connect to the database. Otherwise you get the error "user lacks CREATE SESSION privilege". For example:
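A minimal example (the user name PAUL comes from the notes above; the password is a placeholder):
SQL> CREATE USER paul IDENTIFIED BY <password>;
SQL> GRANT CREATE SESSION TO paul;
SQL> -- granting the CONNECT role would also work, since it includes CREATE SESSION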
AIX: Version 6.1
Linux: 2.6
Windows
Sun Solaris
E-business suite(experience): This is an APPLICATION
QUESTION:
*What can you say is YOUR STRENGTH?
-OEM/Grid Control(11g)
-RMAN->Physical Backup
-Data Pump->Logical Backup (impdp/expdp)
-RAC: 2 node :HA
-DG : Single instance : MOUNT mode
-DR:
*DATAGUARD: A standby database used in case of Disaster Recovery (DR)
Single-instance DATAGUARD standby that is started in MOUNT mode (not open)
*FAILOVER (unplanned switchover/disaster) vs SWITCHOVER (intentionally switch over to the DATAGUARD/DR site)
*TELL ME about YOURSELF...use doc in interview from Mike
*CUSTOMER service: Premium,Timely customer service.
*Can support 3,4,5+ nodes RAC(same concept)
MIGRATE: EXPORT from one DATABASE (PASSP01) to PASSD01 (export the schema and import it into PASSD01)
OEM: Monitoring, Grid Control (monitors the DATABASE) after entering an EMAIL address in the OEM setup. When a database goes down, you (the DBA) get an email alert letting you know that the DATABASE is down
*DBARTIZAN= TOAD (used by Bank of America)
*Ticketing system: PICASSO, REMEDY
TYPICAL day: Comes to office>Check Emails>Login to Ticketing system(Remedy)>
BLACK-OUT: Go to OEM and create a BLACK-OUT. This means OEM DOESN'T ALERT me that a database is down. If you want to SHUT DOWN a database and don't want OEM to generate an alert email, you (the DBA) black out the database
PRIORITY: LOW, MEDIUM, HIGH (HIGH only when a PDTN/production database goes down -> tied to OEM, linked to REMEDY)
MAXIMO: This is an IBM-owned APPLICATION used for SUPPLY CHAIN management. IBM produces the software and sells it to companies that deal with supplies. These companies (e.g. Duke Energy) go ahead and install it and log in to it to manage supply activities. Behind it is a DATABASE containing SUPPLY data (tables, datafiles, schemas, etc.)
Note: Behind EVERY application there is a DATABASE (the application is the user-friendly front end so that end users don't have to write SELECT queries or run DML/DDL against the database; it's the DBA's job to do the latter.)
AMAG MISCELLANEOUS NOTES
Hi Bruce,
Find steps performed for spujul2015 patching in DC2LAB.
A. Steps for Rolling Patch on DC2LAB Cluster[ 10.236.28.165(d2lsenpsh165)/10.236.28.166(d2lsenpsh166)]
See the steps I followed to do the spujul2015 rolling patch in DC2LAB below:
1. Download and unzip patch p20803576_112030_Linux-x86-64.zip to the primary node (10.236.28.165) from Oracle Support
2. cd $ORACLE_HOME/patches (cd /u01/app/oracle/patches)
3. mkdir spuapr2015
4. cd /u01/app/oracle/patches/spuapr2015 > mkdir patch
5. scp / win scp p20803576_112030_Linux-x86-64.zip to /u01/app/oracle/patches/spuapr2015
6. cd /u01/app/oracle/patches/spuapr2015 > unzip patch p20803576_112030_Linux-x86-64.zip
7. Get a count of invalid objects using the sh_invalid_objects.sql script from the /u01/app/oracle/scripts directory (a generic sketch of this check appears after these steps)
8. If invalid objects, then run at sql prompt ?/rdbms/admin/utlrp.sql script [i.e. SQL>@?/rdbms/admin/utlrp.sql]
9. Execute sh_invalid_objects script to see if there are any more invalid objects. If none, then proceed to 10 below
10. Create restore point for recovery at sql prompt [i.e. sql> create restore point before_spuapr2015 guarantee flashback database; ]
11. Sudo to root and shut down instance and all nodeapps services on primary (d2lsenpsh165) node:
sudo su -
. .godb
srvctl stop crs
12. Apply the patch on primary (d2lsenpsh165) node as follows:
- Set current directory to the directory where the patch is located and then run OPatch utility by entering the following commands:
cd /u01/app/oracle/patches/spuapr2015/patch#
opatch napply -skip_subset -skip_duplicate
13. Once the patch is applied in primary node (d2lsenpsh165), OPatch will prompt you to apply patch on remote node (d2lsenpsh166)
NOTE: Before you continue patching on remote node(d2lsenpsh166) after the prompt, do the following:
-open a new terminal and login to primary node(d2lsenpsh165) to start another session
-start crs services for primary node(d2lsenpsh165) by running: srvctl start crs
-Verify that the services on the primary node are fully operational
14.Login to remote node(d2lsenpsh166) in another session and stop crs services as follows:
sudo su -
cd /u01/app/11.2.0.3/grid/bin
. .godb
srvctl stop crs
With all services in remote node (d2lsenpsh166) still shutdown,
15.Return to patching session window on primary node (d2lsenpsh165) and apply the patch to remote node(d2lsenpsh166) responding to prompts
16.Once patch is applied to remote node(d2lsenpsh166),restart crs services on d2lsenpsh166 node using window in which you stopped crs as follows:
-srvctl start crs
-Allow a couple of minutes for crs to start
-Verify that all services are started
Note: Verify patch applied on either node using OPatch lsinventory
POST spujul2015 PATCH INSTALLATION
==================================
17.Apply post patch script to ONLY one node of cluster. On primary node(d2lsenpsh165) ONLY, run catbundle.sql script to load modified SQL Files into database: As oracle user do:
#cd $ORACLE_HOME/rdbms/admin
#sqlplus /nolog
SQL> connect / as sysdba
SQL> @catbundle.sql cpu apply
SQL> quit
**NOTE**catbundle must only be run on one node of the cluster.
18. Check the log files in $ORACLE_HOME/cfgtoollogs/catbundle for any errors:
catbundle_CPU_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_CPU_<database SID>_GENERATE_<TIMESTAMP>.log
where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS
19. Check for invalid objects (run the sh_invalid_objects.sql script and compare to the count from step 7 above)
# scripts
# sql
SQL> @/u01/app/oracle/scripts/sh_invalid_objects.sql
-- if invalid objects ---run
SQL> @?/rdbms/admin/utlrp.sql
SQL> @sh_invalid_objects
20. Check registry history:
from scripts directory on either node:
# sql
SQL> @/u01/app/oracle/scripts/sh_reghist.sql
<< RAC Patching is complete >>
21. Once verification is complete, drop the restore point before_spuapr2015
# sql
SQL> drop restore point before_spuapr2015;
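The sh_invalid_objects.sql script used in steps 7-9 and in the post-patch check is a site-specific script under /u01/app/oracle/scripts; its exact contents are not shown in these notes, but a generic check of the same kind (my assumption of what it does) is:
-- generic invalid-object count (assumption), not the actual site script:
SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type
ORDER  BY owner, object_type;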
B. Steps for Standalone(DR) Patch on DC2LAB [ 10.236.28.242(d2lsenpsh242)]
The steps for spujul2015 patching of the Standalone (DR) node are as follows:
1. Download and unzip patch p20803576_112030_Linux-x86-64.zip to the DR node (10.236.28.242) from Oracle Support
2. cd $ORACLE_HOME/patches (cd /u01/app/oracle/patches)
3. mkdir spujul2015
4. cd /u01/app/oracle/patches/spujul2015 > mkdir patch
5. scp / win scp p20803576_112030_Linux-x86-64.zip to /u01/app/oracle/patches/spujul2015
6. cd /u01/app/oracle/patches/spujul2015 > unzip patch p20803576_112030_Linux-x86-64.zip
7. Get count of invalid objects using script sh_invalid_objects.sql from /u01/app/oracle/scripts directory
8. If invalid objects, then run at sql prompt ?/rdbms/admin/utlrp.sql script [i.e. SQL>@?/rdbms/admin/utlrp.sql]
9. Execute sh_invalid_objects script to see if there are any more invalid objects. If none, then proceed to 10 below
10. Create restore point for recovery at sql prompt [i.e. sql> create restore point before_spujul2015 guarantee flashback database; ]
11. Shutdown all oracle services [sql>shutdown immediate]
12. Stop all listeners [lsnrctl stop]
13. Apply patch on Standby DR by doing the following:
- Set current directory to the directory where the patch is located and then run OPatch utility by entering the following commands:
cd /u01/app/oracle/patches/spujul2015/patch#
opatch napply -skip_subset -skip_duplicate
14. Once verification is complete, drop the restore points from STANDBY DR node via: SQL> drop restore point before_spujul2015;
**NOTE** I Didn’t do catbundle.sql cpu apply on Standalone node (DR) because it wasn’t’ very explicit to do so from the Oracle Support site. I would need your thought here Bruce.
Ken,
Also, on the issue of applying catbundle on standby, you should not do that. Catbundle applied on one node of the cluster is sufficient for the cluster as well as the standby.
Bruce
OEM
====
CREATING NOTIFICATION RULES in OEM
1. SETUP>INCIDENT RULES>CREATE RULE>enter Name of Rule/Description>Select Target(Job/Metric Extensions/Self Update)>Select Target(Database Server/all targets(Mission Critical/Production/Staging/Test/Development=>You can specify(+ADD)/EXCLUDE the Database target(s) you want/don't want the RULE to APPLY to)>Save
2. You can view/edit Rules set on specific target(database(s)):SETUP>Incident Rules>EDIT Rule(REMEDY Monitoring)>select RULES>EDIT rule>Select Event>Conditional Actions>Review
3. IWMS [Training Database] Notification Rules: EVENTS alerts: Incident Rules>View Rule Set: IWMS [Training Database] Notification Rules>applies to/Alert Log/Tablespace Allocation/Tablespace Full/Recovery Area/Archive Area/Database Services/Fast Recovery=>Severity=send CRITICAL Warnings... on Threshold reached or above
*PLATFORMS
To see the different PLATFORMS that host ORACLE databases in your enterprise: ENTERPRISE>CONFIGURATION>INVENTORY and USAGE DETAILS [14 RHEL(v5.11)/8 SUN OS/3 RHEL(v6.6)/1 RHEL(v5.10)]
*SQL PERFORMANCE ANALYZER
To see how system changes impact SQL performance by identifying variations in SQL execution plans and statistics caused by a system change. It works by running the SQL statements in a SQL Tuning Set one after another, from a single instance session, before and after the change (e.g. patching, upgrade, etc.). For each SQL statement executed, SQL Performance Analyzer captures the execution plan and statistics and stores them in the TARGET database.
How TO…: To run the SQL PERFORMANCE ANALYZER: Go To ENTERPRISE>QUALITY MANAGEMENT>SQL PERFORMANCE ANALYZER>SEARCH database Target Name>Select Target Database(e.g. BASSP)>Continue>Login>ADVISOR CENTRAL[ADDM/Maximum availability architecture/Segment Advisor/Streams Performance Advisor/Automatic Undo Management/Memory Advisors/SQL Advisors/Data Recovery Advisor/MTTR Advisor/SQL Performance Analyzer]>Select SQL Performance Analyzer WorkFlow item[Upgrade from 9i or 10.1/Upgrade from 10.2 or 11g/Parameter Change/Optimizer Statistics/Exadata Simulation/Guided WorkFlow]
*DATABASE INSTANCE e.g: BASSD> CHECKER CENTRAL>ADVISOR CENTRAL>Checkers/undo Segment Integrity Check/Redo Integrity Check/DB Structure Integrity Check/CF Block Integrity Check/Data Block Integrity Check/Dictionary Integrity Check/Transaction Integrity Check
*OEM DATABASE PERFORMANCE: Case study database = BASSD
1. CHECK for BLOCKING SESSIONS: BASSD>Performance>Blocking Sessions>/Top Consumers/Duplicate SQL/Instance LOCKS/Instance Activity/SQL Response Time
2. Check for DATABASE REPLAY: Performance>Database Replay
3. Check for SEARCH SESSIONS:Performance>Search Sessions
4. Check for Adaptive Thresholds: Performance>Adaptive Thresholds
5. Check for Real-Time ADDM: Performance>Real-Time ADDM
6. Check for Emergency Monitoring: Performance>Emergency Monitoring
7. Check for Memory Advisor: Performance>Memory Advisor
8. Check for Advisors Home: Performance>Advisors Home
9. Check for AWR: Performance>AWR>AWR Report/AWR Administration/Compare Period ADDM/Compare Period Reports
10. Check for SQL: Performance>SQL>SQL Tuning Advisor/SQL Performance Analyzer/SQL Access Advisor/SQL Tuning Sets/SQL Plan Control/Optimizer Statistics/Cloud Control SQL History/Search SQL/Run SQL/SQL Worksheet
11. Check for SQL Monitoring: Performance>SQL Monitoring
12. Check for ASH Analytics: Performance>ASH Analytics
13. Check for TOP Activity: Performance>Top Activity
*OEM DATABASE ORACLE DATABASE: Case study database = BASSD
1. Home: Oracle Database>Home
2. Monitoring: Oracle Database>Monitoring>User Defined Metrics/All Metrics/Metric and Collection Settings/Metric Collection Errors/Status History/Incident Manager/Alert History/Blackouts
3. Diagnostics: Oracle Database>Diagnostics>Support Workbench/Database Instance Health
4. Control: Oracle Database>Control>Startup/Shutdown/Create Blackout/End Blackout
5. Job Activity: Oracle Database>Job Activity
6. Information Publisher Reports: Oracle Database>Information Publisher Reports
7. Logs: Oracle Database>Logs>Text Alert Logs Contents/Alert Log Errors/Archive/Purge Alert Log/Trace Files
8. Provisioning: Oracle Database>Provisioning>Create Provisioning profile/Create Database Template/Clone Database Home/Clone Database/Upgrade Oracle Home&Database/Upgrade Database/Activity
9. Configuration: Oracle Database>Configuration>Last Collected/Topology/Search/Compare/Comparison Job Activity/History/Save/Saved
10. Compliance: Oracle Database>Compliance>Results/Standard Associations/Real-Time Observations
11. Target Setup: Oracle Database>Target Setup>Enterprise Manager Users/Monitoring Configuration/Administrator Access/Remove Target/Add to Group/Properties
12. Target Information: Oracle Database>Target Information
*OEM DATABASE AVAILABILITY: Case study database= BASSD
1. Check for High Availability Console: Availability>High Availability Console/MAA Advisor/BACKUP & RECOVERY[Schedule Backup/Management Current Backups/Backup Reports/Restore Points/Perform Recovery/Transactions/Backup Settings/Recovery Settings/Recovery Catalog Settings]/Add Standby Database
*OEM DATABASE SCHEMA: Case study database = BASSD
1. Users: Schema>Users
2. Database Objects>Schema>Database Objects>Tables/Indexes/Views/Synonyms/Sequences/Database Links/Directory Objects/Reorganize Objects [desc dba_ob>select * 4m ob]
3. Programs: Schema>Programs/Packages/Package Bodies/Procedures/Functions/Triggers/Java Classes/Java Sources
4. Materialized Views: Schema>Materialized Views>Show all/Logs/Refresh Groups/Dimensions
5. User Defined Types: Schema>User Defined Types>Array Types/Object Types/Table Types
6. Database Export/Import: Schema>Database Export/Import>Transport Tablespaces/Export to Export Files/Import from Export Files/Import from Database/Load Data from User Files/View Export & Import Jobs
7. Change Management: Schema>Change Management>Data Comparisons/Schema Change Plans/Schema Baselines/Schema Comparisons/Schema Synchronizations
8. Data Discovery and Modeling: Schema>Data Discovery and Modeling
9. Data Subsetting: Schema>Data Subsetting
10. Data Masking Definitions: Schema>Data Masking Definition
11. Data Masking Format Library: Schema>Data Masking Format Library
12. XML Database: Schema>XML Database>Configuration/Resources/Access Control Lists/XML Schemas/XML Type Tables/XML Type Views/XML Type Indexes/XML Repository Events
13. Text Manager: Schema>Text Indexes/Query Statistics
14. Workspaces: Schema>Workspaces
*OEM DATABASE ADMINISTRATION: Case study database = BASSD
1. Initialization parameters: Administration>Initialization Parameters
2. Security: Administration>Security>Home/Reports/Users/Roles/Profiles/Audit Settings/Transparent Data Encryption/Oracle Label Security/Virtual Private Database policies/Application Contexts/Enterprise User Security/Database Vault
3. Storage: Administration>Storage>Control Files/Datafiles/Tablespaces/Make Tablespace Locally Managed/Temporary Tablespace Groups/Rollback Segments/Segment Advisor/Automatic Undo Management/Redo Log Groups/Archive Logs
4. Oracle Scheduler: Administration>Oracle Scheduler>Home/Jobs/Job Classes/Schedules/Programs/Windows/Window Groups/Global Attributes/Automated Maintenance Tasks
5. Streams Replication: Administration>Streams Replication>Setup Streams/Manage Replication/Setup Advanced Replication/Manage Advanced Replication/Manage Advanced Queues
6. Migrate to ASM: Administration>Migrate to ASM
7. Resource Manager: Administration>Resource Manager
8. Database Feature Usage: Administration>Database Feature Usage
******************************************************************************************************************************************************
VIEWING INCIDENTS that happened on your DATABASE (e.g. night before)
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Monitoring>Alert History/Incident Manager>/Events without Incidents/My Open incidents & Problems/Unassigned incidents…
CHECK HEALTH of DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Diagnostics>Database Instance Health
SHUTDOWN DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Control>Startup/Shutdown
VIEW ALERT LOG (Errors) on DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Logs>AlertLog Errors
CLONE/UPGRADE a DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Provisioning>Clone Database/Upgrade Database
MONITOR SQL STATEMENTS
1. Go to TARGETs>DATABASES><database_name>PERFORMANCE>SQL Monitoring/SQL>/SQL TUNING/OPTIMIZER Statistics/Run SQL…>BLOCKING SESSIONS
BACKUP & RECOVERY DATABASE
1. Go to TARGETs>DATABASES><database_name>AVAILABILITY>BACKUP & RECOVERY
DATABASE ADMINISTRATION
1. Go to TARGETs>DATABASES><database_name>ADMINISTRATION>Security(Users,Roles,Profiles)>Storage(Control Files,Datafiles,Tablespace,Rollback segments,Archive Logs)
********************************************************************************************************************************************************
OEM TEMPLATES(SQL scripts) for TASKS
1. DASHBOARD: TARGET>Systems>Members>DASHBOARD
2. TEMPLATE: [looking at the metrics of ALL 14 systems/database at once]>(DB_Name)>DASHBOARD[
BRUCE
=====
[7/31/2015 8:54 AM] Franklin, Bruce:
Ken, gm
[7/31/2015 8:54 AM] Franklin, Bruce:
happy Friday
[7/31/2015 8:54 AM] Chando, Kenneth:
hi Bruce good morning. Thanks Bruce and same to you
[7/31/2015 8:54 AM] Chando, Kenneth:
excellent job...
[7/31/2015 8:54 AM] Franklin, Bruce:
question for you... have you applied that JAVA patch in the lab?
[7/31/2015 8:55 AM] Chando, Kenneth:
I'm about to patch the 165/166 cluster with the OJVN
[7/31/2015 8:55 AM] Chando, Kenneth:
just about to. Finished creating GRP
[7/31/2015 8:55 AM] Chando, Kenneth:
shutting down the database
[7/31/2015 8:55 AM] Franklin, Bruce:
ok, once you are done please send me the steps
[7/31/2015 8:55 AM] Chando, Kenneth:
ok, I will
[7/31/2015 8:59 AM] Chando, Kenneth:
one thing I would like to learn from you Bruce is the Standalone duplicate steps. Not in a hurry. Whenever you're free
[7/31/2015 8:59 AM] Franklin, Bruce:
sure thing
[7/31/2015 8:59 AM] Franklin, Bruce:
we can do that later
[7/31/2015 9:00 AM] Chando, Kenneth:
got you.
[7/31/2015 10:05 AM] Franklin, Bruce:
Ken, you are planning to apply the OJVM patch to ORCLDR standby , correct?
[7/31/2015 10:06 AM] Chando, Kenneth:
yes as well as on .165/.166 cluster
[7/31/2015 10:06 AM] Franklin, Bruce:
ok
[7/31/2015 10:06 AM] Chando, Kenneth:
almost done with cluster
[7/31/2015 10:07 AM] Franklin, Bruce:
how are you coming with getting access on the DHS side?
[7/31/2015 10:07 AM] Chando, Kenneth:
Angela Knouse said, she's waiting on my case closure to PAR approval
[7/31/2015 10:07 AM] Franklin, Bruce:
i am ready to put you to work ;)
[7/31/2015 10:08 AM] Chando, Kenneth:
hahaha...I'm excited...
[7/31/2015 10:09 AM] Franklin, Bruce:
maybe that will be done in time so that you can assist with some of the patching for July SPU and OJVM... i am lining up the schedules with each of my customers
[7/31/2015 10:09 AM] Franklin, Bruce:
give you some good exposure
[7/31/2015 10:11 AM] Chando, Kenneth:
great idea Bruce.
[7/31/2015 12:08 PM] Franklin, Bruce:
hey Ken, question for you...
[7/31/2015 12:08 PM] Chando, Kenneth:
ok sir
[7/31/2015 12:08 PM] Chando, Kenneth:
ride on
[7/31/2015 12:08 PM] Franklin, Bruce:
how much experience do you have with OEM setup?
[7/31/2015 12:09 PM] Franklin, Bruce:
as in the notification piece
[7/31/2015 12:09 PM] Chando, Kenneth:
mostly I have administration support but I'm a fast learner and would be glad if you challenge me with some tasks
[7/31/2015 12:10 PM] Chando, Kenneth:
just finished patching the cluster with OJVN. No issues
[7/31/2015 12:10 PM] Franklin, Bruce:
is it OJVN or OJVM?
[7/31/2015 12:10 PM] Chando, Kenneth:
about to work on the Standalone one after I go to the rest room
[7/31/2015 12:11 PM] Chando, Kenneth:
Will make the steps available to you after I complete the Standalone one. That should be easier since it's just one node
[7/31/2015 12:11 PM] Franklin, Bruce:
do you apply the patch with opatch utility?
[7/31/2015 12:12 PM] Chando, Kenneth:
no worries Bruce. I love it...I am eager to assist you in any way. I know you have alot in your plate
[7/31/2015 12:13 PM] Chando, Kenneth:
feel free to assign them. When I'm stuck, I will always reach back to you
[7/31/2015 12:14 PM] Chando, Kenneth:
will be right back, rushing to the rest room
[7/31/2015 12:18 PM] Chando, Kenneth:
I'm back Bruce
[7/31/2015 3:21 PM] Franklin, Bruce:
hey Ken
[7/31/2015 3:21 PM] Chando, Kenneth:
hi Bruce. patching finished
[7/31/2015 3:22 PM] Franklin, Bruce:
working on the other side, and took a lunch break, too
[7/31/2015 3:22 PM] Chando, Kenneth:
trying to complete the steps
[7/31/2015 3:22 PM] Chando, Kenneth:
wow...so you're energetic to go...Lol
[7/31/2015 3:22 PM] Chando, Kenneth:
just kidding Bruce...
[7/31/2015 3:23 PM] Franklin, Bruce:
ok, you will have the steps documented for applying the SPU and the JAVA patches today?
[7/31/2015 3:23 PM] Chando, Kenneth:
yes, I will...
[7/31/2015 3:24 PM] Chando, Kenneth:
You will get it via email
[7/31/2015 3:24 PM] Franklin, Bruce:
did i ever send you an example of how i do a playbook type document for that?
[7/31/2015 3:25 PM] Chando, Kenneth:
I don't think so
[7/31/2015 3:25 PM] Chando, Kenneth:
wouldn't mind if you make it available
[7/31/2015 3:25 PM] Franklin, Bruce:
it is really simple but helps when working with our Service Account Managers for submitting a change request
[7/31/2015 3:25 PM] Franklin, Bruce:
i will send it to you now via email
[7/31/2015 3:25 PM] Chando, Kenneth:
cool
[7/31/2015 3:29 PM] Franklin, Bruce:
just sent
[7/31/2015 3:30 PM] Franklin, Bruce:
2 playbook files
[7/31/2015 3:30 PM] Chando, Kenneth:
thanks. Just got it
[7/31/2015 3:35 PM] Chando, Kenneth:
Bruce, it's quite similar to the one Lionel sent to me. That's what I have been using too and the steps I'm compiling now might incorporate some components from these playbooks
[7/31/2015 4:08 PM] Chando, Kenneth:
hi Bruce, I just sent the steps I used for DR. I'm still working on the Cluster steps. Will try to finish that by end of day. I'm heading home. Have a great day and a awesome weekend
[7/31/2015 4:08 PM] Franklin, Bruce:
thanks
[7/31/2015 4:09 PM] Chando, Kenneth:
yw!
[7/31/2015 4:09 PM] Franklin, Bruce:
you too
[8/5/2015 11:59 AM] Franklin, Bruce:
hey Ken
[8/5/2015 11:59 AM] Franklin, Bruce:
gm
[8/5/2015 11:59 AM] Chando, Kenneth:
hi Bruce gm
[8/5/2015 11:59 AM] Franklin, Bruce:
finally
[8/5/2015 11:59 AM] Chando, Kenneth:
I'm trying to get the link to the OJVN
[8/5/2015 11:59 AM] Franklin, Bruce:
got off the Remedy bridge call
[8/5/2015 11:59 AM] Chando, Kenneth:
wow...I saw notification that status is back...
[8/5/2015 12:00 PM] Chando, Kenneth:
you made it happen Bruce...Lol
[8/5/2015 12:05 PM] Chando, Kenneth:
just sent the link to you via email as well
[8/5/2015 12:06 PM] Chando, Kenneth:
Bullet one is correct. I guess there was a typo on bullet 5
[8/5/2015 12:06 PM] Franklin, Bruce:
ok; thank you sir
[8/5/2015 12:06 PM] Chando, Kenneth:
the zip file in bullet 5 is the spu which is not for the OJVN
[8/5/2015 12:06 PM] Chando, Kenneth:
you're welcome!
[8/5/2015 12:10 PM] Franklin, Bruce:
yes; that is why i wanted to clarify; i had previously download the spu and knew that probably wasn't the correct file name
[8/5/2015 12:10 PM] Franklin, Bruce:
i am assembling all my documents to get the RFCs going for patching
[8/5/2015 12:11 PM] Franklin, Bruce:
might see if we can get you involved, at least to shadow me on this round
[8/5/2015 12:12 PM] Franklin, Bruce:
we'll talk with Lionel about that
[8/5/2015 12:25 PM] Franklin, Bruce:
as for the order of patching, do the standard PSU, followed by the ojvm?
[8/5/2015 12:26 PM] Chando, Kenneth:
ok Bruce no worries. Anytime...
[8/5/2015 12:27 PM] Franklin, Bruce:
LOL ... that was question
[8/5/2015 12:27 PM] Franklin, Bruce:
;)
[8/5/2015 12:27 PM] Chando, Kenneth:
hahaha...:)
[8/5/2015 12:27 PM] Chando, Kenneth:
I thought that was information
[8/5/2015 12:27 PM] Chando, Kenneth:
yep...go ahead
[8/5/2015 12:40 PM] Franklin, Bruce:
so, that is the correct order for the patching... the Database PSU July, followed by the JVM PSU July?
[8/5/2015 12:41 PM] Chando, Kenneth:
yes, I did follow that order and had no issues
[8/5/2015 12:41 PM] Franklin, Bruce:
ok; thanks
[8/5/2015 12:41 PM] Chando, Kenneth:
yw
[8/5/2015 2:04 PM] Franklin, Bruce:
are you meeting with us?
[8/5/2015 2:05 PM] Chando, Kenneth:
yes
[8/5/2015 4:48 PM] Chando, Kenneth:
hi Bruce
[8/5/2015 4:49 PM] Chando, Kenneth:
wanted to find out when do you plan to do the OJVN install for me to shadow?
[8/5/2015 4:49 PM] Chando, Kenneth:
is that going to be today?
[8/6/2015 9:51 AM] Franklin, Bruce:
Ken, good morning
[8/6/2015 9:51 AM] Franklin, Bruce:
just saw you text from yesterday
[8/6/2015 9:51 AM] Chando, Kenneth:
gm sir...
[8/6/2015 9:52 AM] Franklin, Bruce:
no install of anything on DHS side until we have an ICCB approved RFC
[8/6/2015 9:52 AM] Chando, Kenneth:
yep, was trying to get a time for which schedule patching will take place so that I can log that in my calendar not to forget
[8/6/2015 9:53 AM] Chando, Kenneth:
ok. So you've put in your RFC and now waiting for approval?
[8/6/2015 9:53 AM] Franklin, Bruce:
target it 8/21 for DNDO JACCIS and I plan to get an email out to the other SAMs today so we can set dates for CBP and EAIR
[8/6/2015 9:53 AM] Franklin, Bruce:
i will let you know
[8/6/2015 9:53 AM] Chando, Kenneth:
thanks Bruce!
[8/6/2015 9:53 AM] Franklin, Bruce:
also, please follow-up on the email i just sent you
[8/6/2015 9:54 AM] Chando, Kenneth:
Just FYI, I realized that DR in DC2LAB is around 22% free on FRA. I checked the archivelogs via RMAN Crosscheck and it's below 7days
[8/6/2015 9:54 AM] Franklin, Bruce:
ok
[8/6/2015 9:55 AM] Franklin, Bruce:
looks like maybe hardware or vm issues with disks
[8/6/2015 9:55 AM] Chando, Kenneth:
ok, will check email now
[8/6/2015 9:55 AM] Franklin, Bruce:
thanks
[8/6/2015 10:02 AM] Chando, Kenneth:
thanks Bruce, I will go ahead and start working on the cluster patch as per OPatch documentation
[8/6/2015 10:21 AM] Franklin, Bruce:
ok, just remember to check that in the future before applying a patch
[8/6/2015 10:21 AM] Chando, Kenneth:
I will Bruce. Thanks for pointing this out
[8/6/2015 10:22 AM] Franklin, Bruce:
otherwise, if we have issues and install with an older version than Oracle supports it will be difficult to get their assistance
[8/6/2015 10:23 AM] Franklin, Bruce:
i believe we are okay on this one since we've not had any issues
[8/6/2015 10:23 AM] Chando, Kenneth:
got you
[8/6/2015 10:31 AM] Franklin, Bruce:
Ken, did you remove the directories you created in $ORACLE_HOME/patches ?
[8/6/2015 10:32 AM] Franklin, Bruce:
for the ojvm and SPU patching
[8/6/2015 10:32 AM] Chando, Kenneth:
no I didn't
[8/6/2015 10:33 AM] Franklin, Bruce:
interesting... i don't see either on the 165 or 242 servers
[8/6/2015 10:33 AM] Chando, Kenneth:
the path I had them was /u01/app/oracle/patches
[8/6/2015 10:34 AM] Chando, Kenneth:
it's there on .165
[8/6/2015 10:35 AM] Chando, Kenneth:
oh, I see, you were looking probably in $ORACLE_HOME instead
[8/6/2015 10:35 AM] Franklin, Bruce:
yes\,. there is already a patches directory in $ORACLE_HOME
[8/6/2015 10:35 AM] Chando, Kenneth:
$ORACLE_HOME/patches I mean to say
[8/6/2015 10:36 AM] Franklin, Bruce:
i guess no one told you
[8/6/2015 10:36 AM] Franklin, Bruce:
lol
[8/6/2015 10:36 AM] Chando, Kenneth:
ok...I will be using that going forward. Per document from Lionel, that was point to /u01/app/oracle/patches
[8/6/2015 10:37 AM] Franklin, Bruce:
we should not expect the new guy to know everything that we know, eh?
[8/6/2015 10:37 AM] Chando, Kenneth:
hahaha...that's why you're there....
[8/6/2015 10:37 AM] Chando, Kenneth:
Thanks so much for guiding me...
[8/6/2015 10:37 AM] Franklin, Bruce:
he should have told you
[8/6/2015 10:37 AM] Franklin, Bruce:
from now on, since Lionel is leaving, we blame everything on him
[8/6/2015 10:37 AM] Franklin, Bruce:
got it?
[8/6/2015 10:38 AM] Franklin, Bruce:
:)
[8/6/2015 10:38 AM] Chando, Kenneth:
hahaha...:)
[8/6/2015 10:38 AM] Chando, Kenneth:
you're funny Bruce...that was quite hilarious
[8/6/2015 10:38 AM] Chando, Kenneth:
got you :)
[8/6/2015 10:38 AM] Franklin, Bruce:
i refuse to let work be boring or too serious
[8/6/2015 10:38 AM] Chando, Kenneth:
great attitude and it helps alot
[8/6/2015 10:38 AM] Franklin, Bruce:
it is a blessing from the Lord and He expects us to enjoy what we do
[8/6/2015 10:39 AM] Chando, Kenneth:
100% agreed
[8/6/2015 10:40 AM] Chando, Kenneth:
The Lord requests us to be deligent in all that we do and I try to keep up to that part even though sometimes one falters
[8/6/2015 10:41 AM] Chando, Kenneth:
live is what one makes it. If one wants joy, then one should make everything s/he does joyful. That's my take...
[8/6/2015 10:41 AM] Franklin, Bruce:
i agree
[8/6/2015 10:41 AM] Chando, Kenneth:
so everything you've been guiding me has always helped to make me joyful.
[8/6/2015 10:42 AM] Chando, Kenneth:
Thanks...I will be trying my best to note this good minute details down so that the next guy who comes to join the team don't make my same mistakes
PATCHING TRICKS
1. Steps: Open RFC > Approval > Install Patch
2. TRICK(S): Day 1: prior to opening the RFC, create a restore point > have the patch downloaded and saved into a DIRECTORY on a NODE > Day 2: unzip > install after RFC approval
FLASHBACK
=========
1. Best option, FLASHBACK DATABASE:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
run
{
FLASHBACK DATABASE TO RESTORE POINT 'MWMS_TRAINING_START';
SQL 'ALTER DATABASE OPEN RESETLOGS';
SQL 'DROP RESTORE POINT MWMS_TRAINING_START';
SQL 'CREATE RESTORE POINT MWMS_TRAINING_START GUARANTEE FLASHBACK DATABASE';
}
EXIT;
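After the flashback and the re-created restore point, a quick check that the guaranteed restore point exists (V$RESTORE_POINT is a standard view; this query is not part of the original run block):
SQL> SELECT name, guarantee_flashback_database, scn, time
     FROM v$restore_point;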
2. FLASHBACK SCN
SELECT oldest_flashback_scn, oldest_flashback_time
FROM gv$flashback_database_log;
VIEWING PATHS: cat .godb, cat .goasm
oracle@D2LSENPSH166[orcl2]# pwd
/home/oracle
oracle@D2LSENPSH166[orcl2]# cat .godb
IMPORTANT STEPS
==============
1. ALWAYS create a restore point or a BACKUP of your control file and database prior to doing any upgrade/change (see the sketch after this list)
2. ASM mappings via the paths in cat .godb, cat .goasm
3. Map the database version paths appropriately in ~/.bash_profile (before a restart of the server)
4. Know the most recent database backup set number (important for restore)
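A minimal sketch covering steps 1 and 4 above (the restore point name and control file path are only examples):
SQL> CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/u01/app/oracle/backup/control_before_change.ctl';
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
RMAN> LIST BACKUP SUMMARY;
(note the key of the most recent backup set, per step 4)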
COMMANDS
==========
[root@D2LSENPSH212 ~]# hostname
D2LSENPSH212
[root@D2LSENPSH212 ~]# sudo su - oracle
oracle@D2LSENPSH212[openview]# which version
/usr/bin/which: no version in (/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/u01/app/oracle/product/11.2.0.3/bin::/usr/local/bin:/bin:/usr/bin:/u01/app/oracle/product/11.2.0.3/OPatch)
oracle@D2LSENPSH212[openview]# sql
SQL*Plus: Release 11.2.0.3.0 Production on Sun Aug 30 13:53:12 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name from v$database;
NAME
---------
OPENVIEW
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@D2LSENPSH212[openview]#history
4 pwd
5 ping -a D2LSENPSH212
6 pwd
7 cd /tmp
8 ls -l
9 cp *.sql /u01/app/oracle/scripts
10 cd -
11 ls -l
12 who
13 alog
14 pwd
15 cd ..
16 ls
17 mkdir staging
18 ping -a D2LSENPSH212
19 pwd
20 cd staging
21 ls
22 mkdir upgrade
23 cd upgrade
24 pwd
25 pwd
26 mv upgrade /u01/app/oracle
27 ls
28 pwd
29 cd ..
30 mv upgrade /u01/app/oracle
31 cd ../upgrade
32 ls
33 pwd
34 mkdir 11.2.0.3
35 cd *
36 pwd
37 cd /tmp
38 ls -ltr
39 cdp *.zip /u01/app/oracle/upgrade/11.2.0.3
40 cp *.zip /u01/app/oracle/upgrade/11.2.0.3
41 exit
42 ls -l
43 cd /u01/app/oraInventory
44 sql
45 tail -f /u01/app/oraInventory/logs/installActions2014-11-17_06-29-47PM.log
46 exit
47 df -h
48 cd /u01/app
49 ls
50 cd oracle
51 ls
52 cd upgrade
53 ls
54 cd *
55 ls
56 ls -l
57 unzip p10404530_112030_Linux-x86-64_1of7.zip
58 unzip p10404530_112030_Linux-x86-64_2of7.zip
59 ls
60 unzip p10404530_112030_Linux-x86-64_3of7.zip
61 df -h
62 ls
63 view dbupgdiag.sql
64 pwd
65 cd /tmp
66 ls
67 cp dbupgdiag.sql /u01/app/oracle/upgrade/11.2.0.3
68 cp db.rsp /u01/app/oracle/upgrade/11.2.0.3
69 cp utlu112i_5.sql /u01/app/oracle/upgrade/11.2.0.3
70 ls
71 cd -
72 ls
73 sql
74 df -h
75 cd /u01/oradata/openview
76 ls
77 cd -
78 cd -
79 mkdir backup
80 cd backup
81 pwd
82 cd /u01/app/oracle/upgrade/11.2.0.3
83 sql
84 lsnrctl stat
85 lsnrctl stop
86 sql
87 cd $ORACLE_HOME
88 ls
89 cd ..
90 ls
91 mkdir 11.2.0.3
92 ls
93 pwd
94 cd ../upgrade
95 ls
96 cd 11*
97 pwd
98 ls
99 view db.rsp
100 mv db.rsp db_install_11203.rsp
101 pwd
102 ls
103 cd database
104 ls
105 ./runInstaller -silent -noconfig -ignorePrereq -responseFile /u01/app/oracle/upgrade/11.2.0.3/db_install_11203.rsp
106 pwd
107 ls
108 cd ..
109 ls
110 sql
111 sql
112 cd
113 ls -la
114 cp -p .bash_profile .bash_profilebkp
115 ps -ef |grep -i ora
116 view .bash_profile
117 . .bash_profile
118 cd/etc
119 cd /etc
120 ls
121 view oratab
122 pwd
123 cd /u01/app/oracle/product/11.2.0.3
124 cd dbs
125 ls
126 cd ../network/admin
127 pwd
128 ls
129 ls
130 view listener.ora
131 view sqlnet.ora
132 view tnsnames.ora
133 echo $ORACLE_HOME
134 cd $ORACLE_HOME/rdbms/admin
135 pwd
136 ls -l catupgrd.sql
137 ps -ef |grep -i pmon
138 lsnrctl stat
139 sql
140 ps -ef |grep -i pmon
141 sql
142 cd -
143 cd /u01/app/oracle/upgrade/11.2.0.3
144 ls
145 sql
146 alog
147 sql
148 ps -ef |grep -i pmon
149 pwd
150 ls
151 scp db_install_11203.rsp D2LSENPSH143:/tmp
152 scp db_install_11203.rsp root@D2LSENPSH143:/tmp
153 exit
154 cd /u01/app/oracle/product/11.2.0.3
155 ls
156 sql
157 cd $ORACLE_HOME/dba
158 ls
159 cd $ORACLE_HOME/dbs
160 ls
161 ls -ltr
162 mv OPENVIEW.ora initOPENVIEW.ora
163 ls -ltr
164 cd /u01/app/oracle/product/11.2.0.2
165 cd dbs
166 ls
167 cp *.ora /u01/app/oracle/product/11.2.0.3
168 cp ora* /u01/app/oracle/product/11.2.0.3
169 cd ../network/admin
170 ls
171 pwd
172 cp *.ora /u01/app/oracle/product/11.2.0.3/network/admin
173 cd
174 ls -la
175 . ..bash_profile
176 . .bash_profile
177 echo $ORACLE_HOME
178 cd /u01/app/oracle/product/11.2.0.3/dbs
179 ls
180 cp ora* /u01/app/oracle/product/11.2.0.3/dbs
181 pwd
182 ls
183 cd ..
184 ls
185 cd /u01/app/oracle/product/11.2.0.2/dbs
186 ls
187 cp ora* /u01/app/oracle/product/11.2.0.3/dbs
188 cp *.ora /u01/app/oracle/product/11.2.0.3/dbs
189 cd ../../
190 pwd
191 cd ../upgrade/11.2.0.3
192 pwd
193 ls
194 cd
195 cat .bash_profile
196 cd -
197 ls
198 ls
199 ls -l
200 ping -a D2LSENPSH212
201 sql
202 ps -ef |grep -i pmon
203 cd /tmp
204 ls -ltr
205 cd -
206 cd ../../
207 ls
208 mkdir patches
209 cd patches
210 mkdir spuoct2014
211 cd *
212 pwd
213 cd /tmp
214 cp p19271438_112030_Linux-x86-64.zip /u01/app/oracle/patches/spuoct2014
215 cd -
216 ls
217 unzip p19271438_112030_Linux-x86-64.zip
218 ls
219 cd 19271438
220 ls
221 cat README.txt
222 pwd
223 opatch napply -skip_subset -skip_duplicate
224 cd $ORACLE_HOME/rdbms/admin
225 sql
226 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2014Nov17_22_16_15.log
227 ps -ef |grep -i pmon
228 lsnrctl start
229 lsnrctl stat
230 lsnrctl stat
231 ps -ef |grep -i pmon
232 ps -ef |grep -i pmon
233 lsnrctl stat
234 tnsping openview
235 echo $TNS_ADMIN
236 cd $TNS_ADMIN
237 ls
238 cat tnsnames.ora
239 tnsping ov_net
240 df -h
241 cd
242 cat .bash_profile
243 cd /tmp
244 ls
245 scp dbupgdiag.sql root@D2LSENPSH143:/tmp
246 cp dbup*.sql /u01/app/oracle/upgrade/11.2.0.3
247 cd cd /u01/app/oracle/patches
248 ls
249 cd /u01/app/oracle/
250 cd patches
251 ls
252 cd *
253 ls
254 scp p19271438_112030_Linux-x86-64.zip D2LSENPSH143:/tmp
255 scp p19271438_112030_Linux-x86-64.zip root@D2LSENPSH143:/tmp
256 sql
257 cd
258 view .bash_profile
259 . .bash_profile
260 alog
261 df -h
262 exit
263 cd /u01/app/oracle/patches
264 ls
265 cd *
266 ls
267 scp p19271438_112030_Linux-x86-64.zip lionel.charles@D2LSEUTSH032.localdomain/tmp
268 scp p19271438_112030_Linux-x86-64.zip lionel.charles@D2LSEUTSH032:/tmp
269 cd
270 ls -la
271 cat .bash_profile
272 cd /u01/app/oracle/patches/spuoct2014
273 ls -l
274 scripts
275 ls -ltr
276 sql
277 ls -ltr
278 cat sh_tsdf.sql
279 ls -ltr
280 view sh_tsdf.sql
281 exit
282 sql
283 exit
284 sqlplus opc_op/opc_op@openview
285 grep 1521 /etc/services
286 sqlplus opc_op/opc_op@listener
287 pwd
288 cd network
289 cd /u01/app/
290 dir
291 cd oracle/product/11.2.0.3/
292 dir
293 wpd
294 pwd
295 cd network
296 cd admin
297 dir
298 ll
299 more tnsnames.ora
300 sqlplus opc_op/opc_op@connect_data
301 sqlplus -s
302 sqlplus -s
303 sqlplus -s
304 sqlplus -s
305 sqlplus -s
306 sqlplus opc_op/opc_op@openview
307 sqlplus
308 ll
309 cd /etc/opt/OV/share/conf/OpC/mgmt_sv/report
310 cd /etc/opt/OV/share/conf/OpC/mgmt_sv/
311 cd reports/C
312 dir
313 pwd
314 sqlplus -h
315 pwd
316 vi unmanaged.sql
317 sqlplus
318 REM ***********************************************************************
319 REM File: all_nodes.sql
320 REM Description: SQL*Plus report that shows all nodes in the node bank
321 REM Language: SQL*Plus
322 REM Package: HP OpenView Operations for Unix
323 REM
324 REM (c) Copyright Hewlett-Packard Co. 1993 - 2004
325 REM ***********************************************************************
326 column nn_node_name format A80 truncate
327 column label format A25 truncate
328 column nodetype format A12
329 column isvirtual format A3
330 column licensetype format A3
331 column hb_flag format A4
332 column hb_type format A6
333 column hb_agent format A3
334 set heading off
335 set echo off
336 set linesize 150
337 set pagesize 0
338 set feedback off
339 select ' HPOM Report' from dual;
340 select ' -----------' from dual;
341 select ' ' from dual;
342 select 'Report Date: ',substr(TO_CHAR(SYSDATE,'DD-MON-YYYY'),1,20) from dual;
343 select ' ' from dual;
344 select 'Report Time: ',substr(TO_CHAR(SYSDATE,'HH24:MI:SS'),1,20) from dual;
345 select ' ' from dual;
346 select 'Report Definition:' from dual;
347 select '' from dual;
348 select ' User: opc_adm' from dual;
349 select ' Report Name: Nodes Overview' from dual;
350 select ' Report Script: /etc/opt/OV/share/conf/OpC/mgmt_sv/reports/C/unmanaged_nodes.sql' from dual;
351 select ' ' from dual;
352 select ' ' from dual;
353 select ' <--Heartbeat-->' from dual;
354 select 'Node Machine Type Node Type Lic Vir Flag Type Agt' from dual;
355 select '-------------------------------------------------------------------------------- ------------------------- ------------ --- --- ---- ------ ---' from dual;
356 select
357 nn.node_name as nn_node_name,
358 nm.machine_type_str as label,
359 DECODE(no.node_type, 0, 'Not in Realm', 1, 'Unmanaged', 2,
360 'Controlled', 3, 'Monitored', 4, 'Msg Allowed', 'Unknown') as nodetype,
361 DECODE(no.license_type, 0, 'NO', 1, 'NO', 2, 'NO', 'YES') as licensetype,
362 DECODE(no.is_virtual, 0, 'NO', 1, 'YES', 'YES') as isvirtual,
363 DECODE(no.heartbeat_flag, 0, 'NO', 'YES ') as hb_flag,
364 DECODE(mod(no.heartbeat_type,4), 0, 'None', 1, 'RPC', 2, 'Ping',
365 'Normal') as hb_type,
366 DECODE(floor(no.heartbeat_type/4), 0, 'NO', 'YES') as hb_agent
367 from
368 opc_nodes no,
369 opc_node_names nn,
370 opc_net_machine nm
371 where
372 no.node_id = nn.node_id
373 and nn.network_type = nm.network_type
374 and no.machine_type = nm.machine_type
375 and no.node_type = 1
376 order by
377 nn_node_name;
378 select
379 np.pattern as nn_node_name,
380 'Node for ext. events' as label,
381 DECODE(no.node_type, 0, 'Not in Realm', 1, 'Unmanaged', 2,
382 'Controlled', 3, 'Monitored', 4, 'Msg Allowed ', 'Unknown') as nodetype,
383 DECODE(no.license_type, 0, 'NO', 1, 'NO', 2, 'NO', 'YES') as licensetype,
384 '---','--- ', '------','---'
385 from
386 opc_nodes no,
387 opc_node_pattern np
388 where
389 no.node_id = np.pattern_id
390 and no.node_type = 1
391 order by
392 nn_node_name;
393 quit;
394 aqlplus
395 sqlplus
396 sqlplus
397 exit
398 sqlplus opc_op/opc_op@//d2lsenpsh212:1521/openview
399 sqlplus opc_op/opc_op@openview
400 sqlplus
401 exit
402 dir
403 sqlplus
404 exit
405 sqlplus
406 exit
407 sqlplus
408 cd /etc/init.d
409 dir
410 ./ovoracle status
411 ovoracle start
412 exit
413 dir
414 dir
415 ll =a
416 dir
417 ls -l
418 ls -al
419 vi .bash_profile
420 exit
421 vi .bash_profile
422 vi OVTrcSrv
423 pwd
424 cd /etc/init.d
425 dir
426 vi ovoracle
427 ./ovoracle
428 ovoracle start_msg
429 ovoracle start
430 exit
431 echo $ORACLE_HOME
432 exit
433 sqlplus
434 cd /u01/app/oracle/product/
435 dir
436 cd 11.2.0.3
437 dir
438 vi initOPENVIEW.ora
439 vi initopenview.ora
440 vi init.ora
441 vi /u01/oradata/openview/control03.ctl
442 echo $PATH
443 vi /etc/oratab
444 sqlplus
445 ex
446 sqlplus
447 pwd
448 ll
449 ./sqlplus
450 cd sqlplus
451 dir
452 ll
453 cd bin
454 dir
455 idr
456 ll
457 cd ../
458 dir
459 ll
460 cd admin
461 idr
462 ll
463 dir
464 cd ../
465 ll
466 cd ../
467 ll
468 cd network
469 dir
470 ll
471 cd admin
472 dir
473 ll
474 more listener.ora
475 ll
476 more shrept.lst
477 ls
478 ll
479 more sqlnet.ora
480
481 e
482 more /u01/app/oracle/product/11.2.0.3/network/log
483 pwd
484 cd ../log
485 ll
486 cd ../admin
487 dir
488 ll
489 more tnsnav.ora
490
491 ll
492 more tnsnames.ora
493
494 lsnrctl start
495 exit
496 dir
497 vi .bash_profile
498 sqlplus
499 lsnrctl status
500 lsnrctl stop
501 lsnrctl start
502 exit
503 ls
504 lsnrctl status
505 more /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
506 tail -50 /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
507 exit
508 lsnrctl status
509 llsnrctl stop
510 lsnrctl status
511 lsnrctl stop
512 lsnrctl start
513 lsnrctl stop
514 lsnrctl status
515 cd /u01/app/oracle/product/
516 ls
517 cd 11.2.0.2
518 dir
519 cd network/
520 dir
521 ll
522 lsnrctl status
523 l
524 ll
525 cd admin
526 dir
527 ll
528 vi listener.ora
529 pwd
530 cd ../../11.2.0.3
531 pwd
532 cd ../../../11.2.0.3
533 dir
534 cd admin
535 cd admin
536 ls
537 cd network
538 cd admin
539 ll
540 vi listener.ora
541 cd
542 ll -al
543 vi .bash_profile
544 cd /opt/OV/OMU/adminUI/
545 exit
546 lsnrctl start
547 pwd
548 exit
549 lsnrcltl status
550 lsnrcltl status
551 lsnrctl status
552 exit
553 exit
554 sqlplus
555 exit
556 sqlplus
557 opcsv -status
558 exit
559 sqlplus
560 vi /etc/hosts
561 exit
562 sqlplus
563 exit
564 sqlplus / as sysdba
565 sql
566 opcsv -start
567 exit
568 sql
569 exit
570 pwd
571 scripts
572 ls
573 alog
574 rman taget /
575 rman target /
576 sql
577 rman target /
578 df -h
579 sql
580 alog
581 sql
582 sqlplus
583 exit
584 lsnrctl start
585 more /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
586 tail -200 /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
587 sql
588 pwd
589 exit
590 lsnrctl -status
591 ps -ef | grep 1521
592 exit
593 lsnrctl start
594 exit
595 lsnrctl status
596 lsnrctl start
597 lsnrctl status
598 alog
599 lsnrctl
600 cd /u01/app/oracle/product/11.2.0.3/network/admin/
601 ls -al
602 vi listener.ora
603 cd /u01/app/oracle/product/11.2.0.3/network/log
604 LS -AL
605 ls -al
606 pwd
607 ls -al
608 lsnrctl start
609 cd /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert
610 ls -al
611 more log.xml
612 ls -al
613 vi log.xml
614 lsnrctl status
615 lsnrctl start
616 vi /etc/hosts
617 alog
618 lsnrctl
619 ls -al /etc/hosts
620 vi /etc/hosts
621 more /etc/hosts
622 lsnrctl start
623 snrctl status
624 lsnrctl status
625 exit
626 ps -ef |grep -i 11.2.0.3
627 ps -ef |grep -i 11.2.0.2
628 cd /u01/app/oracle/product
629 ls
630 mv 11.2.0.2 11.2.0.2_tobedeleted
631 exit
632 ls
633 vi .bash_profile
634 more .bash_profile
635 more echo "$ORACLE_DB"
636 echo "$ORACLE_DB"
637 echo "$ORACLE_DB" | tr -s '[:upper:]' '[:lower:]'
638 echo $bdump
639 cd /u01/app/oracle/diag/rdbms/
640 ls
641 cd openview/
642 ls
643 cd openview/
644 ls
645 cd trace/
646 dir
647 ll
648 more openview_vktm_9942.trc
649 ll
650 more openview_vktm_9942.trm
651 more openview_vktm_4572.trm
652
653 t
654 exit
655 sql
656 lsnrctl status
657 ls
658 ll
659 sql
660 exit
661 sql
662 exit
663 sql
664 sqlplus
665 exit
666 sql SYSDBA
667 sqlplus / as sysdba
668 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
669 ll
670 ll | grep "Jan 20"
671 tail alert_openview.log
672 sqlplus / as sysdba
673 pwd
674 ll | grep "Jan 20"
675 date
676 ll /u01/app/oracle/product/11.2.0.3/dbs/
677 ll /u01/app/oracle/product/11.2.0.3/srvm/admin/
678 cd /u01/oradata/
679 ll
680 cd openview/
681 ll
682 date
683 ll
684 ls
685 vi control01.ctl
686 pwd
687 cd /u01/app/oracle/admin/
688 ll
689 cd openview/
690 cd ud
691 ll
692 cd create/
693 ll
694 cd ../
695 ll
696 cd arch/
697 ll
698 cd ../
699 ll
700 cd pfile/
701 ll
702 vi initopenview.ora
703 ll
704 cd ../
705 ll
706 ll
707 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
708 ll
709 ls
710 ll | more
711
712 cd /u01/app/oracle/diag/rdbms/openview/tr
713 cd /u01/app/oracle/diag/rdbms/openview/
714 ll
715 cd openview/
716 ll
717 cd trace/
718 ll
719 ll | more
720 find / -name init.ora
721 exit
722 cd $ORACLE_HOMm
723 cd $ORACLE_HOME
724 ll
725 ls
726 cd admin
727 ls
728 pef
729 pwd
730 cd
731 pwd
732 cd /u01/app/
733 ls
734 cd ora
735 cd oracle/
736 l
737 ls
738 ll
739 cd admin
740 ll
741 cd openview/
742 ll
743 cd pfile/
744 ll
745 vi initopenview.ora
746 ll
747 ll
748 dir
749 cd ../
750 ll
751 cd create/
752 ll
753 pwd
754 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
755 ll
756 ll
757 ll | more
758 ll | more
759 more cdmp_20150102144837/
760 cd cdmp_20150102144837/
761 ll
762 cd ..
763 ll
764 ls
765 ls
766 pwd
767 cd /u01/app/oracle
768 sql
769 exit
770 ps -ef |grep -i pmon
771 sudo su -
772 sudo su -
773 exit
774 cd /u01/app/oracle/patches
775 ls
776 mkdir spujan2015
777 cd spujan2015
778 pwd
779 ls
780 unzip p19854461_112030_Linux-x86-64.zip
781 ls
782 cd 19854461
783 sql
784 lsnrctl stop
785 ps -ef |grep -i ora
786 opatch napply -skip_subset -skip_duplicate
787 cd $ORACLE_HOME/rdbms/admin
788 sql
789 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2015Feb18_15_58_28.log
790 sql
791 lsnrctl start
792 lsnrctl stat
793 alog
794 df -h
795 exit
796 sql
797 sqlplus
798 sql
799 cd /u01/app
800 ls
801 cd oracle/
802 ls
803 cd product/
804 l
805 ll
806 cd 11.2.0.3/
807 ll
808 cd network/
809 ls
810 cd admin/
811 ll
812 more sqlnet.ora
813 more /u01/app/oracle/product/11.2.0.3/network/log
814 ll /u01/app/oracle/product/11.2.0.3/network/log
815 cd ../
816 ll
817 cd admin
818 ll
819 more tnsnav.ora
820
821 ll
822 more tnsnames.ora
823 ll
824 more shrept.lst
825
826 ll
827 moe listener.ora
828 more listener.ora
829 ll /u01/app/oracle/product/11.2.0.3/network/log
830 ll
831 find / -name \*trace\*
832 esit
833 exit
834 rman
835 alog
836 sql
837 df -h
838 alog
839 sql
840 alog
841 sql
842 alog
843 who
844 cd /u01/oradata/openview/backup
845 ls -l
846 cd *
847 ls -l
848 cd *
849 ls -l
850 cd 2014_11_17
851 ls -l
852 cd ../2015_02_07
853 ls -l
854 cd ..
855 ls -=lt
856 ls -lt
857 rm -rf 2014*
858 ls -lt
859 cd 2015_01_31
860 ls
861 cd ../2015_01_15
862 ls
863 cd ../2015_01_01
864 ls -l
865 cd ../2015_01_06
866 ls
867 cd ../2015_01_02
868 ls
869 cd ../2015_01_01
870 ls
871 df -h .
872 sql
873 lsntrl
874 sql
875 exit
876 df -h
877 scripts
878 sql
879 alog
880 sql
881 sql
882 opatch lsinventory
883 opatch lsinventory
884 alog
885 oerr ora 1543
886 exit
887 cd /u01/app/oracle/patches
888 ls
889 mkdir spuapr2015
890 cd spuapr2015
891 ls
892 unzip p20299010_112030_Linux-x86-64.zip
893 df -h
894 exit
895 cd /u01/app/oracle
896 ls
897 df -h
898 cd /u01/oradata/openview
899 ls
900 cd backup
901 ls
902 cd *
903 ls
904 cd flashback
905 ls
906 ls -l
907 pwd
908 du -h .
909 pwd
910 cd /u01/app/oracle/em*/*_inst
911 cd bin
912 ./emctl start agent
913 alog
914 exit
915 lsnrctl stop
916 cd /u01/app/oracle/patches
917 ls
918 cd spuapr2015
919 ls
920 cd 20299010
921 sql
922 ps -ef |grep -i orac
923 cd /u01/app/oracle/em*
924 cd age*
925 ls
926 cd agent_inst/bin
927 ./emctl stop agent
928 ps -ef |grep -i ora
929 cd /u01/app/oracle/patches
930 ls
931 cd spuapr2015/20*
932 pwd
933 ls
934 lsnrctl stat
935 opatch napply -skip_subset -skip_duplicate
936 cd $ORACLE_HOME/rdbms/admin
937 sql
938 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2015Apr30_18_21_59.log
939 lsnrctl start
940 lsnrctl stop
941 alog
942 sql
943 lsnrctl start
944 sql
945 exit
946 patches
947 cd /u01/app/oracle/patches
948 ssh 10.236.28.32
949 ssh D2LSEUTSH032
950 exit
951 sql
952 who
953 lsnrctl stop
954 sql
955 cd
956 view .bash_profile
957 . .bash_profile
958 patches
959 mkdir spujul2015
960 cd spujul2015
961 mkdir ojvm
962 ping -a D2LSENPSH212
963 pwd
964 ls
965 unzip p20803576_112030_Linux-x86-64.zip
966 cd ojvm
967 ls
968 unzip p21068553_112030_Linux-x86-64.zip
969 cd $ORACLE_HOME
970 ls -l
971 mv OPatch OPatch_Nov172014
972 unzip p6880880_112000_Linux-x86-64.zip
973 ls -l
974 cd -
975 cd ..
976 ls
977 cd 20803576
978 sql
979 alog
980 date
981 lsnrctl stat
982 sql
983 pwd
984 opatch napply -skip_subset -skip_duplicate
985 cd $ORACLE_HOME/rdbms/admin
986 sql
987 cd ojvm
988 cd -
989 cd ojvm
990 cd ../ojvm
991 ls
992 cd 21*
993 ls
994 opatch apply
995 cd $ORACLE_HOME/sqlpatch/21068553
996 sql
997 lsnnrctl start
998 lsnrctl start
999 who
1000 exit
1001 which version
1002 sql
1003 history
oracle@D2LSENPSH212[openview]#
STEPS for DATABASE CHANGE IMPLEMENTATION
=====================================
1. OPEN an RFC in remedy
2. Request approval for infrastructure change from ICCB (Infrastructure Change Control Board)=>approval gotten
3. The DBA sends out an email to all stakeholders of the affected SERVER/DATABASE (e.g. the Unix team, the APPLICATION support team) to notify them of the upcoming change
4. The DBA asks the APPLICATION TEAM to shut down all their applications on the server/database > the APPs TEAM notifies the DBA when done, to go ahead
5. The DBA acts on the APPs team's go-ahead to EFFECT/IMPLEMENT the CHANGE (e.g. applying OJVM patching) > the DBA verifies that the server/database is working perfectly after the change
6. The DBA then notifies the different stakeholders of the server/database (e.g. APPs TEAM, UNIX team) to test their applications and make sure everything is back up and running perfectly after the patch
7. The APPs team confirms to the DBA whether all is working well or not (via email)
NOTE: AFTER change has been implemented by DBA e.g. patching, take a screenshot or copy-paste registry history highlighting the change
NOTE: (4mTRB-Bruce for NPPD customer)MICROSOFT PATCH doesn't usually specify whether REBOOT is needed or NOT for the servers during PATCHING
=>That's why we first TEST patch in TEST env/TEST Lab>Test PATCH in GSS env(owned by HP)>before applying PATCH in COMPONENT (production)
PERFORMANCE TUNING
==================
1. How would you approach database performance?: By identifying bottlenecks and fixing them.
2. How do you force the optimizer to use a new plan?: By using SQL plan management: first enable baseline capture (alter session set optimizer_capture_sql_plan_baselines = true;), then promote the desired plan to an accepted baseline.
3. Difference between local and global index: A global index is a one-to-many relationship, allowing one index partition to map to many table partitions, while a local index is a one-to-one mapping between an index partition and a table partition.
4. What is the difference between db file sequential read and db file scattered read?: A db file sequential read is a single-block read (typically an index access) in which the session waits for one block to be read into the SGA; a db file scattered read is a multiblock read (typically a full table scan or index fast full scan) whose blocks are scattered into non-contiguous buffers in the SGA.
5. Difference between nested loop joins and hash joins: A nested loops join can look up rows in the inner (probed) row source based on values retrieved from the outer (driving) row source; a hash join cannot - it builds a hash table on the smaller row source and probes it with rows from the larger one.
6. What factors do you consider when creating indexes on tables? How do you select the column for an index?: Index selective columns that appear frequently in WHERE clauses and join conditions. For covering indexes: non-key columns are defined in the INCLUDE clause of the CREATE INDEX statement, and non-key columns can only be defined on non-clustered indexes on tables or indexed views (SQL Server covering-index guidance; see the notes further below).
7. If you were involved at the early stages of database development and coding, what are some of the measures you would suggest for optimal performance?: Follow the Oracle performance improvement method: get candid feedback from users and set performance goals, gather operating system, database, and application statistics for both good and bad periods, sanity-check the operating systems, check for the top ten most common Oracle mistakes, build a conceptual model of what is happening, then propose and apply remedies one at a time and measure the effect (the full steps are listed under "Steps in The Oracle Performance Improvement Method" below).
8. Is creating an index online possible?: Yes (CREATE INDEX ... ONLINE).
9. What is the difference between Redo, Rollback and Undo?: Redo log files record changes to the database made by transactions and internal Oracle server actions and are used to roll the database forward during recovery; undo records store the before-images needed to roll back uncommitted changes and to provide read consistency. "Rollback segments" and "undo" are used interchangeably: rollback segments are the older, manually managed mechanism, replaced from Oracle9i onward by automatic undo management in an undo tablespace.
What is Row Chaining and Row Migration?: A chained row is too large to fit in a single block and is stored across several blocks; a migrated row no longer fits in its original block after an update and is moved to another block, leaving a forwarding pointer behind (see the fuller answer below).
10. How to find out background processes?: select sid, process, program from v$session s join v$bgprocess using (paddr) where s.status = 'ACTIVE' and rownum < 5;
11. How to find background processes from the OS: $ ps -ef | grep ora_ | grep <SID>
12. How do you troubleshoot connectivity issues?: Verify that TNS_ADMIN points to the correct directory and that the connect identifiers (service names/SIDs) exist in the tnsnames.ora file; use tnsping to test name resolution and check sqlnet.log for errors.
13. Why are bind variables important?: Bind variables avoid repeated hard parsing and greatly reduce the stress on the shared pool. Can you force literals to be converted into bind variables?: Yes, with CURSOR_SHARING=FORCE.
14. What is adaptive cursor sharing?: It allows the optimizer to generate a set of plans that are optimal for different sets of bind values.
15. In Data Pump, if you restart a job, how does it know where to resume?: By attaching to the named job. That is: expdp system/manager attach="Job_Name"
1. How would you approach database performance: http://docs.oracle.com/cd/B19306_01/server.102/b14211/technique.htm#i11146
Oracle performance methodology involves identifying bottlenecks and fixing them. It is recommended that changes be made to a system only after you have confirmed that there is a bottleneck. Performance problems generally result from either a lack of throughput, unacceptable user/job response time, or both.
Before looking at any database or operating system statistics, it is crucial to get feedback from the most important components of the system: the users of the system and the people ultimately paying for the application. Typical user feedback includes statements like the following:
· "The online performance is so bad that it prevents my staff from doing their jobs."
· "The billing run takes too long."
· "When I experience high amounts of Web traffic, the response time becomes unacceptable, and I am losing customers."
· "I am currently performing 5000 trades a day, and the system is maxed out. Next month, we roll out to all our users, and the number of trades is expected to quadruple."
From candid feedback, it is easy to set critical success factors for any performance work. Determining the performance targets and the performance engineer's exit criteria make managing the performance process much simpler and more successful at all levels. These critical success factors are better defined in terms of real business goals rather than system statistics.
Some real business goals for these typical user statements might be:
· "The billing run must process 1,000,000 accounts in a three-hour window."
· "At a peak period on a Web site, the response time will not exceed five seconds for a page refresh."
· "The system must be able to process 25,000 trades in an eight-hour window."
The ultimate measure of success is the user's perception of system performance. The performance engineer's role is to eliminate any bottlenecks that degrade performance. These bottlenecks could be caused by inefficient use of limited shared resources or by abuse of shared resources, causing serialization. Because all shared resources are limited, the goal of a performance engineer is to maximize the number of business operations with efficient use of shared resources. At a very high level, the entire database server can be seen as a shared resource. Conversely, at a low level, a single CPU or disk can be seen as shared resources.
The Oracle performance improvement method can be applied until performance goals are met or deemed impossible. This process is highly iterative, and it is inevitable that some investigations will be made that have little impact on the performance of the system. It takes time and experience to develop the necessary skills to accurately pinpoint critical bottlenecks in a timely manner. However, prior experience can sometimes work against the experienced engineer who neglects to use the data and statistics available to him. It is this type of behavior that encourages database tuning by myth and folklore. This is a very risky, expensive, and unlikely to succeed method of database tuning.
The Automatic Database Diagnostic Monitor (ADDM) implements parts of the performance improvement method and analyzes statistics to provide automatic diagnosis of major performance issues. Using ADDM can significantly shorten the time required to improve the performance of a system. See Chapter 6, "Automatic Performance Diagnostics" for a description of ADDM.
Steps in The Oracle Performance Improvement Method
Perform the following initial standard checks:
1. Get candid feedback from users. Determine the performance project's scope and subsequent performance goals, as well as performance goals for the future. This process is key in future capacity planning.
2. Get a full set of operating system, database, and application statistics from the system when the performance is both good and bad. If these are not available, then get whatever is available. Missing statistics are analogous to missing evidence at a crime scene: They make detectives work harder and it is more time-consuming.
3. Sanity-check the operating systems of all systems involved with user performance. By sanity-checking the operating system, you look for hardware or operating system resources that are fully utilized. List any over-used resources as symptoms for analysis later. In addition, check that all hardware shows no errors or diagnostics.
4. Check for the top ten most common mistakes with Oracle, and determine if any of these are likely to be the problem. List these as symptoms for later analysis. These are included because they represent the most likely problems. ADDM automatically detects and reports nine of these top ten issues. See Chapter 6, "Automatic Performance Diagnostics" and "Top Ten Mistakes Found in Oracle Systems".
5. Build a conceptual model of what is happening on the system using the symptoms as clues to understand what caused the performance problems. See "A Sample Decision Process for Performance Conceptual Modeling".
6. Propose a series of remedy actions and the anticipated behavior to the system, then apply them in the order that can benefit the application the most. ADDM produces recommendations each with an expected benefit. A golden rule in performance work is that you only change one thing at a time and then measure the differences. Unfortunately, system downtime requirements might prohibit such a rigorous investigation method. If multiple changes are applied at the same time, then try to ensure that they are isolated so that the effects of each change can be independently validated.
7. Validate that the changes made have had the desired effect, and see if the user's perception of performance has improved. Otherwise, look for more bottlenecks, and continue refining the conceptual model until your understanding of the application becomes more accurate.
8. Repeat the last three steps until performance goals are met or become impossible due to other constraints
ADDM
For a quick and easy approach to performance tuning, use the Automatic Database Diagnostic Monitor (ADDM). ADDM automatically monitors your Oracle system and provides recommendations for solving performance problems should problems occur. For example, suppose a DBA receives a call from a user complaining that the system is slow. The DBA simply examines the latest ADDM report to see which of the recommendations should be implemented to solve the problem. See Chapter 6, "Automatic Performance Diagnostics" for information on the features that help monitor and diagnose Oracle systems
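For example (a minimal sketch, assuming AWR/ADDM is available and licensed; the begin/end snapshot IDs and the report file name are prompted for and are not taken from this document), an ADDM report can be generated from SQL*Plus with the script shipped under $ORACLE_HOME/rdbms/admin:
SQL> @?/rdbms/admin/addmrpt.sql
-- prompts for the begin and end AWR snapshot IDs and a report file name,
-- then writes the ADDM findings and recommendations to that file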
MANUAL PERFORMANCE TUNING DIAGNOSIS
The following steps illustrate how a performance engineer might look for bottlenecks without using automatic diagnostic features. These steps are only intended as a guideline for the manual process. With experience, performance engineers add to the steps involved. This analysis assumes that statistics for both the operating system and the database have been gathered.
1. Is the response time/batch run time acceptable for a single user on an empty or lightly loaded system?
If it is not acceptable, then the application is probably not coded or designed optimally, and it will never be acceptable in a multiple user situation when system resources are shared. In this case, get application internal statistics, and get SQL Trace and SQL plan information. Work with developers to investigate problems in data, index, transaction SQL design, and potential deferral of work to batch/background processing.
2. Is all the CPU being utilized?
If the kernel utilization is over 40%, then investigate the operating system for network transfers, paging, swapping, or process thrashing. Otherwise, move on to CPU utilization in user space. Check to see if there are any non-database jobs consuming CPU on the system limiting the amount of shared CPU resources, such as backups, file transforms, print queues, and so on. After determining that the database is using most of the CPU, investigate the top SQL by CPU utilization. These statements form the basis of all future analysis. Check the SQL and the transactions submitting the SQL for optimal execution. Oracle provides CPU statistics in V$SQL and V$SQLSTATS.
See Also:
Oracle Database Reference for more information on V$SQL and V$SQLSTATS
If the application is optimal and there are no inefficiencies in the SQL execution, consider rescheduling some work to off-peak hours or using a bigger system.
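As a quick illustration of finding the top SQL by CPU (a minimal sketch; the column list and the rownum limit of 10 are arbitrary choices, not from the text above):
select *
  from (select sql_id, executions, cpu_time, elapsed_time
          from v$sqlstats
         order by cpu_time desc)
 where rownum <= 10;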
3. At this point, the system performance is unsatisfactory, yet the CPU resources are not fully utilized.
In this case, you have serialization and unscalable behavior within the server. Get the WAIT_EVENTS statistics from the server, and determine the biggest serialization point. If there are no serialization points, then the problem is most likely outside the database, and this should be the focus of investigation. Elimination of WAIT_EVENTS involves modifying application SQL and tuning database parameters. This process is very iterative and requires the ability to drill down on the WAIT_EVENTS systematically to eliminate serialization points.
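For instance, the biggest system-wide wait/serialization points can be listed from V$SYSTEM_EVENT (a minimal sketch; excluding the Idle wait class is an assumption about what is of interest):
select event, total_waits, time_waited
  from v$system_event
 where wait_class <> 'Idle'
 order by time_waited desc;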
Top Ten Mistakes Found in Oracle Systems
This section lists the most common mistakes found in Oracle systems. By following the Oracle performance improvement methodology, you should be able to avoid these mistakes altogether. If you find these mistakes in your system, then re-engineer the application where the performance effort is worthwhile. See "Automatic Performance Tuning Features" for information on the features that help diagnose and tune Oracle systems. See Chapter 10, "Instance Tuning Using Performance Views" for a discussion on how wait event data reveals symptoms of problems that can be impacting performance.
1. Bad Connection Management
The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance, and is totally unscalable.
2. Bad Use of Cursors and the Shared Pool
Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact in performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.
3. Bad SQL
Bad SQL is SQL that uses more resources than appropriate for the application requirement. This can be a decision support systems (DSS) query that runs for more than 24 hours or a query from an online application that takes more than a minute. SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high load SQL and the SQL tuning advisor can be used to provide recommendations for improvement. See Chapter 6, "Automatic Performance Diagnostics" and Chapter 12, "Automatic SQL Tuning".
4. Use of Nonstandard Initialization Parameters
These might have been implemented based on poor advice or incorrect assumptions. Most systems will give acceptable performance using only the set of basic parameters. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great deal of problems that can require considerable investigation.
Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.
See Also:
· Oracle Database Administrator's Guide for information on initialization parameters and database creation
· Oracle Database Reference for details on initialization parameters
· "Performance Considerations for Initial Instance Configuration" for information on parameters and settings in an initial instance configuration
5. Getting Database I/O Wrong
Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth. See Chapter 8, "I/O Configuration and Design".
6. Redo Log Setup Problems
Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and the database will wait for the archive process to catch up. See Chapter 4, "Configuring a Database for Performance" for information on sizing redo logs for performance.
7. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments
This is particularly common in INSERT-heavy applications, in applications that have raised the block size above 8K, or in applications with large numbers of active users and few rollback segments. Use automatic segment-space management (ASSM) and automatic undo management to solve this problem.
8. Long Full Table Scans
Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.
9. High Amounts of Recursive (SYS) SQL
Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Use locally managed tablespaces to reduce recursive SQL due to extent allocation. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.
10. Deployment and Migration Errors
In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package. Although these errors are not directly detected by ADDM, ADDM highlights the resulting high load SQL.
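As a hedged sketch of the DBMS_STATS step mentioned above (the SCOTT schema and the STATS_TAB table name are placeholders), schema statistics can be exported into a statistics table that is then moved with the application:
exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'STATS_TAB');
exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATS_TAB');
-- STATS_TAB can then be exported/imported to the target database and loaded with dbms_stats.import_schema_stats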
3.2 Emergency Performance Methods
This section provides techniques for dealing with performance emergencies. You have already had the opportunity to read about a detailed methodology for establishing and improving application performance. However, in an emergency situation, a component of the system has changed to transform it from a reliable, predictable system to one that is unpredictable and not satisfying user requests.
In this case, the role of the performance engineer is to rapidly determine what has changed and take appropriate actions to resume normal service as quickly as possible. In many cases, it is necessary to take immediate action, and a rigorous performance improvement project is unrealistic.
After addressing the immediate performance problem, the performance engineer must collect sufficient debugging information either to get better clarity on the performance problem or to at least ensure that it does not happen again.
The method for debugging emergency performance problems is the same as the method described in the performance improvement method earlier in this book. However, shortcuts are taken in various stages because of the timely nature of the problem. Keeping detailed notes and records of facts found as the debugging process progresses is essential for later analysis and justification of any remedial actions. This is analogous to a doctor keeping good patient notes for future reference.
3.2.1 Steps in the Emergency Performance Method
The Emergency Performance Method is as follows:
1. Survey the performance problem and collect the symptoms of the performance problem. This process should include the following:
· User feedback on how the system is underperforming. Is the problem throughput or response time?
· Ask the question, "What has changed since we last had good performance?" This answer can give clues to the problem. However, getting unbiased answers in an escalated situation can be difficult. Try to locate some reference points, such as collected statistics or log files, that were taken before and after the problem.
· Use automatic tuning features to diagnose and monitor the problem. See "Automatic Performance Tuning Features" for information on the features that help diagnose and tune Oracle systems. In addition, you can use Oracle Enterprise Manager performance features to identify top SQL and sessions.
2. Sanity-check the hardware utilization of all components of the application system. Check where the highest CPU utilization is, and check the disk, memory usage, and network performance on all the system components. This quick process identifies which tier is causing the problem. If the problem is in the application, then shift analysis to application debugging. Otherwise, move on to database server analysis.
3. Determine if the database server is constrained on CPU or if it is spending time waiting on wait events. If the database server is CPU-constrained, then investigate the following:
· Sessions that are consuming large amounts of CPU at the operating system level and database; check V$SESS_TIME_MODEL for database CPU usage
· Sessions or statements that perform many buffer gets at the database level; check V$SESSTAT and V$SQLSTATS
· Execution plan changes causing sub-optimal SQL execution; these can be difficult to locate
· Incorrect setting of initialization parameters
· Algorithmic issues as a result of code changes or upgrades of all components
If the database sessions are waiting on events, then follow the wait events listed in V$SESSION_WAIT to determine what is causing serialization. The V$ACTIVE_SESSION_HISTORY view contains a sampled history of session activity which can be used to perform diagnosis even after an incident has ended and the system has returned to normal operation. In cases of massive contention for the library cache, it might not be possible to logon or submit SQL to the database. In this case, use historical data to determine why there is suddenly contention on this latch. If most waits are for I/O, then examine V$ACTIVE_SESSION_HISTORY to determine the SQL being run by the sessions that are performing all of the inputs and outputs. See Chapter 10, "Instance Tuning Using Performance Views" for a discussion on wait events.
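As an illustration of using the sampled history (a minimal sketch; the one-hour window and the User I/O filter are arbitrary choices), the SQL behind recent I/O waits can be pulled from V$ACTIVE_SESSION_HISTORY:
select sql_id, count(*) samples
  from v$active_session_history
 where sample_time > sysdate - 1/24
   and wait_class = 'User I/O'
 group by sql_id
 order by samples desc;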
4. Apply emergency action to stabilize the system. This could involve actions that take parts of the application off-line or restrict the workload that can be applied to the system. It could also involve a system restart or the termination of jobs in process. These naturally have service level implications.
5. Validate that the system is stable. Having made changes and restrictions to the system, validate that the system is now stable, and collect a reference set of statistics for the database. Now follow the rigorous performance method described earlier in this book to bring back all functionality and users to the system. This process may require significant application re-engineering before it is complete.
From <http://docs.oracle.com/cd/B19306_01/server.102/b14211/technique.htm>
2. How do you force the optimizer to use a new plan: http://www.oracle.com/technetwork/issue-archive/2009/09-mar/o29spm-092092.html
TECHNOLOGY: SQL
Baselines and Better Plans
By Arup Nanda
Use SQL plan management in Oracle Database 11g to optimize execution plans.
Have you ever been in a situation in which some database queries that used to behave well suddenly started performing poorly? More likely than not, you traced the cause back to a change in the execution plan. Further analysis may have revealed that the performance change was due to newly collected optimizer statistics on the tables and indexes referred to in those queries.
And thoroughly humbled by this situation, have you ever made a snap decision to stop statistics collection? This course of action keeps the execution plans pretty much the same for those queries, but it makes other things worse. Performance of some other queries, or even the same queries with different predicates (the WHERE clauses), deteriorates because of suboptimal execution plans generated from stale statistics.
Whatever action you take next carries some risk, so how can you mitigate that risk and ensure that the execution plans for the SQL statements generated are optimal while maintaining a healthy environment in which optimizer statistics are routinely collected and all SQL statements perform well without significant changes (such as adding hints)? You may resort to using stored outlines to freeze the plan, but that also means that you're preventing the optimizer from generating potentially beneficial execution plans.
In Oracle Database 11g, using the new SQL plan management feature, you can now examine how execution plans change over time, have the database verify new plans by executing them before using them, and gradually evolve better plans in a controlled manner.
SQL Plan Management
When SQL plan management is enabled, the optimizer stores generated execution plans in a special repository, the SQL management base. All stored plans for a specific SQL statement are said to be part of a plan history for that SQL statement.
Some of the plans in the history can be marked as "accepted." When the SQL statement is reparsed, the optimizer considers only the accepted plans in the history. This set of accepted plans for that SQL statement is called a SQL plan baseline, or baseline for short.
The optimizer still tries to generate a better plan, however. If the optimizer does generate a new plan, it adds it to the plan history but does not consider it while reparsing the SQL, unless the new plan is better than all the accepted plans in the baseline. Therefore, with SQL plan management enabled, SQL statements will never suddenly have a less efficient plan that results in worse performance.
With SQL plan management, you can examine all the available plans in the plan history for a SQL statement, compare them to see their relative efficiency, promote a specific plan to accepted status, and even make a plan the permanent (fixed) one.
This article will show you how to manage SQL plan baselines—including capturing, selecting, and evolving baselines—by using Oracle Enterprise Manager and SQL from the command line to ensure the optimal performance of SQL statements.
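Before the walkthrough, here is a command-line sketch of the "evolving" step mentioned above (hedged: the sql_handle value is a placeholder, and the call assumes a user with the ADMINISTER SQL MANAGEMENT OBJECT privilege). A non-accepted plan in the history can be verified and promoted with:
set long 100000
select dbms_spm.evolve_sql_plan_baseline(sql_handle => 'SQL_abc123def4567890') from dual;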
Capture
The capture function of SQL plan management captures the various optimizer plans used by SQL statements. By default, capture is disabled—that is, SQL plan management does not capture the history for the SQL statements being parsed or reparsed.
Now let's capture the baselines for some SQL statement examples coming from one session. We will use a sample schema provided with Oracle Database 11g (SH) and the SALES table in particular.
First, we enable the baseline capture in the session:
alter session
set optimizer_capture_sql_plan_baselines = true;
Now all the SQL statements executed in this session will be captured, along with their optimization plans, in the SQL management base. Every time the plan changes for a SQL statement, it is stored in the plan history. To see this, run the script shown in Listing 1, which executes exactly the same SQL but under different circumstances. First, the SQL runs with all the defaults (including an implicit default optimizer_mode = all_rows). In the next execution, the optimizer_mode parameter value is set to first_rows. Before the third execution of the SQL, we collect fresh stats on the table and the indexes
Code Listing 1: Capturing SQL plan baselines
alter session set optimizer_capture_sql_plan_baselines = true;
-- First execution. Default Environment
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
-- Change the optimizer mode
alter session set optimizer_mode = first_rows;
-- Second execution. Opt Mode changed
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
-- Gather stats now
begin
  dbms_stats.gather_table_stats (
    ownname          => 'SH',
    tabname          => 'SALES',
    cascade          => TRUE,
    no_invalidate    => FALSE,
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    granularity      => 'GLOBAL AND PARTITION',
    estimate_percent => 10,
    degree           => 4
  );
end;
/
-- Third execution. After stats
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
If the plan changes in each of the executions of the SQL in Listing 1, the different plans will be captured in the plan history for that SQL statement. (The /* ARUP */ comment easily identifies the specific SQL statements in the shared pool.)
The easiest way to view the plan history is through Oracle Enterprise Manager. From the Database main page, choose the Server tab and then click SQL Plan Control . From that page, choose the SQL Plan Baseline tab. On that page, search for the SQL statements containing the name ARUP , as in Figure 1, which shows the plan history for the SQL statements on the lower part of the screen.
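If you prefer the command line to Oracle Enterprise Manager, the same plan history can be inspected from the DBA_SQL_PLAN_BASELINES view (a minimal sketch; the LIKE filter simply reuses the /* ARUP */ marker from Listing 1):
select sql_handle, plan_name, enabled, accepted, fixed
  from dba_sql_plan_baselines
 where sql_text like '%ARUP%';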
3. Difference between local and global index:
Oracle Global Index vs. Local Index
Question: What is the difference between an Oracle global index and a local index?
Answer: When using Oracle partitioning, you can specify the "global" or "local" keyword in the create index syntax:
· Global Index: A global index is a one-to-many relationship, allowing one index partition to map to many table partitions. The documentation says that a "global index can be partitioned by the range or hash method, and it can be defined on any type of partitioned, or non-partitioned, table".
· Local Index: A local index is a one-to-one mapping between an index partition and a table partition. In general, local indexes allow for a cleaner "divide and conquer" approach for generating fast SQL execution plans with partition pruning.
For complete details, see my tips for Oracle partitioning.
Global and Local Index partitioning with Oracle
The first partitioned index method is called a LOCAL partition. A local partitioned index creates a one-for-one match between the indexes and the partitions in the table. Of course, the key value for the table partition and the value for the local index must be identical. The second method is called GLOBAL and allows the index to have any number of partitions.
The partitioning of the indexes is transparent to all SQL queries. The great benefit is that the Oracle query engine will scan only the index partition that is required to service the query, thus speeding up the query significantly. In addition, the Oracle parallel query engine will sense that the index is partitioned and will fire simultaneous queries to scan the indexes.
Local partitioned indexes
Local partitioned indexes allow the DBA to take individual partitions of a table and indexes offline for maintenance (or reorganization) without affecting the other partitions and indexes in the table.
In a local partitioned index, the key values and number of index partitions will match the number of partitions in the base table.
CREATE INDEX year_idx
ON all_fact (order_date)
LOCAL
(PARTITION name_idx1,
 PARTITION name_idx2,
 PARTITION name_idx3);
Oracle will automatically use equal partitioning of the index based upon the number of partitions in the indexed table. For example, in the above definition, if we created four indexes on all_fact, the CREATE INDEX would fail since the partitions do not match. This equal partition also makes index maintenance easier, since a single partition can be taken offline and the index rebuilt without affecting the other partitions in the table.
Global partitioned indexes
A global partitioned index is used for all other indexes except for the one that is used as the table partition key. Global partitioned indexes are common in OLTP (online transaction processing) applications where fewer index probes are required than with local partitioned indexes. In the global index partition scheme, the index is harder to maintain since the index may span partitions in the base table.
For example, when a table partition is dropped as part of a reorganization, the entire global index will be affected. When defining a global partitioned index, the DBA has complete freedom to specify as many partitions for the index as desired.
Now that we understand the concept, let's examine the Oracle CREATE INDEX syntax for a globally partitioned index:
CREATE INDEX item_idx
ON all_fact (item_nbr)
GLOBAL PARTITION BY RANGE (item_nbr)
(PARTITION city_idx1 VALUES LESS THAN (100),
 PARTITION city_idx2 VALUES LESS THAN (200),
 PARTITION city_idx3 VALUES LESS THAN (300),
 PARTITION city_idx4 VALUES LESS THAN (400),
 PARTITION city_idx5 VALUES LESS THAN (MAXVALUE));
Here, we see that the item index has been defined with five partitions, each containing a subset of the index range values. Note that it is irrelevant that the base table is in three partitions. In fact, it is acceptable to create a global partitioned index on a table that does not have any partitioning.
From <http://www.dba-oracle.com/t_global_local_partitioned_index.htm>
<https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5931711000346922149>
Thanks for the question, Aravindhan.
Submitted: January 16, 2013 - 12:53 am UTC | Last updated: August 08, 2013 - 4:56 pm UTC
Category: Database | Version: 11.1.0
QUESTION: Which Index is Better Global Or Local in Partitioned Table?
Latest Followup
You Asked
We have partitioned table based on date say startdate (Interval partition , For each day)
We will use query that will generate report based on days (like report for previous 5 days)
Also we use queries that will generate report based on hours (like report for previous 5 hours)
So there are queries that will access data within a partition and across partitions as well
So please suggest whether we should go for a global or a local index on start date
and we said...
well, if you are going to cross partitions - hitting 5 days worth of data - hopefully you would NOT be using an index at all. Hopefully you would be using a full scan of the five partitions since you are hitting every row.
If all of your queries include "startdate" in the predicate and you think you'll typically hit only a few partitions at most - it is likely you want to employ locally partitioned indexes for most all of your indexes.
And startdate doesn't need to be in all of these indexes (they do not need to be prefixed with startdate). Only when you are going after the previous N hours might you want an index that starts with startdate.
for example, suppose you have queries like:
select ....
from t
where startdate between sysdate and sysdate-5
and x > 100;
select ....
from t
where startdate between sysdate and sysdate-2
and x > 100;
it MIGHT make sense to have a locally partitioned index on X, just on X. If x > 100 returns a very small number of rows from those five partitions then an index on X and just on X would be appropriate. We will do five index range scans (which is acceptable) to find the rows.
For the second query we would just do two index range scans (again, acceptable).
You would want a globally partitioned index on X if you did queries like:
select ....
from t
where startdate between sysdate and sysdate-50
and x > 100;
select ....
from t
where x > 100;
From <https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5931711000346922149>
4. What is the difference between DB file sequential read and DB File Scattered Read? http://www.dba-oracle.com/m_cpu_time_execution.htm>
The db file sequential read wait event has three parameters: file#, first block#, and block count. In Oracle Database 10g, this wait event falls under the User I/O wait class. Keep the following key thoughts in mind when dealing with the db file sequential read wait event.
· The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
· The two important numbers to look for are the TIME_WAITED and AVERAGE_WAIT by individual sessions.
· Significant db file sequential read wait time is most likely an application issue.
From <http://logicalread.solarwinds.com/oracle-db-file-sequential-read-wait-event-part1-mc01/>
WHILE …
"The db file scattered read Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return. A db file scattered read issues a scatter-read to read the data into multiple discontinuous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.
The db file scattered read wait event identifies that a full table scan is occurring. When performing a full table scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory. This is why the corresponding wait event is called 'db file scattered read'. Multiblock (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) reads due to full table scans into the buffer cache show up as waits for 'db file scattered read'."
Furthermore, Oracle FAQ's explains that "'db file scattered read' events signify time waited for I/O read requests to complete. Time is reported in 100's of a second for Oracle 8i releases and below, and 1000's of a second for Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache or user PGA memory." Also, the difference between db file scattered read and db file sequential read is that file scattered reads, "is reading multiple data blocks and scatters them into different discontinuous buffers in the SGA."
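To see how much time a particular session has spent on these two waits, a minimal sketch (the :sid bind value is a placeholder for the session ID you are investigating):
select event, total_waits, time_waited, average_wait
  from v$session_event
 where sid = :sid
   and event like 'db file s%read';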
5. Difference between nested loop joins and hash joins: http://blog.tanelpoder.com/2010/10/06/a-the-most-fundamental-difference-between-hash-and-nested-loop-joins/>
· Hash joins can not look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source, nested loops can.
Nested loops, Hash join and Sort Merge joins – difference?
Nested loop (loop over loop) : http://oracle-online-help.blogspot.com/2007/03/nested-loops-hash-join-and-sort-merge.html>
In this algorithm, an outer loop is formed over the row source that has few entries, and for each entry an inner loop is processed.
Ex:
Select tab1.*, tab2.* from tab1, tab2 where tab1.col1 = tab2.col2;
It is processed like:
For i in (select * from tab1) loop
For j in (select * from tab2 where col2=i.col1) loop
Display results;
End loop;
End loop;
The steps involved in doing a nested loop join are:
a) Identify the outer (driving) table.
b) Assign the inner (driven) table to the outer table.
c) For every row of the outer table, access the rows of the inner table.
In execution plan it is seen like this:
NESTED LOOPS
outer_loop
inner_loop
When does the optimizer use nested loops?
The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time for a new value from the outer table.
The optimizer may not use a nested loop when:
1. The number of rows in both tables is quite high
2. The inner query always results in the same set of records
3. The access path of the inner table is independent of the data coming from the outer table.
Note: You will see more use of nested loops with the FIRST_ROWS optimizer mode, as it works on the model of showing results to the user instantly as they are fetched. There is no need to select and cache any data before it is returned to the user. In the case of a hash join this is needed, as explained below.
Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory and then scans the larger table, comparing the hash value (of rows from the large table) with this hash table to find the joined rows.
The algorithm of a hash join is divided into two parts:
1. Build an in-memory hash table on the smaller of the two tables.
2. Probe this hash table with the hash value of each row of the second table.
In simpler terms it works like
Build phase
For each row RW1 in small (left/build) table loop
Calculate hash value on RW1 join key
Insert RW1 in appropriate hash bucket.
End loop;
Probe Phase
For each row RW2 in big (right/probe) table loop
Calculate the hash value on RW2 join key
For each row RW1 in hash table loop
If RW1 joins with RW2
Return RW1, RW2
End loop;
End loop;
When does the optimizer use a hash join?
The optimizer uses a hash join when joining big tables, or when a big fraction of a small table is being joined.
Unlike a nested loop, the output of a hash join is not instantaneous, because the join is blocked until the hash table has been built.
Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results after all the rows of at least one of the tables have been hashed into the hash table.
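As a hedged illustration (the emp and dept tables and their join columns are assumed for the example, not taken from this document), the two join methods can be compared by hinting the same query both ways and inspecting the execution plans:
-- force a nested loops join, driving from dept
select /*+ leading(d) use_nl(e) */ e.ename, d.dname
  from dept d join emp e on e.deptno = d.deptno
 where d.loc = 'DALLAS';
-- force a hash join, with dept as the build (hash) table
select /*+ leading(d) use_hash(e) */ e.ename, d.dname
  from dept d join emp e on e.deptno = d.deptno
 where d.loc = 'DALLAS';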
6. What factors do you consider when creating indexes on tables? How do you select the column for an index? = desc DBA_IND_COLUMNS
SQL> desc dba_ind_columns
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INDEX_OWNER                               NOT NULL VARCHAR2(30)
 INDEX_NAME                                NOT NULL VARCHAR2(30)
 TABLE_OWNER                               NOT NULL VARCHAR2(30)
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(4000)
 COLUMN_POSITION                           NOT NULL NUMBER
 COLUMN_LENGTH                             NOT NULL NUMBER
 CHAR_LENGTH                                        NUMBER
 DESCEND                                            VARCHAR2(4)
From <https://community.oracle.com/thread/1099106>
When you are creating a covering index you should keep in mind some guidelines:
· Non-key columns are defined in the INCLUDE clause of the CREATE INDEX statement.
· Non-key columns can only be defined on non-clustered indexes on tables or indexed views.
· All data types are allowed except text, ntext, and image.
· Computed columns that are deterministic and either precise or imprecise can be included columns.
· As with key columns, computed columns derived from image, ntext, and text data types can be non-key (included) columns as long as the computed column data type is allowed as a non-key index column.
· Column names cannot be specified in both the INCLUDE list and in the key column list.
· Column names cannot be repeated in the INCLUDE list.
· A maximum of 1023 additional columns can be used as non-key columns (a table can have a maximum of 1024 columns).
The performance benefit gained by using covering indexes is typically greatest for queries that return a large number of rows (such queries are called non-selective queries). For queries that return only a small number of rows the benefit is small. What counts as a small number of rows depends on the table: it could be 10 rows for a table with hundreds of rows, or 1,000 rows for a table with 1,000,000 rows.
Building Indexes in Ascending vs Descending Order
When you are creating indexes, the default options are often used, which create the index in ascending order. This is usually the most logical way of creating an index, but in some cases it is not the best approach. For example, when you create an index on ColumnA of TableA using the default options, the newest data is at the end. This works perfectly when you want to get data in ascending order, from the least recent at the top to the most recent at the end. But what if you need the most recent data at the top? In that case you can create the index in descending order. The original article's examples show how to create indexes in different orders and how they affect query performance, using the PurchasingOrderHeader table of the AdventureWorks2008R2 database.
From <http://www.codeproject.com/Articles/234399/Database-performance-optimization-part-Indexing>
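The ascending/descending point applies to Oracle as well. As a minimal sketch (the orders table and order_date column are hypothetical), an index can be built in descending order so that range scans return the most recent rows first:
create index orders_date_desc_idx on orders (order_date desc);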
7. If you were involved at the early stages of database development and coding, what are some of the measures you would suggest for optimal performance?
8. Is creating an index online possible?http://docs.oracle.com/cd/B28359_01/server.111/b28310/indexes003.htm>
You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
The following statements illustrate online index build operations:
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
Note:
Keep in mind that the time it takes an online index build to complete is proportional to the size of the table and the number of concurrently executing DML statements. Therefore, it is best to start online index builds when DML activity is low.
See Also:
"Rebuilding an Existing Index"
9. What is the difference between Redo, Rollback and Undo?https://oraclenz.wordpress.com/2008/06/22/what-is-the-difference-between-rollback-and-undo-tablespace-otn-forum-by-user-user503050/>
REDO
Redo log files record changes to the database as a result of transactions and internal Oracle server actions. (A transaction is a logical unit of work, consisting of one or more SQL statements run by a user.)
Redo log files protect the database from the loss of integrity because of system failures caused by power outages, disk failures, and so on.
Redo log files must be multiplexed to ensure that the information stored in them is not lost in the event of a disk failure.
The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The LogWriter (LGWR) process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion.
<https://oraclenz.wordpress.com/2008/06/22/differences-between-undo-and-redo/>
There might be confusion because the terms undo and rollback segment are used interchangeably in the DB world. This is due to Oracle's backward compatibility.
Undo
Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.
Undo records are used to:
· Roll back transactions when a ROLLBACK statement is issued
· Recover the database
· Provide read consistency
· Analyze data as of an earlier point in time by using Flashback Query
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
Undo vs Rollback
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends (Oracle 9i onwards) using an undo tablespace (automatic undo management) to manage undo rather than rollback segments.
To see the undo management mode and other undo related information of database-
SQL> show parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
Since the advent of Oracle9i, less time-consuming and suggested way is—using Automatic Undo Management—in which Oracle Database creates and manages rollback segments (now called “undo segments”) in a special-purpose undo tablespace. Unlike with rollback segments, we don’t need to create or manage individual undo segments—Oracle Database does that for you when you create the undo tablespace. All transactions in an instance share a single undo tablespace. Any executing transaction can consume free space in the undo tablespace, and when the transaction completes, its undo space is freed (depending on how it’s been sized and a few other factors, like undo retention). Thus, space for undo segments is dynamically allocated, consumed, freed, and reused—all under the control of Oracle Database, rather than manual management by someone.
Switching Rollback to Undo
1. We have to create an Undo tablespace. Oracle provides a function (10g and up) that provides information on how to size new undo tablespace based on the configuration and usage of the rollback segments in the system.
DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  -- set serveroutput on in SQL*Plus to see the suggested size
  DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size (MB): ' || utbsiz_in_MB);
END;
/
CREATE UNDO TABLESPACE UNDOTBS
DATAFILE '/oradata/dbf/undotbs_1.dbf'
SIZE 100M AUTOEXTEND ON NEXT 10M
MAXSIZE UNLIMITED RETENTION NOGUARANTEE;
Note: In undo tablespace creation, “SEGMENT SPACE MANAGEMENT AUTO” can not be set
2. Change system parameters
SQL> alter system set undo_retention=900 scope=both;
SQL> alter system set undo_tablespace=UNDOTBS scope=both;
SQL> alter system set undo_management=AUTO scope=spfile;
SQL> shutdown immediate
SQL> startup
UNDO_MANAGEMENT is a static parameter. So database needs to be restarted.
What is Row Chaining and Row Migration?: Row chaining occurs when a row is too large to fit into a single data block, so Oracle stores it in a chain of two or more blocks. Row migration occurs when an updated row no longer fits into its original block; Oracle moves the whole row to a new block and leaves a forwarding pointer in the original block, so the ROWID stays the same but an extra I/O is needed to reach the row.
10. How to find out background processes? http://dba.stackexchange.com/questions/41142/how-to-check-which-background-process-are-running-in-my-oracle-database
select sid, process, program
  from v$session s join v$bgprocess using (paddr)
 where s.status = 'ACTIVE'
   and rownum < 5;

PROCESS                  PROGRAM
------------------------ ----------------------------------------------------------------
21332                    ORACLE.EXE (PMON)
3480                     ORACLE.EXE (PSP0)
4976                     ORACLE.EXE (VKTM)
5992                     ORACLE.EXE (GEN0)

Elapsed: 00:00:00.05
To maximize performance and accommodate many users, a multiprocess Oracle database system uses background processes. Background processes are the processes running behind the scene and are meant to perform certain maintenance activities or to deal with abnormal conditions arising in the instance. Each background process is meant for a specific purpose and its role is well defined.
Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database processes to provide increased parallelism for better performance and reliability.
A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the pname column.
Not all background processes are mandatory for an instance. Some are mandatory and some are optional. Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be invoked only if the particular feature is activated.
Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, these run as separate threads within the same service. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log.
Background processes are started automatically when the instance is started.
To findout background processes from database:
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';
To findout background processes from OS:
$ ps -ef|grep ora_|grep SID
From <http://satya-dba.blogspot.com/2009/08/background-processes-in-oracle.html>
11. How to find background processes from OS: $ ps -ef|grep ora_|grep SID
12. How do you troubleshoot connectivity issues?
Oracle - Diagnosing Connection Problems
If you are having problems connecting to your Oracle database, then you should follow the following steps for diagnosing this:
· when you fail to connect, a file sqlnet.log is often created (see below). This can contain useful information about how the Oracle Client tried to connect, and the error it received.
· open a Windows command window and enter tnsping ORCL where ORCL is the name of the Oracle Service you are trying to connect to. If you are unsure of the Oracle Service name, from the AQT signon screen click on your Oracle database then click on Configure - the Oracle Service name is given in the field TNS Service Name.
tnsping will try to connect to the Oracle database, and will provide useful information about how it is doing this and the error it has received.
tnsnames.ora
The information about the Oracle service names, and how to connect to them, is given in the Oracle file tnsnames.ora. In many cases, connection problems have happened because the wrong tnsnames.ora file is being used.
Oracle looks at the following locations for tnsnames.ora:
· the directory referred to in environment variable TNS_ADMIN
· the directory ORACLE_HOME\network\admin. ORACLE_HOME is given in the ORACLE_HOME environment variable, or the Windows registry.
To complicate matters:
· a user may have multiple ORACLE_HOMEs
· Oracle products may have their own ORACLE_HOME (and thus tnsnames.ora). So SQL*PLUS may be using one tnsnames.ora file but (unknown to you), AQT is using another.
To clear up this uncertainty, it is recommended that the TNS_ADMIN environment variable is set to refer to directory where tnsnames.ora is located. All Oracle products and AQT will then use this tnsnames.ora file.
To view environment variables, open a Windows command window and enter SET. To permanently set an environment variable, go to the Windows Control Panel > System. Click on the Advanced tab and then the Environment Variables button (this is for Windows XP - other Windows versions may have these in a different location).
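For reference, a typical tnsnames.ora entry and the matching tnsping test look like this (a minimal sketch; the host, port and service name are placeholders for your own environment):
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.example.com)
    )
  )
C:\> tnsping ORCL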
sqlnet.log
If you fail to connect, the Oracle client will generally write diagnostic information to sqlnet.log. Note that this does not include information on which tnsnames.ora file is being used, which is often the cause of many connection problems.
In earlier versions of Windows, sqlnet.log was written in the same directory as the AQT executable (e.g. C:\Program Files\Advanced Query Tool v9). However, for more recent versions of Windows (Windows Vista, Windows 7 and Windows Server), access to the Program Files directories is restricted. As a result the file can often be created in a Virtual Store directory. You may wish to look for sqlnet.log in either:
· C:\Users\<username>\AppData\Local\VirtualStore\Program Files\Advanced Query Tool v9
· C:\Users\<username>\AppData\Local\VirtualStore\Windows\System32
Running AQT on a 64-bit version of Windows
If you are running AQT on a 64-bit version of Windows, you may fail to connect with message:
TNS could not resolve the connect identifier
This can happen due to a bug in the Oracle client in the 64-bit environment. This is described below.
By default, AQT will be installed into C:\Program Files (x86)\Advanced Query Tool v9. The Program Files (x86) directory structure is used for 32-bit applications. However there is a bug in the Oracle client - when you run a program which has a bracket in the path, the Oracle client will fail to parse tnsnames.ora correctly, resulting in the above message.
The resolution to this problem is to install AQT into a directory that doesn't have a bracket in the name.
Note that this problem has been fixed in recent versions of the Oracle Client.
From <http://www.querytool.com/help/1205.htm>
13. Why are bind variables important? Can you force literals to be converted into bind variables? YES
These simple examples clearly show how replacing literals with bind variables can save both memory and CPU, making OLTP applications faster and more scalable. If you are using third-party applications that don't use bind variables you may want to consider setting the CURSOR_SHARING parameter, but this should not be considered a replacement for bind variables. The CURSOR_SHARING parameter is less efficient and can potentially reduce performance compared to proper use of bind variables.
From <https://oracle-base.com/articles/misc/literals-substitution-variables-and-bind-variables>
Oracle Bind Variable Tips
Oracle Tips by Michael R. Ault
The perils of Non-Use of Bind Variables in Oracle
The biggest problem in many applications is the non-use of bind variables. Oracle bind variables are a super important way to make Oracle SQL reentrant.
Why is the use of bind variables such an issue?
Oracle uses a signature generation algorithm to assign a hash value to each SQL statement based on the characters in the SQL statement. Any change in a statement (generally speaking) will result in a new hash and thus Oracle assumes it is a new statement. Each new statement must be verified, parsed and have an execution plan generated and stored, all high overhead procedures.
The high overhead procedures might be avoided by using bind variables. See these notes on Oracle cursor_sharing for details.
Ad-hoc query generators (Crystal Reports, Discoverer, Business Objects) do not use bind variables, a major reason for Oracle developing the cursor_sharing parameter to force SQL to use bind variables (when cursor_sharing=force).
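As a quick illustration of what using a bind variable looks like from SQL*Plus (a sketch; the emp table from the demo SCOTT schema is assumed):
SQL> VARIABLE v_deptno NUMBER
SQL> EXEC :v_deptno := 10
SQL> SELECT ename FROM emp WHERE deptno = :v_deptno;
SQL> EXEC :v_deptno := 20
SQL> SELECT ename FROM emp WHERE deptno = :v_deptno;
Both executions send exactly the same SQL text to the server, so the statement hashes to the same value, is parsed once, and is reused from the library cache.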
Bind variables and shared pool usage
Use of bind variables can have a huge impact on the stress in the shared pool and it is important to know about locating similar SQL in Oracle. This script shows how to check your shared pool for SQL that is using bind variables. Below is the header of an example report from a database that is utilizing bind variables, where the SQL is fully reentrant (only the report header is reproduced here):
Time: 03:15 PM    Bind Variable Utilization    PERFSTAT    dbaville database
When SQL is placed within PL/SQL, the embedded SQL never changes and a single library cache entry will be maintained and searched, greatly improving the library cache hit ratio and reducing parsing overhead.
Here are some particularly noteworthy advantages of placing SQL within Oracle stored procedures and packages:
· High productivity: PL/SQL is a language common to all Oracle environments. Developer productivity is increased when applications are designed to use PL/SQL procedures and packages because it avoids the need to rewrite code. Also, the migration complexity to different programming environments and front-end tools will be greatly reduced because Oracle process logic code is maintained inside the database with the data, where it belongs. The application code becomes a simple “shell” consisting of calls to stored procedures and functions.
· Improved Security: Making use of the “grant execute” construct, it is possible to restrict access to Oracle, enabling the user to run only the commands that are inside the procedures. For example, it allows an end user to access one procedure that has a command delete in one particular table instead of granting the delete privilege directly to the end user. The security of the database is further improved since you can define which variables, procedures and cursors will be public and which will be private, thereby completely limiting access to those objects inside the PL/SQL package. With the “grant” security model, back doors like SQL*Plus can lead to problems; with “grant execute” you force the end-user to play by your rules.
· Application portability: Every application written in PL/SQL can be transferred to any other environment that has the Oracle Database installed, regardless of the platform. By contrast, systems written without any embedded PL/SQL or SQL are “database agnostic” and can be moved to other database platforms without changing a single line of code.
· Code Encapsulation: Placing all related stored procedures and functions into packages allows for the encapsulation of storage procedures, variables and datatypes in one single program unit in the database, making packages perfect for code organization in your applications.
· Global variables and cursors: Packages can have global variables and cursors that are available to all the procedures and functions inside the package.
From <http://www.dba-oracle.com/t_bind_variables.htm>
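As a small sketch of the encapsulation and “grant execute” points above (the emp table and the application user app_user are assumptions for illustration only):
CREATE OR REPLACE PACKAGE emp_api AS
  PROCEDURE remove_emp (p_empno IN NUMBER);
END emp_api;
/
CREATE OR REPLACE PACKAGE BODY emp_api AS
  PROCEDURE remove_emp (p_empno IN NUMBER) IS
  BEGIN
    DELETE FROM emp WHERE empno = p_empno;  -- the only delete the caller can perform
  END remove_emp;
END emp_api;
/
GRANT EXECUTE ON emp_api TO app_user;
app_user can now delete rows only through emp_api.remove_emp, without ever holding the DELETE privilege on the table itself.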
Writing Efficient PL/SQL
Oracle Tips by Burleson Consulting
The following Tip is from the outstanding book "Oracle PL/SQL Tuning: Expert Secrets for High Performance Programming" by Dr. Tim Hall, Oracle ACE of the year, 2006:
In this chapter we will cover a large range of techniques and concepts for improving the efficiency, memory consumption and speed of PL/SQL code. Where possible these techniques are accompanied by small working examples that will help you to understand the concepts and how they can be applied to your application code to boost performance. The first area we will focus on is the use of bind variables.
Using Bind Variables
For every statement issued against the server, Oracle searches the shared pool to see if the statement has already been parsed. If an exact text match of the statement is already present in the shared pool a soft parse is performed as the execution plan for the statement has already been created and can be reused. If the statement is not found in the shared pool a hard parse must be performed to determine the optimal execution path.
The important thing to remember from the previous paragraph is the term “exact text match”, as different numbers of spaces, literal values and case will result in a failure to find a text match, such that the following statements are considered different.
SELECT 1 FROM dual WHERE dummy = 'X';
SELECT 1 FROM dual WHERE dummy = 'Y';
SELECT 1 FROM DUAL WHERE dummy = 'X';
SELECT 1 FROM dual WHERE dummy='X';
The first two statements only differ by the value of the search criteria, specified using a literal. In these situations exact text matches can be achieved by replacing the literal values with bind variables that have the correct values bound to them. Using the previous example the statement passed to the server might look like this.
SELECT 1 FROM dual WHERE dummy = :B1;
For every execution the bind variable may have a different value, but the text sent to the server is the same, allowing for an exact text match, which results in a soft parse.
There are two main problems associated with applications that do not use bind variables:
· Parsing SQL statements is a CPU intensive process, so reparsing similar statements constantly represents a waste of CPU cycles.
· Parsed statements are stored in the shared pool until they are aged out. By not using bind variables the shared pool can rapidly become filled with similar statements, which waste memory and make the instance less efficient.
The bind_variable_usage.sql script illustrates the problems associated with not using bind variables by using dynamic SQL to simulate an application sending insert statements to the server.
bind_variable_usage.sql
CREATE TABLE bind_variables (
code VARCHAR2(10)
);
BEGIN
-- Perform insert without bind variables.
FOR i IN 1 .. 10 LOOP
BEGIN
EXECUTE IMMEDIATE
'INSERT INTO bind_variables (code) VALUES (''' || i || ''')';
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
END LOOP;
-- Perform insert with bind variables.
FOR i IN 1 .. 10 LOOP
BEGIN
EXECUTE IMMEDIATE
'INSERT INTO bind_variables (code) VALUES (:B1)' USING TO_CHAR(i);
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
END LOOP;
COMMIT;
END;
/
-- Display the associated SQL text.
COLUMN sql_text FORMAT A60
COLUMN executions FORMAT 9999
SELECT sql_text,
executions
FROM v$sql
WHERE INSTR(sql_text, 'INSERT INTO bind_variables') > 0
AND INSTR(sql_text, 'EXECUTE') = 0
ORDER BY sql_text;
DROP TABLE bind_variables;
The script starts by creating a test table and executing a simple insert statement 10 times, where the insert statement concatenates a value into the string rather than using a bind variable. Next it repeats this process, but this time uses a bind variable rather than concatenating the value into the string. Finally it displays the SQL text parsed by the server and stored in the shared pool, which requires query access on the v$sql view. The results from the script are displayed below.
SQL> @bind_variable_usage.sql
Table created.
PL/SQL procedure successfully completed.
SQL_TEXT EXECUTIONS
--------------------------------------------------------- ----------
INSERT INTO bind_variables (code) VALUES ('1') 1
INSERT INTO bind_variables (code) VALUES ('10') 1
INSERT INTO bind_variables (code) VALUES ('2') 1
INSERT INTO bind_variables (code) VALUES ('3') 1
INSERT INTO bind_variables (code) VALUES ('4') 1
INSERT INTO bind_variables (code) VALUES ('5') 1
INSERT INTO bind_variables (code) VALUES ('6') 1
INSERT INTO bind_variables (code) VALUES ('7') 1
INSERT INTO bind_variables (code) VALUES ('8') 1
INSERT INTO bind_variables (code) VALUES ('9') 1
INSERT INTO bind_variables (code) VALUES (:B1) 10
11 rows selected.
Table dropped.
From this we can see that when bind variables were not used the server parsed and executed each query as a unique statement, whereas the bind variable statement was parsed once and executed 10 times. This clearly demonstrates how applications that do not use bind variables can result in wasted memory in the shared pool, along with increased CPU usage.
The cursor_sharing parameter
In some situations you are not in control of the application development process and may be forced to accept applications that do not use bind variables running against the database. In these situations you can still take advantage of bind variables by using the cursor_sharing parameter at instance or session level.
ALTER SYSTEM SET CURSOR_SHARING=FORCE;
ALTER SESSION SET CURSOR_SHARING=FORCE;
The parameter can be set to one of three values:
· EXACT – The default setting where only statements with an exact text match share the same cursor.
· SIMILAR – Statements that match except for some literal values share the same cursor, unless the literal values affect the meaning of the statement or the level of optimization.
· FORCE - Statements that match except for some literal values share the same cursor, unless the literal values affect the meaning of the statement.
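To check the current setting before and after changing it (a quick sketch):
SQL> SHOW PARAMETER cursor_sharing
SQL> SELECT value FROM v$parameter WHERE name = 'cursor_sharing';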
If we flush the shared pool and repeat the previous test with cursor sharing set to force we see a different result.
SQL> conn sys/password as sysdba
Connected.
SQL> alter system set cursor_sharing=force;
System altered.
SQL> alter system flush shared_pool;
System altered.
SQL> conn test/test
Connected.
SQL> @bind_variable_usage.sql
Table created.
PL/SQL procedure successfully completed.
SQL_TEXT EXECUTIONS
------------------------------------------------------------ ----------
INSERT INTO bind_variables (code) VALUES (:"SYS_B_0") 10
INSERT INTO bind_variables (code) VALUES (:B1) 10
2 rows selected.
Table dropped.
Here we can see that the ten insert statements using literals have been converted to a single insert statement using a bind variable called ”SYS_B_0” which has executed ten times. The statement that already used bind variables was unaltered and also executed ten times.
The cursor_sharing feature should be considered a last resort, as the process of rewriting the queries requires extra resources. It's far better to do the job properly in the first place rather than rely on this feature.
In the next section we will see how we can gain the advantages of using bind variables within dynamic SQL.
From <http://www.dba-oracle.com/plsql/t_plsql_efficient.htm>
14. What is adaptive cursor sharing?
Adaptive cursor sharing (ACS) is another feature we've blogged about before, which allows the optimizer to generate a set of plans that are optimal for different sets of bind values. A common question is how the two features interact, and whether users should consider changing the value of cursor_sharing when upgrading to 11g to take advantage of ACS. The simplest way to think about the interaction between the two features for a given query is to first consider whether literal replacement will take place for that query. Consider a query containing a literal:
select * from employees where job = 'Clerk'
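Once a statement has run, you can check whether ACS has marked its cursors as bind-sensitive or bind-aware by querying v$sql (a minimal sketch; employees here is the same hypothetical table as in the example above):
SQL> SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
     FROM v$sql
     WHERE sql_text LIKE 'select * from employees%';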
15. In Data Pump, if you restart a job, how will it know where to resume from?
I can say you are missing the actual point here. My comments:
If an impdp job failed and was terminated - suppose the process had already imported 100 rows when it was terminated - then your question is whether, if you start the job again, it should resume the import after those 100 rows, i.e. from row 101. Of course this is not possible; you have to use the TABLE_EXISTS_ACTION option with REPLACE, APPEND or TRUNCATE.
Again, pause & continue_client is different. For example, if you proactively find some problem, either in the alert log file (e.g. a temp file issue) or in the import log file, you can pause with Ctrl+C and, after taking the proper action, use continue_client, so that by using the master table the import can continue from that point.
Did you read what I mentioned here? Maybe it is an understanding problem with my English.
I said that if the job is paused manually, then if you resume it, it can continue from that point in time after you give continue_client.
If the job completely failed, I said it will start from scratch.
Maybe what I should have realized is that you think you can pause a job by typing ctl-c. This does not pause the job. All it does is pause the client. The Data Pump code that is doing the work is still happily plugging along. It is still exporting if you are running expdp, and still importing if you ran impdp.
If you want to verify this, export a single table that has data and a couple of indexes. Then run an import job, remap the schema to a schema that has nothing in it, and type ctl-c after you see the table created. Make sure that you have indexes on the table. Let the job sit like this forever after typing ctl-c. In another window, run sqlplus and query the table. You will see rows in it. This is because the Data Pump processes are still running. Don't touch the other window and soon enough you will see that the indexes are created.
If you want to do this with export, run a job and specify a log file. Type ctl-c after the estimate phase is complete. You will see nothing happening on the screen. In another window, tail -f the log file. You will see the log file is being written to. You will also see the dump file getting bigger.
Did you read what I mentioned here? Maybe it is an understanding problem with my English.
I said that if the job is paused manually, then if you resume it, it can continue that job from that point in time after you give continue_client.
If the job completely failed, I said it will start from scratch.
This is not true. Again, you can't pause a job. If you are running export and someone shuts down your database or computer, then all of the Data Pump processes are gone and your dump file is half written. If you attach to that old job and issue continue_client, the job will continue where it left off. If you were running import when this happened, and it was importing your payroll table data and had imported everything but one row, then when the system and database are back up, the payroll table will be empty. If you attach to the job and issue continue_client, all of the data will be loaded at that time.
It's a background job. Once you have scheduled it, either by crontab or nohup, AFAIK you can't pause an impdp job. You would have no control over it.
ONCE AGAIN... THIS IS WRONG INFORMATION!!!!
Again - you can never pause a job. You can either stop it by
ctl-c
export> stop
or kill a job
ctl-c
export> kill
If you started the job using some script then:
expdp user/password attach=your_job_name_here
export> stop or kill
I know how to continue that job when I ran it in the foreground. What happens when I run it by crontab or by nohup?
Get the job name, either by knowing what the script will do or by querying user_datapump_jobs or dba_datapump_jobs, and then:
expdp user/password attach=your_job_name_here
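For example, a quick way to list existing Data Pump jobs and their state from SQL*Plus (a minimal sketch; dba_datapump_jobs and its columns are standard, the output depends on your system):
SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM dba_datapump_jobs
     ORDER BY owner_name, job_name;
The JOB_NAME value returned here is what you pass to the attach parameter.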
Can you please justify how it is wrong information? I know that a job can be paused and that we have full control when we run it from our own session (foreground).
Your job can't be paused, and your job can be restarted even if you didn't use the client to start the job. That is why it is wrong. You have full control over a Data Pump job no matter how or where it was started.
I'm saying that when the job is scheduled you do not have control. If you have any other way please do mention it. Please note, this is when it runs in the background.
Again, if the data pump job is running, you have full control over it. You can
have 20 different sessions attached to the job and all 20 dba can control it.
You could change the parallel to be 20 while another dba connected to the job
could add data files, while a 3rd dba attached to the job could bump the
parallel value to 50.
Your understanding of what ctl-c does is what is confusing you and what makes your statements wrong. Like I said above, it does not pause the job. It just disconnects the client from the server processes. The server processes are running and exporting/importing just as they would be if there were a client attached. Typing continue will reattach it. So that is why what you said is wrong.
If you want more tests to run, run your favorite expdp command and type ctl-c after the estimate is complete. Then at the Export> prompt, type exit. Your job will continue. If you specified a log file, it will be updated and you can tail -f it.
Hope this clears it up for you.
Dean
Edited by: Dean Gagne on Jan 27, 2012 5:41 PM
EXAMPLES of SHELL SCRIPTING
#!/bin/bash
echo "This script does export the table COUNTRIES in C##DBA_TEST's schema"
echo `date`
sqlplus / as sysdba <<EOF
spool on
spool /u01/oracle/app/oracle/scripts/DBA_TEST.log
grant read,write on directory DBA_DATAPUMP_DIRECTORY to C##DBA_TEST;
alter user C##DBA_TEST identified by amag account unlock;
select username,table_name from dba_users,dba_tables where owner='C##DBA_TEST';
spool off
EXIT;
EOF
echo "The above are tables owned by C##DBA_TEST user.Don't forget to check logfile at /u01/oracle/app/oracle/scripts/DBA_TEST.log.We'll proceed to export the COUNTRIES table"
expdp C##DBA_TEST/amag tables=COUNTRIES directory=DATA_PUMP_DIRECTORY dumpfile=COUNTRIES.dmp logfile=COUNTRIES.log
echo "The logical backup of Table Export for C##DBA_TEST is successfully completed. Now, we shall go ahead and do a physical backup of the full database."
rman target=/ <<EOF
spool log to '/u01/oracle/app/oracle/scripts/rmanbackup.log';
list backup summary;
DELETE NOPROMPT BACKUPSET COMPLETED BEFORE 'sysdate-1';
CROSSCHECK BACKUP;
DELETE NOPROMPT EXPIRED BACKUP;
BACKUP DEVICE TYPE DISK FORMAT '/u01/oracle/app/oracle/backup/db_%d_%I_%s_%p.bkup' TAG 'CDB_FULL_DAILY_BACKUP' DATABASE;
BACKUP DEVICE TYPE DISK FORMAT '/u01/oracle/app/oracle/backup/log_%d_%I_%s_%p.bkup' TAG 'CDB_FULL_ARCHIVELOG' ARCHIVELOG ALL NOT BACKED UP DELETE ALL INPUT;
BACKUP DEVICE TYPE DISK FORMAT '/u01/oracle/app/oracle/backup/cf_%d_%U.bkup' TAG 'CDB_FULL_DAILY' CURRENT CONTROLFILE;
spool log off;
EXIT;
EOF
echo "Full Daily RMAN Physical backup completed successfully at `date`. Check logfile at /u01/oracle/app/oracle/scripts/rmanbackup.log"
:wq!
[oracle@localhost scripts]$ vi /u01/oracle/app/oracle/scripts/datapump.sh
[oracle@localhost scripts]$ chmod 775 /u01/oracle/app/oracle/scripts/datapump.sh
[oracle@localhost scripts]$ ll /u01/oracle/app/oracle/scripts/datapump.sh
-rwxrwxr-x. 1 oracle oracle 1462 Oct 12 08:46 /u01/oracle/app/oracle/scripts/datapump.sh
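Two points about the script above are worth noting before looking at the output. First, the dictionary query joins dba_users and dba_tables without a join predicate, so it returns every user paired with every table named COUNTRIES (a Cartesian product) rather than only the tables owned by C##DBA_TEST; a corrected sketch would be:
SQL> SELECT u.username, t.table_name
     FROM dba_users u
     JOIN dba_tables t ON t.owner = u.username
     WHERE u.username = 'C##DBA_TEST';
Second, expdp fails with ORA-27038 when the dump file already exists from a previous run; one way to handle that on reruns (an adjustment, not part of the original script) is to add reuse_dumpfiles=y to the expdp command or to build a date stamp into the dumpfile name.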
SCRIPT OUTPUT
[oracle@localhost scripts]$ /u01/oracle/app/oracle/scripts/datapump.sh
This script does export the table COUNTRIES in C##DBA_TEST's schema
Thu Oct 12 09:16:14 PDT 2017
SQL*Plus: Release 12.1.0.2.0 Production on Thu Oct 12 09:16:15 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> SQL> SQL> grant read,write on directory DBA_DATAPUMP_DIRECTORY to C##DBA_TEST
*
ERROR at line 1:
ORA-22930: directory does not exist
SQL> alter user C##DBA_TEST identified by amag account unlock
*
ERROR at line 1:
ORA-28007: the password cannot be reused
SQL>
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
C##DBA_TEST
COUNTRIES
C##DUKEETOR_APP
COUNTRIES
C##DUKEETOR
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
C##TINZIM
COUNTRIES
SYSTEM
COUNTRIES
SYS
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
APEX_PUBLIC_USER
COUNTRIES
ANONYMOUS
COUNTRIES
DVF
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
APEX_040200
COUNTRIES
FLOWS_FILES
COUNTRIES
LBACSYS
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
SPATIAL_CSW_ADMIN_USR
COUNTRIES
SPATIAL_WFS_ADMIN_USR
COUNTRIES
MDDATA
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
OLAPSYS
COUNTRIES
DVSYS
COUNTRIES
SI_INFORMTN_SCHEMA
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
ORDPLUGINS
COUNTRIES
ORDDATA
COUNTRIES
ORDSYS
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
CTXSYS
COUNTRIES
OJVMSYS
COUNTRIES
WMSYS
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
GSMCATUSER
COUNTRIES
MDSYS
COUNTRIES
XDB
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
APPQOSSYS
COUNTRIES
DBSNMP
COUNTRIES
ORACLE_OCM
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
DIP
COUNTRIES
GSMUSER
COUNTRIES
GSMADMIN_INTERNAL
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
XS$NULL
COUNTRIES
OUTLN
COUNTRIES
SYSKM
COUNTRIES
USERNAME
--------------------------------------------------------------------------------
TABLE_NAME
--------------------------------------------------------------------------------
SYSDG
COUNTRIES
SYSBACKUP
COUNTRIES
AUDSYS
COUNTRIES
39 rows selected.
SQL> SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
The above are tables owned by C##DBA_TEST user.Don't forget to check logfile at /u01/oracle/app/oracle/scripts/DBA_TEST.log.We'll proceed to export the COUNTRIES table
Export: Release 12.1.0.2.0 - Production on Thu Oct 12 09:16:15 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/u01/oracle/app/oracle/scripts/COUNTRIES.dmp"
ORA-27038: created file already exists
Additional information: 1
The logical backup of Table Export for C##DBA_TEST is successfully completed. Now, we shall go ahead and do a physical backup of the full database.
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Oct 12 09:16:17 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: CDB1 (DBID=828012650)
RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN>
Spooling for log turned off
Recovery Manager12.1.0.2.0
RMAN>
Recovery Manager complete.
Full Daily RMAN Physical backup completed successfully at Thu Oct 12 09:16:19 PDT 2017. Check logfile at /u01/oracle/app/oracle/scripts/rmanbackup.log
[oracle@localhost scripts]$
==================================================================================
From <https://community.oracle.com/thread/2340182>
EXAMPLES of DATAPUMP =>expdp restart doubt
SQL> select table_name,username from dba_tables,dba_users where owner='C##DUKEETOR';
===
FIX:
===
SQL> create directory DATA_PUMP_DIRECTORY as '/u01/oracle/app/oracle/scripts';
SQL> grant read, write on directory DATA_PUMP_DIRECTORY to C##DUKEETOR;
Grant succeeded.
[oracle@localhost scripts]$ expdp C##DUKEETOR/amag tables=AMAZION directory=DATA_PUMP_DIRECTORY dumpfile=AMAZION.dmp logfile=AMAZION.log
Export: Release 12.1.0.2.0 - Production on Thu Oct 12 07:52:36 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.
Starting "C##DUKEETOR"."SYS_EXPORT_TABLE_01": C##DUKEETOR/******** tables=AMAZION directory=DATA_PUMP_DIRECTORY dumpfile=AMAZION.dmp logfile=AMAZION.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
From <https://community.oracle.com/thread/2340182>
Please guide me on how to attach to a job when it is running in the background.
I know well how to pause (not cancel) and re-attach a job. But if you run it in the background, as I already mentioned, either via a shell script or nohup, then once the job is running there is no control with you.
This is documented. Let's say your initial command was:
expdp system/manager job_name=full_1_27_2012 directory=dpump_dir dumpfile=full_1_27_2012.dmp full=y
Then you can simply do this:
expdp system/manager attach=full_1_27_2012   => resume from where JOB=full_1_27_2012 failed (e.g. after the server got rebooted, etc.)
This will bring you to the Export> prompt; typing help there lists the interactive commands (start_job, stop_job, kill_job, continue_client, status, and so on). If the job is still running, you can then say
Export> stop_job
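For reference, a sketch of the interactive-prompt workflow (the job name is the hypothetical one from above):
$ expdp system/manager attach=full_1_27_2012
Export> status             => shows what the job is currently doing
Export> stop_job=immediate => stops the job but keeps the master table so it can be restarted later
$ expdp system/manager attach=full_1_27_2012
Export> start_job          => restarts the stopped job
Export> continue_client    => re-attaches the client output to the running job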
IMPDP/EXPDP:
* As SYS (connect / as sysdba, not as the SYSTEM user): create directory BACKUP_DIR as '/u01/Test' (a path accessible to the oracle OS user), then grant read, write on directory BACKUP_DIR to scott (or hr).
* From the OS: expdp scott/pw directory=BACKUP_DIR dumpfile=SCOTT_EXP.dmp logfile=SCOTT_EXP.log
Login: root
Permanently added '10.236.28.242' (RSA) to the list of known hosts.
root@10.236.28.242's password:
Last login: Mon Sep 7 00:32:15 2015 from d2lseutsh036ag.dc2lab.local
-bash-3.2# sudo su - oracle
oracle@D2LSENPSH242[ORCLDR]# ll /u01/app/oracle/scripts
total 124
-rw-r--r-- 1 oracle oinstall 458 Oct 28 2013 sh_invalid_objects.sql
-rw-r--r-- 1 oracle oinstall 4996 Apr 22 2014 sh_tsdf.sql
-rw-r--r-- 1 oracle oinstall 452 Jul 29 2014 sh_fra.sql
-rw-r--r-- 1 oracle oinstall 175 Jul 31 2014 rman_delete_logs.txt
-rw-r--r-- 1 oracle oinstall 53 Jul 31 2014 sh_asmdisks.sql
-rw-r--r-- 1 oracle oinstall 53 Jul 31 2014 sh_asm_usage.sql
-rw-r--r-- 1 oracle oinstall 446 Jul 31 2014 sh_asm_files.sql
-rw-r--r-- 1 oracle oinstall 537 Oct 15 2014 sh_users.sql
-rw-r--r-- 1 oracle oinstall 137 Oct 15 2014 users_ORCL.txt
-rw-r--r-- 1 oracle oinstall 293 Oct 15 2014 sh_asmdisk_size.sql
-rw-r--r-- 1 oracle oinstall 465 Jan 27 2015 sh_restpnts.sql
-rw-r--r-- 1 oracle oinstall 538 Jan 27 2015 sh_reghist.sql
-rwxrwxrwx 1 oracle oinstall 1012 Feb 10 2015 delete_applied_logs_ORCLDR.sh
-rwxrwxrwx 1 oracle oinstall 17909 Feb 10 2015 rm_applied_logs.sh
-rwxrwxrwx 1 oracle oinstall 18000 Feb 10 2015 delete_applied_logs.sh
-rw-r--r-- 1 oracle oinstall 1950 Feb 10 2015 delete_applied_logs.log
-rw-r--r-- 1 oracle oinstall 630 Feb 13 2015 alogs2.sql
-rw-r--r-- 1 oracle oinstall 2726 Apr 23 16:24 tsdf_ORCL.txt
-rw-r--r-- 1 oracle oinstall 681 Apr 27 13:59 alogs.sql
-rw-r--r-- 1 oracle oinstall 713 May 4 15:57 alogs165.sql
-rw-r--r-- 1 oracle oinstall 713 May 4 15:58 alogs166.sql
-rw-r--r-- 1 oracle oinstall 395 Aug 6 12:53 asm_files.txt
oracle@D2LSENPSH242[ORCLDR]# cat sh_reghist.sql
cat: sh_reghist.sql: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# cd /u01/app/oracle/scripts
oracle@D2LSENPSH242[ORCLDR]# cat sh_reghist.sql
REM ************************************************************************************************
REM sh_reghist.sql
REM list contents of registry$history
REM
REM ************************************************************************************************
SET echo off heading on
set pages 9999 lines 140
column action_time format a30
column action format a15
column namespace format a12
column version format a12
column comments format a30
column bundle_series format a14
select * from registry$history;
spool off
SET echo on
oracle@D2LSENPSH242[ORCLDR]# rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Fri Sep 18 15:57:28 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)
RMAN> crosscheck archivelog all;
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of crosscheck command at 09/18/2015 15:57:48
RMAN-12010: automatic channel allocation initialization failed
RMAN-06403: could not obtain a fully authorized session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
RMAN> show all;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 09/18/2015 15:58:12
RMAN-06403: could not obtain a fully authorized session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
RMAN> exit
Recovery Manager complete.
oracle@D2LSENPSH242[ORCLDR]# ls /u01/
app
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app
11.2.0.3 grid oracle oraInventory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle
acfs acfsmounts admin backup cfgtoollogs checkpoints Clusterware D2LSENPSH242 diag media patches product scripts staging
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/backup
incr incr2
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin
+ASM LABDBDR orcl ORCLDR
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/ORCLDR
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/orcl
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/LABDBDR
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/+ASM
pfile
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/11.2.0.3
ls: /u01/app/oracle/11.2.0.3: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/11.2.0.3
grid
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory
backup ContentsXML install.platform logs oraInstaller.properties oui
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory/backup
2013-03-06_05-13-17PM 2013-03-06_12-03-21PM 2013-08-07_08-50-55PM 2013-08-14_06-34-59PM 2013-08-14_09-03-01PM 2013-08-14_09-03-30PM
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory/logs
installActions2013-03-06_04-58-02PM.log installActions2013-08-20_08-12-40PM.log OPatch2015-04-22_04-16-33-PM.log oraInstall2013-03-06_12-03-21PM.out
installActions2013-03-06_05-13-17PM.log installActions2013-08-20_08-15-30PM.log OPatch2015-07-23_05-08-13-PM.log oraInstall2013-08-07_08-50-55PM.err
installActions2013-03-06_11-33-57AM.log installActions2013-08-20_08-34-18PM.log OPatch2015-07-23_05-10-14-PM.log oraInstall2013-08-07_08-50-55PM.out
installActions2013-03-06_11-36-25AM.log installActions2013-08-20_08-41-09PM.log OPatch2015-07-23_05-13-35-PM.log oraInstall2013-08-14_06-34-59PM.err
installActions2013-03-06_11-42-00AM.log installActions2013-08-21_06-37-08PM.log OPatch2015-07-23_05-16-03-PM.log oraInstall2013-08-14_06-34-59PM.out
installActions2013-03-06_11-46-37AM.log installActions2013-08-21_06-37-22PM.log OPatch2015-07-23_05-26-44-PM.log oraInstall2013-08-14_09-03-01PM.err
installActions2013-03-06_11-48-33AM.log installActions2013-09-04_02-13-30PM.log OPatch2015-07-31_04-43-21-PM.log oraInstall2013-08-14_09-03-01PM.out
installActions2013-03-06_11-54-44AM.log OPatch2013-08-16_08-46-21-PM.log OPatch2015-08-06_03-42-12-PM.log oraInstall2013-08-14_09-03-30PM.err
installActions2013-08-07_08-50-55PM.log OPatch2013-08-16_08-59-58-PM.log OPatch2015-08-06_05-32-39-PM.log oraInstall2013-08-14_09-03-30PM.out
installActions2013-08-14_06-34-59PM.log OPatch2013-08-16_09-05-30-PM.log oraInstall2013-03-06_05-13-17PM.err oraInstall2013-08-20_08-41-09PM.err
installActions2013-08-20_06-52-37PM.log OPatch2013-10-29_08-26-14-PM.log oraInstall2013-03-06_05-13-17PM.out oraInstall2013-08-20_08-41-09PM.out
installActions2013-08-20_06-53-27PM.log OPatch2014-01-25_06-00-24-PM.log oraInstall2013-03-06_11-48-33AM.err oraInstall2013-09-04_02-13-30PM.err
installActions2013-08-20_07-19-41PM.log OPatch2014-07-30_09-39-40-PM.log oraInstall2013-03-06_11-48-33AM.out oraInstall2013-09-04_02-13-30PM.out
installActions2013-08-20_08-11-18PM.log OPatch2014-10-20_05-19-54-PM.log oraInstall2013-03-06_11-54-44AM.err UpdateNodeList2013-03-06_12-03-21PM.log
installActions2013-08-20_08-11-49PM.log OPatch2014-10-20_05-22-05-PM.log oraInstall2013-03-06_11-54-44AM.out UpdateNodeList2013-08-14_09-03-01PM.log
installActions2013-08-20_08-12-11PM.log OPatch2015-01-26_09-24-58-PM.log oraInstall2013-03-06_12-03-21PM.err UpdateNodeList2013-08-14_09-03-30PM.log
oracle@D2LSENPSH242[ORCLDR]# ls /
bin boot dev edsinfo.txt etc home lib lib64 lost+found media misc mnt opt proc root RPM sbin selinux srv sys tftpboot tmp u01 usr var
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app
11.2.0.3 grid oracle oraInventory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle
acfs acfsmounts admin backup cfgtoollogs checkpoints Clusterware D2LSENPSH242 diag media patches product scripts staging
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/cfgtoologs
ls: /u01/app/oracle/cfgtoologs: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/cfgtoollogs
asmca dbca emca netca postinstall
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/checkpoints
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/diag
asm clients crs diagtool lsnrctl netcman ofm rdbms tnslsnr
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/media
database grid OMS
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/media/database
doc install response rpm runInstaller sshsetup stage welcome.html
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/patches
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/staging
11.2.0.3
oracle@D2LSENPSH242[ORCLDR]#
FYI:
3123 rows selected.
=======================================
VIEWS BROKEN DOWN (alphabetically)
=====================================
A
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$A%';
TABLE_NAME
---------------------------------------------------------------------------
V$ACCESS
V$ACTIVE_INSTANCES
V$ACTIVE_SERVICES
V$ACTIVE_SESSION_HISTORY
V$ACTIVE_SESS_POOL_MTH
V$ADVISOR_CURRENT_SQLPLAN
V$ADVISOR_PROGRESS
V$ALERT_TYPES
V$AQ
V$AQ1
V$AQ_BACKGROUND_COORDINATOR
V$AQ_BMAP_NONDUR_SUBSCRIBERS
V$AQ_CROSS_INSTANCE_JOBS
V$AQ_JOB_COORDINATOR
V$AQ_MESSAGE_CACHE
V$AQ_MSGBM
V$AQ_NONDUR_REGISTRATIONS
V$AQ_NONDUR_SUBSCRIBER
V$AQ_NONDUR_SUBSCRIBER_LWM
V$AQ_NOTIFICATION_CLIENTS
V$AQ_SERVER_POOL
V$AQ_SUBSCRIBER_LOAD
V$ARCHIVE
V$ARCHIVED_LOG
V$ARCHIVE_DEST
V$ARCHIVE_DEST_STATUS
V$ARCHIVE_GAP
V$ARCHIVE_PROCESSES
V$ASH_INFO
V$ASM_ACFSREPL
V$ASM_ACFSREPLTAG
V$ASM_ACFSSNAPSHOTS
V$ASM_ACFSTAG
V$ASM_ACFSVOLUMES
V$ASM_ACFS_ENCRYPTION_INFO
V$ASM_ACFS_SECURITY_INFO
V$ASM_ACFS_SEC_ADMIN
V$ASM_ACFS_SEC_CMDRULE
V$ASM_ACFS_SEC_REALM
V$ASM_ACFS_SEC_REALM_FILTER
V$ASM_ACFS_SEC_REALM_GROUP
V$ASM_ACFS_SEC_REALM_USER
V$ASM_ACFS_SEC_RULE
V$ASM_ACFS_SEC_RULESET
V$ASM_ACFS_SEC_RULESET_RULE
V$ASM_ALIAS
V$ASM_ATTRIBUTE
V$ASM_AUDIT_CLEANUP_JOBS
V$ASM_AUDIT_CLEAN_EVENTS
V$ASM_AUDIT_CONFIG_PARAMS
V$ASM_AUDIT_LAST_ARCH_TS
V$ASM_CLIENT
V$ASM_DISK
V$ASM_DISKGROUP
V$ASM_DISKGROUP_STAT
V$ASM_DISK_IOSTAT
V$ASM_DISK_STAT
V$ASM_ESTIMATE
V$ASM_FILE
V$ASM_FILESYSTEM
V$ASM_OPERATION
V$ASM_TEMPLATE
V$ASM_USER
V$ASM_USERGROUP
V$ASM_USERGROUP_MEMBER
V$ASM_VOLUME
V$ASM_VOLUME_STAT
V$AW_AGGREGATE_OP
V$AW_ALLOCATE_OP
V$AW_CALC
V$AW_LONGOPS
V$AW_OLAP
V$AW_SESSION_INFO
73 rows selected.
B
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$B%';
TABLE_NAME
---------------------------------------------------------------------------
V$BACKUP
V$BACKUP_ARCHIVELOG_DETAILS
V$BACKUP_ARCHIVELOG_SUMMARY
V$BACKUP_ASYNC_IO
V$BACKUP_CONTROLFILE_DETAILS
V$BACKUP_CONTROLFILE_SUMMARY
V$BACKUP_COPY_DETAILS
V$BACKUP_COPY_SUMMARY
V$BACKUP_CORRUPTION
V$BACKUP_DATAFILE
V$BACKUP_DATAFILE_DETAILS
V$BACKUP_DATAFILE_SUMMARY
V$BACKUP_DEVICE
V$BACKUP_FILES
V$BACKUP_NONLOGGED
V$BACKUP_PIECE
V$BACKUP_PIECE_DETAILS
V$BACKUP_REDOLOG
V$BACKUP_SET
V$BACKUP_SET_DETAILS
V$BACKUP_SET_SUMMARY
V$BACKUP_SPFILE
V$BACKUP_SPFILE_DETAILS
V$BACKUP_SPFILE_SUMMARY
V$BACKUP_SYNC_IO
V$BGPROCESS
V$BH
V$BLOCKING_QUIESCE
V$BLOCK_CHANGE_TRACKING
V$BSP
V$BTS_STAT
V$BT_SCAN_CACHE
V$BT_SCAN_OBJ_TEMPS
V$BUFFERED_PUBLISHERS
V$BUFFERED_QUEUES
V$BUFFERED_SUBSCRIBERS
V$BUFFER_POOL
V$BUFFER_POOL_STATISTICS
38 rows selected.
C
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$C%';
TABLE_NAME
---------------------------------------------------------------------------
V$CACHE
V$CACHE_LOCK
V$CACHE_TRANSFER
V$CALLTAG
V$CELL
V$CELL_CONFIG
V$CELL_OFL_THREAD_HISTORY
V$CELL_REQUEST_TOTALS
V$CELL_STATE
V$CELL_THREAD_HISTORY
V$CHANNEL_WAITS
V$CIRCUIT
V$CLASS_CACHE_TRANSFER
V$CLASS_PING
V$CLIENT_SECRETS
V$CLIENT_STATS
V$CLONEDFILE
V$CLUSTER_INTERCONNECTS
V$CONFIGURED_INTERCONNECTS
V$CONTAINERS
V$CONTEXT
V$CONTROLFILE
V$CONTROLFILE_RECORD_SECTION
V$CON_SYSSTAT
V$CON_SYSTEM_EVENT
V$CON_SYSTEM_WAIT_CLASS
V$CON_SYS_TIME_MODEL
V$COPY_CORRUPTION
V$COPY_NONLOGGED
V$CORRUPT_XID_LIST
V$CPOOL_CC_INFO
V$CPOOL_CC_STATS
V$CPOOL_CONN_INFO
V$CPOOL_STATS
V$CR_BLOCK_SERVER
V$CURRENT_BLOCK_SERVER
36 rows selected.
D
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$D%';
TABLE_NAME
---------------------------------------------------------------------------
V$DLM_MISC
V$DLM_LATCH
V$DLM_CONVERT_LOCAL
V$DLM_CONVERT_REMOTE
V$DLM_ALL_LOCKS
V$DLM_LOCKS
V$DLM_RESS
V$DLM_TRAFFIC_CONTROLLER
V$DYNAMIC_REMASTER_STATS
V$DATAGUARD_STATUS
V$DBFILE
V$DATABASE
V$DISPATCHER
V$DISPATCHER_CONFIG
V$DISPATCHER_RATE
V$DB_PIPES
V$DB_OBJECT_CACHE
V$DBLINK
V$DATABASE_BLOCK_CORRUPTION
V$DELETED_OBJECT
V$DATAFILE_COPY
V$DATAFILE_HEADER
V$DATAFILE
V$DATAGUARD_CONFIG
V$DATAGUARD_STATS
V$DB_CACHE_ADVICE
V$DATABASE_INCARNATION
V$DETACHED_SESSION
V$DB_TRANSPORTABLE_PLATFORM
V$DNFS_STATS
V$DNFS_FILES
V$DNFS_SERVERS
V$DIAG_INFO
V$DNFS_CHANNELS
V$DIAG_CRITICAL_ERROR
V$DATAPUMP_JOB
V$DATAPUMP_SESSION
V$DIAG_ADR_CONTROL
V$DIAG_ADR_INVALIDATION
V$DIAG_INCIDENT
V$DIAG_PROBLEM
V$DIAG_INCCKEY
V$DIAG_INCIDENT_FILE
V$DIAG_SWEEPERR
V$DIAG_PICKLEERR
V$DIAG_VIEW
V$DIAG_VIEWCOL
V$DIAG_HM_RUN
V$DIAG_HM_FINDING
V$DIAG_HM_RECOMMENDATION
V$DIAG_HM_FDG_SET
V$DIAG_HM_INFO
V$DIAG_HM_MESSAGE
V$DIAG_DDE_USER_ACTION_DEF
V$DIAG_DDE_USR_ACT_PARAM_DEF
V$DIAG_DDE_USER_ACTION
V$DIAG_DDE_USR_ACT_PARAM
V$DIAG_DDE_USR_INC_TYPE
V$DIAG_DDE_USR_INC_ACT_MAP
V$DIAG_IPS_PACKAGE
V$DIAG_IPS_PACKAGE_INCIDENT
V$DIAG_IPS_PACKAGE_FILE
V$DIAG_IPS_FILE_METADATA
V$DIAG_IPS_FILE_COPY_LOG
V$DIAG_IPS_PACKAGE_HISTORY
V$DIAG_IPS_PKG_UNPACK_HIST
V$DIAG_IPS_REMOTE_PACKAGE
V$DIAG_IPS_CONFIGURATION
V$DIAG_INC_METER_SUMMARY
V$DIAG_INC_METER_INFO
V$DIAG_INC_METER_CONFIG
V$DIAG_INC_METER_IMPT_DEF
V$DIAG_INC_METER_PK_IMPTS
V$DIAG_DIR_EXT
V$DIAG_ALERT_EXT
V$DIAG_RELMD_EXT
V$DIAG_EM_USER_ACTIVITY
V$DIAG_EM_DIAG_JOB
V$DIAG_EM_TARGET_INFO
V$DIAG_AMS_XACTION
V$DIAG_VSHOWINCB
V$DIAG_VSHOWINCB_I
V$DIAG_V_INCFCOUNT
V$DIAG_V_NFCINC
V$DIAG_VSHOWCATVIEW
V$DIAG_VINCIDENT
V$DIAG_VINC_METER_INFO
V$DIAG_VIPS_FILE_METADATA
V$DIAG_VIPS_PKG_FILE
V$DIAG_VIPS_PACKAGE_FILE
V$DIAG_VIPS_PACKAGE_HISTORY
V$DIAG_VIPS_FILE_COPY_LOG
V$DIAG_VIPS_PACKAGE_SIZE
V$DIAG_VIPS_PKG_INC_DTL1
V$DIAG_VIPS_PKG_INC_DTL
V$DIAG_VINCIDENT_FILE
V$DIAG_V_INCCOUNT
V$DIAG_V_IPSPRBCNT1
V$DIAG_V_IPSPRBCNT
V$DIAG_VPROBLEM_LASTINC
V$DIAG_VPROBLEM_INT
V$DIAG_VEM_USER_ACTLOG
V$DIAG_VEM_USER_ACTLOG1
V$DIAG_VPROBLEM1
V$DIAG_VPROBLEM2
V$DIAG_V_INC_METER_INFO_PROB
V$DIAG_VPROBLEM
V$DIAG_VPROBLEM_BUCKET1
V$DIAG_VPROBLEM_BUCKET
V$DIAG_VPROBLEM_BUCKET_COUNT
V$DIAG_VHM_RUN
V$DIAG_DIAGV_INCIDENT
V$DIAG_VIPS_PACKAGE_MAIN_INT
V$DIAG_VIPS_PKG_MAIN_PROBLEM
V$DIAG_V_ACTINC
V$DIAG_V_ACTPROB
V$DIAG_V_SWPERRCOUNT
V$DIAG_VIPS_PKG_INC_CAND
V$DIAG_VNOT_EXIST_INCIDENT
V$DIAG_VTEST_EXISTS
V$DISK_RESTORE_RANGE
V$DATABASE_KEY_INFO
V$DIAG_IPS_PROGRESS_LOG
V$DIAG_DFW_CONFIG_CAPTURE
V$DIAG_DFW_CONFIG_ITEM
V$DEAD_CLEANUP
V$DG_BROKER_CONFIG
127 rows selected.
E
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$E%';
TABLE_NAME
---------------------------------------------------------------------------
V$EDITIONABLE_TYPES
V$EMON
V$ENABLEDPRIVS
V$ENCRYPTED_TABLESPACES
V$ENCRYPTION_KEYS
V$ENCRYPTION_WALLET
V$ENQUEUE_LOCK
V$ENQUEUE_STAT
V$ENQUEUE_STATISTICS
V$EVENTMETRIC
V$EVENT_HISTOGRAM
V$EVENT_NAME
V$EXECUTION
13 rows selected.
F
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$F%';
TABLE_NAME
---------------------------------------------------------------------------
V$FALSE_PING
V$FAST_START_SERVERS
V$FAST_START_TRANSACTIONS
V$FILEMETRIC
V$FILEMETRIC_HISTORY
V$FILESPACE_USAGE
V$FILESTAT
V$FILE_CACHE_TRANSFER
V$FILE_HISTOGRAM
V$FILE_OPTIMIZED_HISTOGRAM
V$FILE_PING
V$FIXED_TABLE
V$FIXED_VIEW_DEFINITION
V$FLASHBACK_DATABASE_LOG
V$FLASHBACK_DATABASE_LOGFILE
V$FLASHBACK_DATABASE_STAT
V$FLASHBACK_TXN_GRAPH
V$FLASHBACK_TXN_MODS
V$FLASHFILESTAT
V$FLASH_RECOVERY_AREA_USAGE
V$FOREIGN_ARCHIVED_LOG
V$FS_FAILOVER_HISTOGRAM
V$FS_FAILOVER_STATS
23 rows selected.
G
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$G%';
TABLE_NAME
---------------------------------------------------------------------------
V$GCSHVMASTER_INFO
V$GCSPFMASTER_INFO
V$GC_ELEMENT
V$GC_ELEMENTS_WITH_COLLISIONS
V$GES_BLOCKING_ENQUEUE
V$GES_DEADLOCKS
V$GES_DEADLOCK_SESSIONS
V$GES_ENQUEUE
V$GG_APPLY_COORDINATOR
V$GG_APPLY_READER
V$GG_APPLY_RECEIVER
V$GG_APPLY_SERVER
V$GLOBALCONTEXT
V$GLOBAL_BLOCKED_LOCKS
V$GLOBAL_TRANSACTION
V$GOLDENGATE_CAPABILITIES
V$GOLDENGATE_CAPTURE
V$GOLDENGATE_MESSAGE_TRACKING
V$GOLDENGATE_TABLE_STATS
V$GOLDENGATE_TRANSACTION
20 rows selected.
H
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$H%';
TABLE_NAME
---------------------------------------------------------------------------
V$HANG_INFO
V$HANG_SESSION_INFO
V$HANG_STATISTICS
V$HEAT_MAP_SEGMENT
V$HM_CHECK
V$HM_CHECK_PARAM
V$HM_FINDING
V$HM_INFO
V$HM_RECOMMENDATION
V$HM_RUN
V$HS_AGENT
V$HS_PARAMETER
V$HS_SESSION
V$HVMASTER_INFO
14 rows selected.
I
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$I%';
TABLE_NAME
---------------------------------------------------------------------------
V$INCMETER_CONFIG
V$INCMETER_INFO
V$INCMETER_SUMMARY
V$INDEXED_FIXED_COLUMN
V$INSTANCE
V$INSTANCE_CACHE_TRANSFER
V$INSTANCE_LOG_GROUP
V$INSTANCE_PING
V$INSTANCE_RECOVERY
V$IOFUNCMETRIC
V$IOFUNCMETRIC_HISTORY
V$IOSTAT_CONSUMER_GROUP
V$IOSTAT_FILE
V$IOSTAT_FUNCTION
V$IOSTAT_FUNCTION_DETAIL
V$IOSTAT_NETWORK
V$IOS_CLIENT
V$IO_CALIBRATION_STATUS
V$IO_OUTLIER
V$IR_FAILURE
V$IR_FAILURE_SET
V$IR_MANUAL_CHECKLIST
V$IR_REPAIR
23 rows selected.
J
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$J%';
TABLE_NAME
---------------------------------------------------------------------------
V$JAVAPOOL
V$JAVA_LIBRARY_CACHE_MEMORY
V$JAVA_POOL_ADVICE
K
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$K%';
TABLE_NAME
---------------------------------------------------------------------------
V$KERNEL_IO_OUTLIER
L
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$L%';
TABLE_NAME
---------------------------------------------------------------------------
V$LATCH
V$LATCHHOLDER
V$LATCHNAME
V$LATCH_CHILDREN
V$LATCH_MISSES
V$LATCH_PARENT
V$LGWRIO_OUTLIER
V$LIBCACHE_LOCKS
V$LIBRARYCACHE
V$LIBRARY_CACHE_MEMORY
V$LICENSE
V$LISTENER_NETWORK
V$LOADISTAT
V$LOADPSTAT
V$LOBSTAT
V$LOCK
V$LOCKED_OBJECT
V$LOCKS_WITH_COLLISIONS
V$LOCK_ACTIVITY
V$LOCK_ELEMENT
V$LOCK_TYPE
V$LOG
V$LOGFILE
V$LOGHIST
V$LOGMNR_CONTENTS
V$LOGMNR_DICTIONARY
V$LOGMNR_DICTIONARY_LOAD
V$LOGMNR_LATCH
V$LOGMNR_LOGFILE
V$LOGMNR_LOGS
V$LOGMNR_PARAMETERS
V$LOGMNR_PROCESS
V$LOGMNR_SESSION
V$LOGMNR_STATS
V$LOGMNR_TRANSACTION
V$LOGSTDBY
V$LOGSTDBY_PROCESS
V$LOGSTDBY_PROGRESS
V$LOGSTDBY_STATE
V$LOGSTDBY_STATS
V$LOGSTDBY_TRANSACTION
V$LOG_HISTORY
42 rows selected.
M
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$M%';
TABLE_NAME
---------------------------------------------------------------------------
V$MANAGED_STANDBY
V$MAPPED_SQL
V$MAP_COMP_LIST
V$MAP_ELEMENT
V$MAP_EXT_ELEMENT
V$MAP_FILE
V$MAP_FILE_EXTENT
V$MAP_FILE_IO_STACK
V$MAP_LIBRARY
V$MAP_SUBELEMENT
V$MAX_ACTIVE_SESS_TARGET_MTH
V$MEMORY_CURRENT_RESIZE_OPS
V$MEMORY_DYNAMIC_COMPONENTS
V$MEMORY_RESIZE_OPS
V$MEMORY_TARGET_ADVICE
V$METRIC
V$METRICGROUP
V$METRICNAME
V$METRIC_HISTORY
V$MTTR_TARGET_ADVICE
V$MUTEX_SLEEP
V$MUTEX_SLEEP_HISTORY
V$MVREFRESH
V$MYSTAT
24 rows selected.
N
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$N%';
TABLE_NAME
---------------------------------------------------------------------------
V$NFS_CLIENTS
V$NFS_LOCKS
V$NFS_OPEN_FILES
V$NLS_PARAMETERS
V$NLS_VALID_VALUES
V$NONLOGGED_BLOCK
6 rows selected.
O
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$O%';
TABLE_NAME
---------------------------------------------------------------------------
V$OBJECT_DEPENDENCY
V$OBJECT_DML_FREQUENCIES
V$OBJECT_PRIVILEGE
V$OBJECT_USAGE
V$OBSOLETE_PARAMETER
V$OFFLINE_RANGE
V$OFSMOUNT
V$OFS_STATS
V$OPEN_CURSOR
V$OPTIMIZER_PROCESSING_RATE
V$OPTION
V$OSSTAT
12 rows selected.
P
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$P%';
TABLE_NAME
---------------------------------------------------------------------------
V$PARALLEL_DEGREE_LIMIT_MTH
V$PARAMETER
V$PARAMETER2
V$PARAMETER_VALID_VALUES
V$PATCHES
V$PDBS
V$PDB_INCARNATION
V$PERSISTENT_PUBLISHERS
V$PERSISTENT_QMN_CACHE
V$PERSISTENT_QUEUES
V$PERSISTENT_SUBSCRIBERS
V$PGASTAT
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$PING
V$POLICY_HISTORY
V$PQ_SESSTAT
V$PQ_SLAVE
V$PQ_SYSSTAT
V$PQ_TQSTAT
V$PROCESS
V$PROCESS_GROUP
V$PROCESS_MEMORY
V$PROCESS_MEMORY_DETAIL
V$PROCESS_MEMORY_DETAIL_PROG
V$PROPAGATION_RECEIVER
V$PROPAGATION_SENDER
V$PROXY_ARCHIVEDLOG
V$PROXY_ARCHIVELOG_DETAILS
V$PROXY_ARCHIVELOG_SUMMARY
V$PROXY_COPY_DETAILS
V$PROXY_COPY_SUMMARY
V$PROXY_DATAFILE
V$PWFILE_USERS
V$PX_BUFFER_ADVICE
V$PX_INSTANCE_GROUP
V$PX_PROCESS
V$PX_PROCESS_SYSSTAT
V$PX_PROCESS_TRACE
V$PX_SESSION
V$PX_SESSTAT
41 rows selected.
Q
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$Q%';
TABLE_NAME
---------------------------------------------------------------------------
V$QMON_COORDINATOR_STATS
V$QMON_SERVER_STATS
V$QMON_TASKS
V$QMON_TASK_STATS
V$QUEUE
V$QUEUEING_MTH
6 rows selected.
R
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$R%';
TABLE_NAME
---------------------------------------------------------------------------
V$RECOVERY_AREA_USAGE
V$RECOVERY_FILE_DEST
V$RECOVERY_FILE_STATUS
V$RECOVERY_LOG
V$RECOVERY_PROGRESS
V$RECOVERY_STATUS
V$RECOVER_FILE
V$REDO_DEST_RESP_HISTOGRAM
V$REPLAY_CONTEXT
V$REPLAY_CONTEXT_LOB
V$REPLAY_CONTEXT_SEQUENCE
V$REPLAY_CONTEXT_SYSDATE
V$REPLAY_CONTEXT_SYSGUID
V$REPLAY_CONTEXT_SYSTIMESTAMP
V$REPLPROP
V$REPLQUEUE
V$REQDIST
V$RESERVED_WORDS
V$RESOURCE
V$RESOURCE_LIMIT
V$RESTORE_POINT
V$RESTORE_RANGE
V$RESULT_CACHE_DEPENDENCY
V$RESULT_CACHE_MEMORY
V$RESULT_CACHE_OBJECTS
V$RESULT_CACHE_STATISTICS
V$RESUMABLE
V$RFS_THREAD
V$RMAN_BACKUP_JOB_DETAILS
V$RMAN_BACKUP_SUBJOB_DETAILS
V$RMAN_BACKUP_TYPE
V$RMAN_COMPRESSION_ALGORITHM
V$RMAN_CONFIGURATION
V$RMAN_ENCRYPTION_ALGORITHMS
V$RMAN_OUTPUT
V$RMAN_STATUS
V$ROLLNAME
V$ROLLSTAT
V$ROWCACHE
V$ROWCACHE_PARENT
V$ROWCACHE_SUBORDINATE
V$RO_USER_ACCOUNT
V$RSRCMGRMETRIC
V$RSRCMGRMETRIC_HISTORY
V$RSRC_CONSUMER_GROUP
V$RSRC_CONSUMER_GROUP_CPU_MTH
V$RSRC_CONS_GROUP_HISTORY
V$RSRC_PLAN
V$RSRC_PLAN_CPU_MTH
V$RSRC_PLAN_HISTORY
V$RSRC_SESSION_INFO
V$RT_ADDM_CONTROL
V$RULE
V$RULE_SET
V$RULE_SET_AGGREGATE_STATS
55 rows selected.
S
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$S%';
TABLE_NAME
---------------------------------------------------------------------------
V$SQL_REDIRECTION
V$SQL_PLAN
V$SQL_PLAN_STATISTICS
V$SQL_PLAN_STATISTICS_ALL
V$SQL_WORKAREA
V$SQL_WORKAREA_ACTIVE
V$SQL_WORKAREA_HISTOGRAM
V$SYS_OPTIMIZER_ENV
V$SES_OPTIMIZER_ENV
V$SQL_OPTIMIZER_ENV
V$SQLFN_METADATA
V$SQLFN_ARG_METADATA
V$STANDBY_LOG
V$SESSION
V$SESSION_LONGOPS
V$SESSTAT
V$SUBCACHE
V$SYSSTAT
V$STATNAME
V$SGA
V$SYSTEM_PARAMETER
V$SYSTEM_PARAMETER2
V$SPPARAMETER
V$SQLAREA
V$SQLAREA_PLAN_HASH
V$SQLTEXT
V$SQLTEXT_WITH_NEWLINES
V$SQL
V$SQL_SHARED_CURSOR
V$SHARED_SERVER_MONITOR
V$SGASTAT
V$SGAINFO
V$SHARED_SERVER
V$STATISTICS_LEVEL
V$SESSION_CURSOR_CACHE
V$SESSION_WAIT_CLASS
V$SESSION_WAIT
V$SESSION_WAIT_HISTORY
V$SESSION_BLOCKERS
V$SESSION_EVENT
V$SESSION_CONNECT_INFO
V$SYSTEM_WAIT_CLASS
V$SYSTEM_EVENT
V$SYSTEM_CURSOR_CACHE
V$SESS_IO
V$SHARED_POOL_RESERVED
V$SORT_SEGMENT
V$SORT_USAGE
V$SQL_CURSOR
V$SQL_BIND_METADATA
V$SQL_BIND_DATA
V$SQL_SHARED_MEMORY
V$SESSION_OBJECT_CACHE
V$STANDBY_EVENT_HISTOGRAM
V$SGA_TARGET_ADVICE
V$SEGMENT_STATISTICS
V$SEGSTAT_NAME
V$SEGSTAT
V$SHARED_POOL_ADVICE
V$STREAMS_POOL_ADVICE
V$SGA_CURRENT_RESIZE_OPS
V$SGA_RESIZE_OPS
V$SGA_DYNAMIC_COMPONENTS
V$SGA_DYNAMIC_FREE_MEMORY
V$SYSMETRIC
V$SYSMETRIC_HISTORY
V$SERVICE_WAIT_CLASS
V$SERVICE_EVENT
V$SERVICES
V$SYSMETRIC_SUMMARY
V$SESSMETRIC
V$SERVICEMETRIC
V$SERVICEMETRIC_HISTORY
V$SQLPA_METRIC
V$SQL_JOIN_FILTER
V$SQLSTATS
V$SQLSTATS_PLAN_HASH
V$SYSAUX_OCCUPANTS
V$SCHEDULER_RUNNING_JOBS
V$SUBSCR_REGISTRATION_STATS
V$SYSTEM_FIX_CONTROL
V$SESSION_FIX_CONTROL
V$SQL_FEATURE
V$SQL_FEATURE_HIERARCHY
V$SQL_FEATURE_DEPENDENCY
V$SQL_HINT
V$SQL_CS_HISTOGRAM
V$SQL_CS_SELECTIVITY
V$SQL_CS_STATISTICS
V$SQL_MONITOR
V$SQL_PLAN_MONITOR
V$SSCR_SESSIONS
V$SECUREFILE_TIMER
V$SQLCOMMAND
V$SERV_MOD_ACT_STATS
V$SERVICE_STATS
V$SYS_TIME_MODEL
V$SESS_TIME_MODEL
V$STREAMS_CAPTURE
V$STREAMS_APPLY_COORDINATOR
V$STREAMS_APPLY_SERVER
V$STREAMS_APPLY_READER
V$STREAMS_TRANSACTION
V$STREAMS_MESSAGE_TRACKING
V$STREAMS_POOL_STATISTICS
V$SQL_BIND_CAPTURE
V$SBT_RESTORE_RANGE
V$SEGSPACE_USAGE
V$SQL_REOPTIMIZATION_HINTS
V$SYS_REPORT_STATS
V$SYS_REPORT_REQUESTS
V$SESSIONS_COUNT
V$SCHEDULER_INMEM_RTINFO
V$SCHEDULER_INMEM_MDINFO
V$SQL_DIAG_REPOSITORY
V$SQL_DIAG_REPOSITORY_REASON
V$SQL_MONITOR_STATNAME
V$SQL_MONITOR_SESSTAT
118 rows selected.
T
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$T%';
TABLE_NAME
---------------------------------------------------------------------------
V$TABLESPACE
V$TEMPFILE
V$TEMPORARY_LOBS
V$TEMPSEG_USAGE
V$TEMPSTAT
V$TEMPUNDOSTAT
V$TEMP_CACHE_TRANSFER
V$TEMP_EXTENT_MAP
V$TEMP_EXTENT_POOL
V$TEMP_PING
V$TEMP_SPACE_HEADER
V$THREAD
V$THRESHOLD_TYPES
V$TIMER
V$TIMEZONE_FILE
V$TIMEZONE_NAMES
V$TOPLEVELCALL
V$TRANSACTION
V$TRANSACTION_ENQUEUE
V$TRANSPORTABLE_PLATFORM
V$TSDP_SUPPORTED_FEATURE
V$TSM_SESSIONS
V$TYPE_SIZE
23 rows selected.
U
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$U%';
TABLE_NAME
---------------------------------------------------------------------------
V$UNDOSTAT
V$UNIFIED_AUDIT_RECORD_FORMAT
V$UNIFIED_AUDIT_TRAIL
V$UNUSABLE_BACKUPFILE_DETAILS
V
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$V%';
TABLE_NAME
---------------------------------------------------------------------------
V$VERSION
V$VPD_POLICY
W
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$W%';
TABLE_NAME
---------------------------------------------------------------------------
V$WAITCLASSMETRIC
V$WAITCLASSMETRIC_HISTORY
V$WAITSTAT
V$WAIT_CHAINS
V$WALLET
V$WLM_PCMETRIC
V$WLM_PCMETRIC_HISTORY
V$WLM_PC_STATS
V$WORKLOAD_REPLAY_THREAD
9 rows selected.
X
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$X%';
TABLE_NAME
---------------------------------------------------------------------------
V$XML_AUDIT_TRAIL
V$XSTREAM_APPLY_COORDINATOR
V$XSTREAM_APPLY_READER
V$XSTREAM_APPLY_RECEIVER
V$XSTREAM_APPLY_SERVER
V$XSTREAM_CAPTURE
V$XSTREAM_MESSAGE_TRACKING
V$XSTREAM_OUTBOUND_SERVER
V$XSTREAM_TABLE_STATS
V$XSTREAM_TRANSACTION
V$XS_SESSIONS
V$XS_SESSION_NS_ATTRIBUTE
V$XS_SESSION_NS_ATTRIBUTES
V$XS_SESSION_ROLES
14 rows selected.
Y
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$Y%';
no rows selected
Z
SQL> select TABLE_NAME from DICTIONARY where TABLE_NAME like 'V$Z%';
no rows selected
======DBA Views============
* dba_all_tables
* dba_indexes
* dba_ind_partitions
* dba_ind_subpartitions
* dba_object_tables
* dba_part_col_statistics
* dba_subpart_col_statistics
* dba_tables
* dba_tab_cols
* dba_tab_columns
* dba_tab_col_statistics
* dba_tab_partitions
* dba_tab_subpartitions
* DBA_DB_LINKS - All DB links defined in the database
* ALL_DB_LINKS - All DB links the current user has access to
* USER_DB_LINKS - All DB links owned by the current user
e.g.
SELECT DB_LINK, USERNAME, HOST FROM ALL_DB_LINKS;
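For illustration, a private database link can be created and tested as sketched below; the link name, credentials and TNS alias are hypothetical placeholders, not objects from this environment.
-- Hypothetical example: create a private DB link and confirm it answers
CREATE DATABASE LINK remote_labdb
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'LABDB_TNS_ALIAS';
SELECT sysdate FROM dual@remote_labdb;
SELECT db_link, username, host FROM user_db_links WHERE db_link = 'REMOTE_LABDB';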
1. Know everything about any USER account in the Oracle database from dba_users/user_users/all_users
USERS
SQL> desc dba_users
Name Null? Type
--------------------------------------------- -------- ----------------------------
USERNAME NOT NULL VARCHAR2(128)
USER_ID NOT NULL NUMBER
PASSWORD VARCHAR2(4000)
ACCOUNT_STATUS NOT NULL VARCHAR2(32)
LOCK_DATE DATE
EXPIRY_DATE DATE
DEFAULT_TABLESPACE NOT NULL VARCHAR2(30)
TEMPORARY_TABLESPACE NOT NULL VARCHAR2(30)
CREATED NOT NULL DATE
PROFILE NOT NULL VARCHAR2(128)
INITIAL_RSRC_CONSUMER_GROUP VARCHAR2(128)
EXTERNAL_NAME VARCHAR2(4000)
PASSWORD_VERSIONS VARCHAR2(12)
EDITIONS_ENABLED VARCHAR2(1)
AUTHENTICATION_TYPE VARCHAR2(8)
PROXY_ONLY_CONNECT VARCHAR2(1)
VARCHAR2(1)
COMMON VARCHAR2(3)
LAST_LOGIN TIMESTAMP(9) WITH TIME ZONE
ORACLE_MAINTAINED VARCHAR2(1)
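As a small usage sketch (not tied to any particular account in these notes), dba_users is typically queried to spot locked or expired accounts:
-- Accounts that are not OPEN, with when/why they changed state
SELECT username, account_status, lock_date, expiry_date, profile, created
FROM   dba_users
WHERE  account_status <> 'OPEN'
ORDER  BY username;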
2. Know everything about TABLES in your ORACLE database from dba_tables/user_tables/all_tables
SQL> desc dba_tables
Name Null? Type
--------------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(128)
TABLE_NAME NOT NULL VARCHAR2(128)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(128)
IOT_NAME VARCHAR2(128)
STATUS VARCHAR2(8)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
FLASH_CACHE VARCHAR2(7)
CELL_FLASH_CACHE VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(128)
DEPENDENCIES VARCHAR2(8)
COMPRESSION VARCHAR2(8)
COMPRESS_FOR VARCHAR2(30)
DROPPED VARCHAR2(3)
READ_ONLY VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RESULT_CACHE VARCHAR2(7)
CLUSTERING VARCHAR2(3)
ACTIVITY_TRACKING VARCHAR2(23)
DML_TIMESTAMP VARCHAR2(25)
HAS_IDENTITY VARCHAR2(3)
CONTAINER_DATA VARCHAR2(3)
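A hedged example of how dba_tables is commonly used; the SCOTT owner below is a placeholder schema:
-- Largest tables in one schema, based on optimizer statistics
SELECT owner, table_name, num_rows, blocks, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT'   -- placeholder schema
ORDER  BY num_rows DESC NULLS LAST;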
3. Know everything about DATA FILES (users' data) in the Oracle database from dba_data_files (this view has no USER_/ALL_ counterparts)
SQL> desc dba_data_files
Name Null? Type
--------------------------------------------- -------- ----------------------------
FILE_NAME VARCHAR2(513)
FILE_ID NUMBER
TABLESPACE_NAME VARCHAR2(30)
BYTES NUMBER
BLOCKS NUMBER
STATUS VARCHAR2(9)
RELATIVE_FNO NUMBER
AUTOEXTENSIBLE VARCHAR2(3)
MAXBYTES NUMBER
MAXBLOCKS NUMBER
INCREMENT_BY NUMBER
USER_BYTES NUMBER
USER_BLOCKS NUMBER
ONLINE_STATUS VARCHAR2(7)
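A minimal sketch of a common dba_data_files query, summing allocated space per tablespace:
-- Allocated size and datafile count per tablespace, in GB
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb,
       COUNT(*)                            AS datafiles
FROM   dba_data_files
GROUP  BY tablespace_name
ORDER  BY size_gb DESC;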
4. Know everything about any DIRECTORY (an Oracle directory object mapped to an O/S (Linux) directory) in the Oracle database from dba_directories/all_directories (there is no USER_ version of this view)
SQL> desc dba_directories
Name Null? Type
--------------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(128)
DIRECTORY_NAME NOT NULL VARCHAR2(128)
DIRECTORY_PATH VARCHAR2(4000)
ORIGIN_CON_ID NUMBER
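A hedged example of creating a directory object and confirming it in dba_directories; the directory name and O/S path are hypothetical:
-- Requires the CREATE ANY DIRECTORY privilege; the path must exist on the DB server
CREATE OR REPLACE DIRECTORY dump_dir AS '/u01/app/oracle/dumps';
GRANT READ, WRITE ON DIRECTORY dump_dir TO system;
SELECT owner, directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'DUMP_DIR';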
-rwxr-x--- 1 oracle oinstall 2134 Apr 23 14:13 dbup.sh
-rw-r--r-- 1 oracle oinstall 13788 Sep 30 01:01 delete_archlogs.log
-rw-r--r-- 1 oracle oinstall 25378 Jun 23 18:00 delete_archlogs.log.201506231800
-rw-r--r-- 1 oracle oinstall 99409 Jun 23 19:00 delete_archlogs.log.2015062319
-rwxr-x--- 1 oracle oinstall 272 Jun 25 18:46 delete_archlogs.sh
-rw-r--r-- 1 oracle oinstall 25730 Apr 13 2010 dfltpass.sql
-rw-r--r-- 1 oracle oinstall 1478 Apr 13 2010 df.sql
-rw-r--r-- 1 oracle oinstall 790 Apr 13 2010 endbackup.sql
-rw-r--r-- 1 oracle oinstall 1771 Apr 13 2010 g2b.sql
-rw-r--r-- 1 oracle oinstall 208 Apr 15 2010 gen_user_default.sql
-rwxr-x--- 1 oracle oinstall 878 Apr 23 14:13 grant_readonly.sh
-rw-r--r-- 1 oracle oinstall 110971 Sep 2 2010 grant_readonly.sql
-rw-r--r-- 1 oracle oinstall 15091 Apr 13 2010 hardening.sql
-rw-r--r-- 1 oracle oinstall 212 Apr 13 2010 invalid_objects.sql
-rwxr-x--- 1 oracle oinstall 1637 Apr 23 14:13 ldf.sh
-rw-r--r-- 1 oracle oinstall 468 May 1 17:16 metrics_database_resource.txt
-rwxr-x--- 1 oracle oinstall 5235 Apr 23 14:13 nb_arc_backup_BASSD.sh
-rw-rw-rw- 1 root root 3947 Apr 23 19:44 nb_arc_backup_BASSD.sh.out
-rwxr-x--- 1 oracle oinstall 5235 Apr 23 14:13 nb_arc_backup_DBSID.sh
-rwxr-x--- 1 oracle oinstall 11546 Apr 23 14:13 nb_hot_backup_BASSD.sh
-rwxr-x--- 1 oracle oinstall 11838 Apr 22 17:04 nb_hot_backup_BASSD.sh.old
-rw-rw-rw- 1 root root 3082 Apr 24 11:24 nb_hot_backup_BASSD.sh.out
-rwxr-x--- 1 oracle oinstall 11546 Apr 23 14:13 nb_hot_backup_DBSID.sh
-rw-r--r-- 1 oracle oinstall 52463 Apr 13 2010 Oracle_test_script_v4.sql
-rw-r--r-- 1 oracle oinstall 4550 Apr 15 2010 recompile_invalid_objects.sql
-rw-r--r-- 1 oracle oinstall 354 Apr 13 2010 redo_logs.sql
-rw-r--r-- 1 oracle oinstall 12374 Sep 3 22:52 rmanbackup_BASSD_disk_full_keep.log
-rwxr-x--- 1 oracle oinstall 478 May 21 15:00 rmanbackup_BASSD_disk_full_keep.sh
-rw-r--r-- 1 oracle oinstall 7722 Sep 27 00:48 rmanbackup_BASSD_disk_full.log
-rwxr-x--- 1 oracle oinstall 654 Jun 24 15:16 rmanbackup_BASSD_disk_full.sh
-rw-r--r-- 1 oracle oinstall 12102 Sep 30 01:18 rmanbackup_BASSD_disk_inc.log
-rwxr-x--- 1 oracle oinstall 556 Jun 24 15:15 rmanbackup_BASSD_disk_inc.sh
-rwxr-x--- 1 oracle oinstall 1258 Apr 23 15:56 rmanbackup_BASSD_sbt.sh
-rwxr-x--- 1 oracle oinstall 1250 Apr 23 14:13 rmanbackup_DBSID_sbt.sh
-rwxr-x--- 1 oracle oinstall 469 Apr 23 14:13 rmanbackup_log.sh
-rwxr-x--- 1 oracle oinstall 699 Apr 23 14:13 rmanbackup.sh
-rw-r--r-- 1 oracle oinstall 31228 Apr 13 2010 rpt_db_tuning.sql
-rw-r--r-- 1 oracle oinstall 22293 Apr 23 15:01 rpt_hardening_BASSD_150423.txt
-rw-r--r-- 1 oracle oinstall 10592 Aug 1 2011 rpt_hardening.sql
-rw-r--r-- 1 oracle oinstall 14884 Apr 13 2010 rpt_Oracle_Hardening.sql
-rw-r--r-- 1 oracle oinstall 70858 Apr 13 2010 rpt_scanner.sql
-rw-r--r-- 1 oracle oinstall 7304 Apr 13 2010 rpt_SLA_security.sql
-rw-r--r-- 1 oracle oinstall 2998 Apr 13 2010 rpt_user_audit.sql
-rw-r--r-- 1 oracle oinstall 3329 Apr 13 2010 rpt_user_privs.sql
-rw-r--r-- 1 oracle oinstall 270 Apr 13 2010 sh_active_locks.sql
-rw-r--r-- 1 oracle oinstall 465 Apr 13 2010 sh_active_sessions.sql
-rw-r--r-- 1 oracle oinstall 630 Apr 13 2010 sh_actwaits.sql
-rw-r--r-- 1 oracle oinstall 3499 Apr 13 2010 sh_all_sessions2.sql
-rw-r--r-- 1 oracle oinstall 665 Apr 13 2010 sh_all_sessions.sql
-rw-r--r-- 1 oracle oinstall 549 Apr 13 2010 sh_arch_hist.sql
-rw-r--r-- 1 oracle oinstall 721 Apr 13 2010 sh_db_links2.sql
-rw-r--r-- 1 oracle oinstall 300 Apr 13 2010 sh_db_links.sql
-rw-r--r-- 1 oracle oinstall 592 Apr 13 2010 sh_dependency.sql
-rw-r--r-- 1 oracle oinstall 1478 Apr 13 2010 sh_df.sql
-rw-r--r-- 1 oracle oinstall 612 Apr 13 2010 sh_disk.sql
-rw-r--r-- 1 oracle oinstall 462 Apr 13 2010 sh_free_mem.sql
-rw-r--r-- 1 oracle oinstall 503 Apr 13 2010 sh_hit_ratio.sql
-rw-r--r-- 1 oracle oinstall 487 Apr 13 2010 sh_invalid_index.sql
-rw-r--r-- 1 oracle oinstall 213 Apr 13 2010 sh_invalid_objects.sql
-rw-r--r-- 1 oracle oinstall 242 Apr 13 2010 sh_invalid.sql
-rw-r--r-- 1 oracle oinstall 823 Apr 13 2010 sh_iowaits.sql
-rw-r--r-- 1 oracle oinstall 1218 Apr 13 2010 sh_jobs.sql
-rw-r--r-- 1 oracle oinstall 478 Apr 13 2010 sh_part.sql
-rw-r--r-- 1 oracle oinstall 596 Apr 13 2010 sh_redo_logs.sql
-rw-r--r-- 1 oracle oinstall 774 Apr 13 2010 sh_resource_limits.sql
-rw-r--r-- 1 oracle oinstall 212 May 27 17:49 sh_rp.sql
-rw-r--r-- 1 oracle oinstall 897 Apr 13 2010 sh_sch_jobs.sql
-rw-r--r-- 1 oracle oinstall 1540 Apr 13 2010 sh_seg_extents.sql
-rw-r--r-- 1 oracle oinstall 373 Apr 13 2010 sh_sqlarea.sql
-rw-r--r-- 1 oracle oinstall 442 Apr 13 2010 sh_tab_analyzed.sql
-rw-r--r-- 1 oracle oinstall 450 Apr 13 2010 sh_temp_usage.sql
-rw-r--r-- 1 oracle oinstall 5348 Jun 16 21:01 sh_tsdf.sql
-rw-r--r-- 1 oracle oinstall 2658 Aug 1 2011 sh_ts.sql
-rw-r--r-- 1 oracle oinstall 1631 Apr 13 2010 sh_tss.sql
-rw-r--r-- 1 oracle oinstall 1568 Apr 13 2010 sh_tsss.sql
-rw-r--r-- 1 oracle oinstall 501 Apr 13 2010 sh_undo_usage.sql
-rw-r--r-- 1 oracle oinstall 1830 Apr 13 2010 sh_user_mem.sql
-rw-r--r-- 1 oracle oinstall 1293 Apr 13 2010 sh_user_privs.sql
-rw-r--r-- 1 oracle oinstall 151 Apr 13 2010 sh_user_sql.sql
-rw-r--r-- 1 oracle oinstall 1237 Apr 13 2010 sh_users_roles.sql
-rw-r--r-- 1 oracle oinstall 556 May 20 14:41 sh_users.sql
-rw-r--r-- 1 oracle oinstall 1326 Apr 13 2010 sh_waits.sql
-rw-r--r-- 1 oracle oinstall 596 Apr 13 2010 sys_event.sql
-rw-r--r-- 1 oracle oinstall 7510 Jul 9 12:52 tsdf_BASSD.txt
-rw-r--r-- 1 oracle oinstall 5245 Apr 13 2010 tsdf.sql
-rw-r--r-- 1 oracle oinstall 681 Jun 16 20:21 tsdf_.txt
-rw-r--r-- 1 oracle oinstall 1631 Apr 13 2010 tss.sql
-rw-r--r-- 1 oracle oinstall 9667 Sep 16 14:37 users_BASSD.txt
-rw-r--r-- 1 oracle oinstall 5099 Apr 13 2010 utlpwdmg.sql
[kenneth.chando@d2asedvic004 ~]$ crontab -l
You (kenneth.chando) are not allowed to use this program (crontab)
See crontab(1) for more information
[kenneth.chando@d2asedvic004 ~]$ sudo su - oracle
oracle@d2asedvic004[BASSD]# crontab -l
#30 22 03 9 * /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full_keep.sh > /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full_keep.log 2>&1
#30 22 08 9 * /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full.sh > /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full.log 2>&1
59 23 * * 6 /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full.sh > /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full.log 2>&1
59 23 * * 1-5 /u01/app/oracle/scripts/rmanbackup_BASSD_disk_inc.sh > /u01/app/oracle/scripts/rmanbackup_BASSD_disk_inc.log 2>&1
01 1,5,9,13,17,21 * * * /u01/app/oracle/scripts/delete_archlogs.sh > /u01/app/oracle/scripts/delete_archlogs.log 2>&1
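For reference, the cron fields above read minute, hour, day-of-month, month, day-of-week, then the command; a short annotated sketch of this schedule:
# 59 23 * * 6             -> 23:59 every Saturday     : weekly full disk backup
# 59 23 * * 1-5           -> 23:59 Monday-Friday      : nightly incremental backup
# 01 1,5,9,13,17,21 * * * -> minute 01, every 4 hours : archived-log cleanup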
oracle@d2asedvic004[BASSD]# cat /u01/app/oracle/scripts/rmanbackup_BASSD_disk_full.log
Backup Starting at Sat Sep 26 23:59:01 UTC 2015
stty: standard input: Invalid argument
Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 26 23:59:01 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: BASSD (DBID=3461595491)
RMAN>
Starting backup at 26-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1021 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/u01/app/oradata/BASSD/BASS_DATA_TBLSPC.DAT
input datafile file number=00008 name=/u01/app/oradata/BASSD/BASS_STG_TBLSPC.DAT
input datafile file number=00004 name=/u01/app/oradata/BASSD/undotbs01.dbf
input datafile file number=00007 name=/u01/app/oradata/BASSD/BASS_INDEX_TBLSPC.DAT
input datafile file number=00009 name=/u01/app/oradata/BASSD/EPM_TBSP_DATA.dat
input datafile file number=00010 name=/u01/app/oradata/BASSD/OBIEE_REP.dat
input datafile file number=00011 name=/u01/app/oradata/BASSD/OBI_REP.dat
input datafile file number=00012 name=/u01/app/oradata/BASSD/ODI_TBSP.dat
input datafile file number=00013 name=/u01/app/oradata/BASSD/FDM_TBSP.dat
input datafile file number=00014 name=/u01/app/oradata/BASSD/CBM_EPM_TBSP_DATA.dat
input datafile file number=00003 name=/u01/app/oradata/BASSD/sysaux01.dbf
input datafile file number=00001 name=/u01/app/oradata/BASSD/system01.dbf
input datafile file number=00005 name=/u01/app/oradata/BASSD/INFA_TBLSPC.DAT
input datafile file number=00006 name=/u01/app/oradata/BASSD/users01.dbf
channel ORA_DISK_1: starting piece 1 at 26-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=/u01/app/FRA/backup/db_BASSD_3461595491_1131_1.bkup tag=BASSD_WEEKLY_FULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:48:05
Finished backup at 27-SEP-15
Starting Control File and SPFILE Autobackup at 27-SEP-15
piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_27/o1_mf_s_891478046_c0gh4zf8_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 27-SEP-15
RMAN>
Starting backup at 27-SEP-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=88534 RECID=88756 STAMP=891454117
input archived log thread=1 sequence=88535 RECID=88757 STAMP=891454270
input archived log thread=1 sequence=88536 RECID=88758 STAMP=891468712
input archived log thread=1 sequence=88537 RECID=88759 STAMP=891468721
input archived log thread=1 sequence=88538 RECID=88760 STAMP=891468727
input archived log thread=1 sequence=88539 RECID=88761 STAMP=891476058
input archived log thread=1 sequence=88540 RECID=88762 STAMP=891476062
input archived log thread=1 sequence=88541 RECID=88763 STAMP=891478051
channel ORA_DISK_1: starting piece 1 at 27-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=/u01/app/FRA/backup/log_BASSD_3461595491_1133_1.bkup tag=BASSD_WEEKLY_FULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_26/o1_mf_1_88534_c0fqs4wf_.arc RECID=88756 STAMP=891454117
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_26/o1_mf_1_88535_c0fqxy1z_.arc RECID=88757 STAMP=891454270
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_26/o1_mf_1_88536_c0g61831_.arc RECID=88758 STAMP=891468712
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_26/o1_mf_1_88537_c0g61jqz_.arc RECID=88759 STAMP=891468721
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_26/o1_mf_1_88538_c0g61q4p_.arc RECID=88760 STAMP=891468727
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_27/o1_mf_1_88539_c0gf6syb_.arc RECID=88761 STAMP=891476058
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_27/o1_mf_1_88540_c0gf6y00_.arc RECID=88762 STAMP=891476062
archived log file name=/u01/app/FRA/BASSD/archivelog/2015_09_27/o1_mf_1_88541_c0gh52wx_.arc RECID=88763 STAMP=891478051
Finished backup at 27-SEP-15
Starting Control File and SPFILE Autobackup at 27-SEP-15
piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_27/o1_mf_s_891478058_c0gh5ccz_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 27-SEP-15
RMAN>
Starting backup at 27-SEP-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 27-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=/u01/app/FRA/backup/cf_BASSD_3fqi5o1f_1_1.bkup tag=BASSD_WEEKLY_FULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 27-SEP-15
Starting Control File and SPFILE Autobackup at 27-SEP-15
piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_27/o1_mf_s_891478065_c0gh5kv9_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 27-SEP-15
RMAN>
released channel: ORA_DISK_1
allocated channel: ORA_MAINT_DISK_1
channel ORA_MAINT_DISK_1: SID=1021 device type=DISK
RMAN>
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 1 days
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set 1092 26-SEP-15
Backup Piece 1092 26-SEP-15 /u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391266_c0ctf465_.bkp
Backup Set 1094 26-SEP-15
Backup Piece 1094 26-SEP-15 /u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391298_c0ctg47z_.bkp
Backup Set 1095 26-SEP-15
Backup Piece 1095 26-SEP-15 /u01/app/FRA/backup/cf_BASSD_39qi33a7_1_1.bkup
deleted backup piece
backup piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391266_c0ctf465_.bkp RECID=1092 STAMP=891391268
deleted backup piece
backup piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391298_c0ctg47z_.bkp RECID=1094 STAMP=891391300
deleted backup piece
backup piece handle=/u01/app/FRA/backup/cf_BASSD_39qi33a7_1_1.bkup RECID=1095 STAMP=891391304
Deleted 3 objects
RMAN>
List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
1090 1090 1 1 AVAILABLE DISK /u01/app/FRA/backup/db_BASSD_3461595491_1124_1.bkup
1091 1091 1 1 AVAILABLE DISK /u01/app/FRA/backup/db_BASSD_3461595491_1125_1.bkup
1093 1093 1 1 AVAILABLE DISK /u01/app/FRA/backup/log_BASSD_3461595491_1127_1.bkup
1096 1096 1 1 AVAILABLE DISK /u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391305_c0ctgbnx_.bkp
deleted backup piece
backup piece handle=/u01/app/FRA/backup/db_BASSD_3461595491_1124_1.bkup RECID=1090 STAMP=891388751
deleted backup piece
backup piece handle=/u01/app/FRA/backup/db_BASSD_3461595491_1125_1.bkup RECID=1091 STAMP=891391191
deleted backup piece
backup piece handle=/u01/app/FRA/backup/log_BASSD_3461595491_1127_1.bkup RECID=1093 STAMP=891391272
deleted backup piece
backup piece handle=/u01/app/FRA/BASSD/autobackup/2015_09_26/o1_mf_s_891391305_c0ctgbnx_.bkp RECID=1096 STAMP=891391306
Deleted 4 objects
RMAN>
released channel: ORA_MAINT_DISK_1
RMAN>
Recovery Manager complete.
Backup Completed at Sun Sep 27 00:48:05 UTC 2015
oracle@d2asedvic004[BASSD]#
Cronjob Script for BASST(Omer’s)
/u01/app/oracle/scripts/rmanbackup_BASST_disk_full_keep.sh >/u01/app/oracle/scripts/rmanbackup_BASST_disk_full_keep.log 2>&1
oracle@d2asetsic002[BASST]# cat /u01/app/oracle/scripts/rmanbackup_BASST_disk_full_keep.sh
SCRIPT
#!/bin/ksh
echo "Backup Starting at `date`"
. $HOME/.profile
rman target=/ << EOF
BACKUP device type disk format '/u01/app/FRA/backup/keep/db_%d_%I_%s_%p.bkup' tag BASST_keep_full database;
backup device type disk format '/u01/app/FRA/backup/keep/log_%d_%I_%s_%p.bkup' tag BASST_keep_full archivelog all not backed up;
backup device type disk format '/u01/app/FRA/backup/keep/cf_%d_%U.bkup' tag BASST_keep_full current controlfile;
EXIT;
EOF
echo "Backup Completed at `date`"
oracle@d2asetsic002[BASST]#
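After a run of a script like this, the resulting backups can be sanity-checked from RMAN; a minimal sketch using standard commands against the same target:
rman target=/ << EOF
LIST BACKUP SUMMARY;
REPORT NEED BACKUP;
CROSSCHECK BACKUP;
EXIT;
EOF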
SET PAGESIZE 2000 LINESIZE 150
TTITLE CENTER 'DISK SPACE USAGE' SKIP 1
col TOTAL format 999999999.99 heading "Allocated Space|(GB)"
col USED format 999999999.99 heading "Used Space|(GB)"
col FREE format 999999999.99 heading "Free Space|(GB)"
col PERCENT_USED format 99999.99 heading "Space Usage|(%)"
BREAK ON REPORT
COMPUTE SUM LABEL "TOTAL DB SIZE:" OF USED ON REPORT;
SELECT a.tablespace_name,
ROUND(a.bytes/1024/1024/1024,2) TOTAL,
ROUND(NVL( b.bytes,0)/1024/1024/1024,2) USED,
ROUND(NVL(c.bytes, 0)/1024/1024/1024,2) FREE,
ROUND(NVL(b.bytes,0)*100/NVL(a.bytes,0),2) AS PERCENT_USED
FROM SYS.SM$TS_AVAIL a, SYS.SM$TS_USED b, SYS.SM$TS_FREE c
WHERE a.tablespace_name= b.tablespace_name(+)
AND b.tablespace_name = c.tablespace_name(+)
ORDER BY 5 DESC;
SIZE(%)
SET LINESIZE 200 PAGESIZE 2000
COL DATABASE_SIZE FOR A25
COL FREE_SPACE FOR A15
COL USED_SPACE FOR A15
SELECT ROUND (SUM (used.bytes) / 1024 / 1024 / 1024) || ' GB' "DATABASE_SIZE",
ROUND (SUM (used.bytes) / 1024 / 1024 / 1024) -
ROUND (free.p / 1024 / 1024 / 1024) || ' GB' "USED_SPACE",
ROUND (free.p / 1024 / 1024 / 1024) || ' GB' "FREE_SPACE"
FROM (
SELECT SUM (bytes) AS bytes
FROM v$datafile
UNION ALL
SELECT SUM (bytes) AS bytes
FROM v$tempfile
UNION ALL
SELECT SUM (bytes) AS bytes
FROM v$log
) used,
(SELECT SUM (bytes) AS p FROM dba_free_space ) free
GROUP BY free.p;
Db file_rename
set timing off
set feedback off
set echo off
set linesize 200
set pagesize 0
spool c:TXSTST3_db_file_rename
select 'mv ' || name || ' ' || replace(name,'/txstin1','/txstst3') ksh_cmd
from v$datafile order by name;
select 'mv ' || member || ' ' || replace(member,'/txstin1','/txstst3') as ksh_cmd
from v$logfile order by member;
select 'mv ' || name || ' ' || replace(name,'/temp/temp','/temp/txstst3temp') as ksh_cmd
from v$tempfile order by name;
select 'alter database rename file ''' || name || ''' to ''' || replace(name,'/txstin1','/txstst3') || ''';' as sql_cmd
from v$datafile order by name;
select 'alter database rename file ''' || member || ''' to ''' || replace(member,'/txstin1','/txstst3') || ''';' as sql_cmd
from v$logfile order by member;
select 'alter database rename file ''' || name || ''' to ''' || replace(name,'/temp/temp','/temp/txstst3temp') || ''';' as sql_cmd
from v$tempfile order by name;
spool off
db_cache_ratios
VAR val1 NUMBER;
COL val1 FOR 999999999.99
EXEC SELECT 100*(1-(SUM(Reloads)/SUM(Pins))) val1 INTO :val1 FROM V$LIBRARYCACHE;
VAR val2 NUMBER;
COL val2 FOR 999999999.99
EXEC SELECT 100*(1-(SUM(Getmisses)/SUM(Gets))) val2 INTO :val2 FROM V$ROWCACHE;
VAR val3 NUMBER;
COL val3 FOR 999999999.99
EXEC SELECT value val3 INTO :val3 FROM V$SYSSTAT WHERE Name = 'physical reads';
VAR val4 NUMBER;
COL val4 FOR 999999999.99
EXEC SELECT value val4 INTO :val4 FROM V$SYSSTAT WHERE Name = 'db block gets';
VAR val5 NUMBER;
COL val5 FOR 999999999.99
EXEC SELECT value val5 INTO :val5 FROM V$SYSSTAT WHERE Name = 'consistent gets';
VAR val6 NUMBER;
COL val6 FOR 999999999.99
EXEC SELECT ((1-(:val3/(:val4 + :val5)))*100) val6 INTO :val6 FROM DUAL;
VAR val7 NUMBER;
COL val7 FOR 999999999.99
VAR val8 NUMBER;
COL val8 FOR 999999999.99
EXEC SELECT SUM(Users_Opening)/COUNT(*) val7 , SUM(Executions)/COUNT(*) val8 INTO :val7, :val8 FROM V$SQLAREA;
SELECT 'Data Block Buffer Hit Ratio : '|| :val6,
' Shared SQL Pool ',
' Dictionary Hit Ratio : '|| :val2,
' Shared SQL Buffers (Library Cache) ',
' Cache Hit Ratio : '|| :val1,
' Avg. Users/Stmt : '|| :val7,
' Avg. Executes/Stmt : '|| :val8
FROM DUAL;
ENABLE/DISABLE DATAGUARD
*********************
* DISABLE Dataguard
*********************
On Standby DB:
------------------
i) SQL> alter database recover managed standby database cancel;
ii) SQL> alter system set log_archive_dest_state_2=DEFER SCOPE=BOTH;
On Primary DB:
---------------------
i) SQL> alter system set log_archive_dest_state_2=DEFER SCOPE=BOTH SID='*';
*********************
* ENABLE Dataguard
*********************
On Primary DB:
---------------------
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=BOTH SID='*';
On Standby DB:
---------------------
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=BOTH;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
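To verify the state after enabling or disabling, a hedged check on both sides (standard views; dest_id 2 matches log_archive_dest_2 used above):
-- On the primary: is destination 2 VALID and shipping without error?
SELECT dest_id, status, error FROM v$archive_dest_status WHERE dest_id = 2;
-- On the standby: is managed recovery (MRP0) running, and what is the apply lag?
SELECT process, status, sequence# FROM v$managed_standby;
SELECT name, value FROM v$dataguard_stats WHERE name = 'apply lag';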
FIND CONTENTION in Database:
Finding the contention
When a session is waiting on this event, an entry will be seen in the v$session_wait system view giving more information on the blocks being waited for:
SELECT p1 "file#", p2 "block#", p3 "class#"
FROM v$session_wait
WHERE event = 'read by other session';
If information collected from the above query repeatedly shows that the same block, (or range of blocks), is experiencing waits, this indicates a "hot" block or object. The following query will give the name and type of the object:
SELECT relative_fno, owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks - 1;
Eliminating contention
Depending on the database environment and specific performance situation the following variety of methods can be used to eliminate contention:
Tune inefficient queries - This is one of those events you need to "catch in the act" through the v$session_wait view as prescribed above. Then, because this is ultimately an operating-system disk I/O issue, take the associated server process identifier (the SPID from v$process) and see what information can be obtained from the operating system.
Redistribute data from the hot blocks –deleting and reinserting the hot rows will often move them to a new data block. This will help decrease contention for the hot block and increase performance. More information about the data residing within the hot blocks can be retrieved with queries similar to the following:
SELECT data_object_id
FROM dba_objects
WHERE owner='&owner' AND object_name='&object';
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,0) start_rowid
FROM dual;
--rowid for the first row in the block
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,500) end_rowid
FROM dual;
--rowid for the 500th row in the block
SELECT <column_list>
FROM <owner>.<segment_name>
WHERE rowid BETWEEN <start_rowid> AND <end_rowid>;
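A worked version of the ROWID construction with purely illustrative values (data_object_id 51234, relative_fno 4, block 1000):
-- ROWIDs for the first and 500th row slots in the hot block
SELECT dbms_rowid.rowid_create(1, 51234, 4, 1000, 0)   AS start_rowid,
       dbms_rowid.rowid_create(1, 51234, 4, 1000, 500) AS end_rowid
FROM   dual;
-- Substitute the two values returned above into the BETWEEN predicate
-- of the final SELECT against <owner>.<segment_name>.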
CHECK SYS_PRIVS:
SET LINESIZE 180 PAGESIZE 2000
COL TABLE_NAME FOR A30
COL GRANTEE FOR A20
COL PRIVILEGE FOR A25
COL OWNER FOR A20
SELECT GRANTEE, TABLE_NAME, OWNER, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE substr(privilege,1,20) LIKE '%EXECUTE%'
AND substr(table_name,1,35) LIKE 'UTL_%'
AND GRANTEE NOT IN ( 'EXECUTE_CATALOG_ROLE', 'SYS', 'SYSTEM', 'GATHER_SYSTEM_STATISTICS','DBA', 'SELECT_CATALOG_ROLE', 'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE')
ORDER BY TABLE_NAME, GRANTEE
/
CHECK STATISTICS:
SELECT DTM.TABLE_OWNER,
DTM.TABLE_NAME,
DTM.PARTITION_NAME,
ROUND ( (DTM.INSERTS + DTM.UPDATES + DTM.DELETES) / DT.NUM_ROWS,2) * 100 "CHANGE_FACTOR",
DT.PARTITIONED,
DT.NUM_ROWS
FROM SYS.DBA_TAB_MODIFICATIONS DTM,
DBA_TABLES DT
WHERE DTM.TABLE_OWNER = DT.OWNER
AND DTM.TABLE_NAME = DT.TABLE_NAME
AND NOT DTM.TABLE_OWNER IN ('SYS','SYSTEM', 'DBSNMP', 'OUTLN')
AND NOT DT.NUM_ROWS IS NULL
AND IOT_TYPE IS NULL
AND ( (DT.PARTITIONED = 'YES' AND NOT DTM.PARTITION_NAME IS NULL)
OR (DT.PARTITIONED = 'NO' AND DTM.PARTITION_NAME IS NULL))
AND NOT (DTM.TABLE_OWNER, DTM.TABLE_NAME) IN (SELECT DTS.OWNER, DTS.TABLE_NAME FROM DBA_TAB_STATISTICS DTS WHERE DTS.STATTYPE_LOCKED ='ALL')
AND NOT (DTM.TABLE_OWNER, DTM.TABLE_NAME) IN (SELECT DET.OWNER, DET.TABLE_NAME FROM DBA_EXTERNAL_TABLES DET)
AND DTM.TABLE_OWNER='DWADM'
ORDER BY 2;
SELECT * FROM DBA_TAB_STATISTICS
WHERE STALE_STATS='YES'
AND NOT OWNER IN ('SYSTEM', 'DBSNMP','SYS');
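When these queries flag stale statistics, the usual follow-up is DBMS_STATS; a minimal sketch (the DWADM owner comes from the query above, the table name is a placeholder):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DWADM',
    tabname          => 'SOME_TABLE',   -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/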
CHECK ROLE_PRIVS
The privileges to look for are the system privileges CREATE USER, DROP USER and, possibly, ALTER USER. These privileges might have been granted directly to a user or via a role.
This query checks and shows both:
select grantee, 'DIRECTLY' type, privilege priv_or_role
from dba_sys_privs
where privilege in ('CREATE USER', 'DROP USER', 'ALTER USER')
and grantee in (select username from dba_users)
UNION
select grantee, 'VIA ROLE' type, granted_role priv_or_role
from dba_role_privs
where granted_role in (
select grantee
from dba_sys_privs
where privilege in ('CREATE USER', 'DROP USER', 'ALTER USER')
and grantee in (select role from dba_roles)
);
CHECK MEMORY_USAGE:
set serveroutput on
DECLARE
libcac NUMBER(10,2);
rowcac NUMBER(10,2);
bufcac NUMBER(10,2);
redlog NUMBER(10,2);
spsize NUMBER;
blkbuf NUMBER;
logbuf NUMBER;
BEGIN
SELECT VALUE INTO redlog FROM v$sysstat
WHERE name = 'redo log space requests';
SELECT 100*(SUM(pins)-SUM(reloads))/SUM(pins) INTO libcac FROM v$librarycache;
SELECT 100*(SUM(gets)-SUM(getmisses))/SUM(gets) INTO rowcac FROM v$rowcache;
SELECT 100*(cur.VALUE + con.VALUE - phys.VALUE)/(cur.VALUE + con.VALUE) INTO bufcac
FROM v$sysstat cur,v$sysstat con,v$sysstat phys,v$statname ncu,v$statname nco,v$statname nph
WHERE cur.statistic# = ncu.statistic#
AND ncu.name = 'db block gets'
AND con.statistic# = nco.statistic#
AND nco.name = 'consistent gets'
AND phys.statistic# = nph.statistic#
AND nph.name = 'physical reads';
SELECT VALUE INTO spsize FROM v$parameter WHERE name = 'shared_pool_size';
SELECT VALUE INTO blkbuf FROM v$parameter WHERE name = 'db_block_buffers';
SELECT VALUE INTO logbuf FROM v$parameter WHERE name = 'log_buffer';
DBMS_OUTPUT.put_line('> SGA CACHE STATISTICS');
DBMS_OUTPUT.put_line('> ********************');
DBMS_OUTPUT.put_line('> SQL Cache Hit rate = '||libcac);
DBMS_OUTPUT.put_line('> Dict Cache Hit rate = '||rowcac);
DBMS_OUTPUT.put_line('> Buffer Cache Hit rate = '||bufcac);
DBMS_OUTPUT.put_line('> Redo Log space requests = '||redlog);
DBMS_OUTPUT.put_line('> ');
DBMS_OUTPUT.put_line('> INIT.ORA SETTING');
DBMS_OUTPUT.put_line('> ****************');
DBMS_OUTPUT.put_line('> Shared Pool Size = '||spsize||' Bytes');
DBMS_OUTPUT.put_line('> DB Block Buffer = '||blkbuf||' Blocks');
DBMS_OUTPUT.put_line('> Log Buffer = '||logbuf||' Bytes');
DBMS_OUTPUT.put_line('> ');
IF
libcac < 99 THEN DBMS_OUTPUT.put_line('*** HINT: Library Cache too low! Increase the Shared Pool Size.');
END IF;
IF
rowcac < 85 THEN DBMS_OUTPUT.put_line('*** HINT: Row Cache too low! Increase the Shared Pool Size.');
END IF;
IF
bufcac < 90 THEN DBMS_OUTPUT.put_line('*** HINT: Buffer Cache too low! Increase the DB Block Buffer value.');
END IF;
IF
redlog > 100 THEN DBMS_OUTPUT.put_line('*** HINT: Log Buffer value is rather low!');
END IF;
END;
/
BLOCKING_SESSIONS:
Prompt ============ Display the BLOCKING SESSION =============
SELECT B.SID,
B.SQL_ID,
B.USERNAME,
B.MACHINE
FROM V$SESSION B
WHERE B.SID IN (SELECT DISTINCT blocker FROM (select a.sid blocker, 'is blocking the session ', b.sid blockee
FROM v$lock a, v$lock b
WHERE a.block =1
AND b.request > 0
AND a.id1=b.id1
AND a.id2=b.id2)
);
Prompt ============ Display the BLOCKED SESSION =============
SELECT
S.SID,
S.SQL_ID,
S.USERNAME,
S.MACHINE
FROM V$SESSION S
WHERE S.SID IN (SELECT DISTINCT blockee FROM (select a.sid blocker, 'is blocking the session ', b.sid blockee
FROM v$lock a, v$lock b
WHERE a.block =1
AND b.request > 0
AND a.id1=b.id1
AND a.id2=b.id2)
);
BLOCKING SESSIONS:
set pagesize 14000 linesize 170
select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2
order by s1.machine, s2.machine;
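Once the blocker's SID and SERIAL# are known from the queries above, the usual (careful) next step is to kill that session; the values below are placeholders:
-- Substitute the blocking session's SID and SERIAL#
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;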
ACTIVE USERS:
A script such as the following works well. It shows who is logged in and active and, if active, the statement they are executing (the "last et" value shows how long that statement has been running). As written, it shows only SQL that is executing right now; change the predicate from "where status = 'ACTIVE'" to "where status = status" if you also want to see the last statement executed by idle sessions (in which case "last et" shows how long the session has been idle, not how long that statement took to execute):
column status format a10
set feedback off
set serveroutput on
select username, sid, serial#, process, status
from v$session
where username is not null
/
column username format a20
column sql_text format a55 word_wrapped
set serveroutput on size 1000000
begin
for x in
( select username||'('||sid||','||serial#||
') ospid = ' || process ||
' program = ' || program username,
to_char(LOGON_TIME,' Day HH24:MI') logon_time,
to_char(sysdate,' Day HH24:MI') current_time,
sql_address, LAST_CALL_ET
from v$session
where status = 'ACTIVE'
and rawtohex(sql_address) <> '00'
and username is not null order by last_call_et )
loop
for y in ( select max(decode(piece,0,sql_text,null)) ||
max(decode(piece,1,sql_text,null)) ||
max(decode(piece,2,sql_text,null)) ||
max(decode(piece,3,sql_text,null))
sql_text
from v$sqltext_with_newlines
where address = x.sql_address
and piece < 4)
loop
if ( y.sql_text not like '%listener.get_cmd%' and
y.sql_text not like '%RAWTOHEX(SQL_ADDRESS)%')
then
dbms_output.put_line( '--------------------' );
dbms_output.put_line( x.username );
dbms_output.put_line( x.logon_time || ' ' ||
x.current_time||
' last et = ' ||
x.LAST_CALL_ET);
dbms_output.put_line(
substr( y.sql_text, 1, 250 ) );
end if;
end loop;
end loop;
end;
/
column username format a25 word_wrapped
column module format a35 word_wrapped
column action format a25 word_wrapped
column client_info format a30 word_wrapped
SELECT username||'('||sid||','||serial#||')' username,
module,
action,
client_info
FROM V$SESSION
where module || action || client_info IS NOT NULL
ORDER BY 1;
ACTIVE SESSIONS:
set linesize 350 pagesize 14000
ALTER SESSION SET NLS_DATE_FORMAT='MM/DD/YYYY HH24:MI:SS';
col username for a15
col spid for 99999999
col sid for 999999
col serial# for 99999999
col LOGON_TIME for a20
col sql_text for a80
col PID for 99999
col process for a20
SELECT DISTINCT
a.username,
---b.spid,
a.osuser,
a.status,
a.logon_time,
a.sid,
a.machine,
a.serial#,
---c.sql_text,
b.PID,
a.Process
FROM V$SESSION a, V$PROCESS b, V$SQLTEXT c
WHERE a.PADDR=b.ADDR
AND c.hash_value=a.sql_hash_value
----and a.STATUS='ACTIVE'
order by a.sid;
VIEWING A USER's PRIVILEGE TO AN OBJECT (generate GRANT statements):
SELECT 'GRANT SELECT ON ' || OWNER || '.' || TABLE_NAME || ' TO system;'
FROM DBA_TABLES
WHERE OWNER='asevilla'
ORDER BY TABLE_NAME;
SELECT 'GRANT SELECT ON ' || OWNER || '.' || TABLE_NAME || ' TO sysman;'
FROM DBA_TABLES
WHERE OWNER='asevilla'
ORDER BY TABLE_NAME;
SELECT 'GRANT SELECT ON ' || OWNER || '.' || OBJECT_NAME || ' TO system;'
FROM DBA_OBJECTS
WHERE OWNER='asevilla'
AND OBJECT_TYPE='VIEW'
ORDER BY OBJECT_NAME;
SELECT 'GRANT SELECT ON ' || OWNER || '.' || OBJECT_NAME || ' TO sysman;'
FROM DBA_OBJECTS
WHERE OWNER='asevilla'
AND OBJECT_TYPE='VIEW'
ORDER BY OBJECT_NAME;
SELECT 'GRANT SELECT, INSERT, UPDATE, DELETE ON ' || OWNER || '.' || TABLE_NAME || ' TO sys;'
FROM DBA_TABLES
WHERE OWNER='asevilla'
ORDER BY TABLE_NAME;
SELECT 'GRANT SELECT, INSERT, UPDATE, DELETE ON ' || OWNER || '.' || TABLE_NAME || ' TO asevilla;'
FROM DBA_TABLES
WHERE OWNER='asevilla'
ORDER BY TABLE_NAME;
XCLOCK: X11 forwarding:
MobaXterm requires you to SSH with X11 forwarding (ssh -X user@<destination IP>) before GUI applications will work.
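A quick hedged test of the forwarding (user, host and display number are placeholders):
# From MobaXterm, connect with X11 forwarding and confirm a GUI app displays
ssh -X oracle@192.0.2.10
echo $DISPLAY    # should show something like localhost:10.0
xclock &         # a clock window should appear on the local desktop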
DATABASE INSTALL:
After a successful install of your database, you need to set your environment variables CORRECTLY:
1. ~/.bash_profile (modify to reflect the new $ORACLE_HOME)
2. $PATH
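A minimal sketch of the exports involved (paths and SID are assumptions matching the lab profile shown later in these notes):
# Adjust to the actual install locations
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.1
export ORACLE_SID=labdb
export PATH=$ORACLE_HOME/bin:$PATH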
RECOVER system tablespace from RMAN:
RMAN> run
{
  set newname for datafile 1 to '/u01/oradata/LABDB/datafile/non_default_location/system01.dbfq60k_.dbf';
  restore tablespace system;
  switch datafile all;
  recover tablespace system;
  alter database open;
}
Or:
SQL>create tablespace SYSTEM datafile '/u01/oradata/LABDB/datafile/non_default_location/system01.dbfq60k_.dbf' size 50M autoextend on;
(security monitoring)
desc DBA_AUDIT_SESSION
desc DBA_AUDIT_OBJECT
================================================================================================================================
restore controlfile from '/u01/oradata/LABDB/controlfile/o1_mf_8nkgs300_.ctl' to '/u01/oradata/LABDB/datafile/non_default_location';
Hi Bruce,
Find below my draft playbook for the HP DC2LAB 12c install
**NOTE: Since you're currently performing this task in the LAB, kindly modify/amend it to suit our purpose, and include screenshots where possible.
=====================================================================================================================================
I. ORACLE CONFIGURATION
The following two installation files were downloaded from Oracle Support’s website (www.support.oracle.com).
i. linuxamd64_12102_database_1of2.zip
ii. linuxamd64_12102_database_2of2.zip
======================================================================================================================================
II. Create directory on the server that host the two zipped files above
#cd /u01/app/oracle
#mkdir staging
=======================================================================================================================================
III. Create directory structure that will be used as the base ($ORACLE_BASE) and home ($ORACLE_HOME) location for the oracle 12c software binaries as shown below:
#cd /
#mkdir -p /u01/app/oracle/product/12.1.0.1
=========================================
IV. Change the ownership of the directory you just created as shown:
#chown oracle:oinstall -R /u01/app/oracle
====================================================
V. Go to the home directory of oracle user as shown:
====================================================
#cd /home/oracle
#ls -ltra
============================================================================================
VI. Add/Modify entries in the ~/.bash_profile for the oracle user to fit the currently installed version of the Oracle RDBMS software (e.g. 12.1.0.1):
============================================================================================================================
ORACLE_HOME=/u01/app/oracle/product/12.1.0.1
ORACLE_BASE=/u01/app/oracle
ORACLE_SID=labdb
ORACLE_UNQNAME=labdb
ORACLE_DB=labdb
ORACLE_GRID=/u01/app/12.1.0.1/grid; export ORACLE_GRID
DIAG=$ORACLE_BASE/diag/rdbms/labdb/labdb/trace
TNS_ADMIN=$ORACLE_HOME/network/admin
# PATH=$HOME/bin:/u01/app/oracle/product/11.2.0.3/bin:$PATH
# PATH=$HOME/bin:/u01/app/oracle/product/11.2.0.3/bin:/usr/local/bin:/bin:/usr/bin
PATH=$HOME:/usr/sbin:/usr/proc/bin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/local/bin:/usr/bin:$PATH
PATH=$ORACLE_BASE/scripts:$PATH
export ORACLE_HOME ORACLE_BASE ORACLE_SID ORACLE_UNQNAME TNS_ADMIN DIAG
alias ll='ls -al'
alias scripts='cd /u01/app/oracle/scripts'
alias sql='$ORACLE_HOME/bin/sqlplus "/ as sysdba"'
ora_db=$( echo "$ORACLE_DB" | tr -s '[:upper:]' '[:lower:]' )
alias alog='tail -200 /u01/app/oracle/diag/rdbms/${ora_db}/${ORACLE_SID}/trace/alert_${ORACLE_SID}.log'
alias bdump='cd /u01/app/oracle/diag/rdbms/${ora_db}/${ORACLE_SID}/trace/'
alias udump='cd /u01/app/oracle/diag/rdbms/${ora_db}/${ORACLE_SID}/trace/'
alias cdump='cd /u01/app/oracle/diag/rdbms/${ora_db}/${ORACLE_SID}/cdump'
alias adump='cd /u01/app/oracle/admin/${ORACLE_DB}/adump'
#alias goasm='. $HOME/.goasm'
#alias godb='. $HOME/.godb'
export TMOUT=0
PS1="$USER@"`hostname`"[$ORACLE_SID]# "
#export ORACLE_BASE=/u01/app/oracle
#export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1
#export ORACLE_SID=labdb
export TNS_ADMIN=/u01/app/oracle/product/12.1.0.1/network/admin
# export PATH=/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/local/bin:/bin:/usr/bin
# export PATH=/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/local/bin:/bin:/usr/bin
-- INSERT --
=====================================================================================================
VII. INSTALL ORACLE DATABASE SOFTWARE
=====================================
1. WinSCP - transfer the 2 Oracle zipped files to the /u01/app/oracle/staging directory on your Linux server
2. On the server where Oracle 12c is to be installed, do:
#export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1
#cd /u01/app/oracle/staging
3. Unzip the 2 oracle 12c zipped files:
#unzip linuxamd64_12102_database_1of2.zip
#unzip linuxamd64_12102_database_2of2.zip
4. Once extraction is completed, view contents by doing:
# ls -ltra /u01/app/oracle/staging
5. Go to the database directory
#cd /u01/app/oracle/staging/database
6. Kick off the installation script (runInstaller) as follows:
/u01/app/oracle/staging/database#./runInstaller
==================================================================================================
7. Run the commands below to check DISPLAY settings:
#echo $DISPLAY
# If no DISPLAY setting is configured, do:
8. root# ssh <server IP address> (i.e. connect via MobaXterm, which forwards X11)
9. echo $DISPLAY (the X11 DISPLAY setting is forwarded to the server when you ssh to its IP)
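If DISPLAY still comes back empty, it can be set by hand before launching runInstaller; the display number below is an assumption - check the value MobaXterm reports:
# Point the session at the forwarded X display and test it
export DISPLAY=localhost:10.0
xclock &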
==============================================================================================
10. Kick off the installation script (runInstaller) as follows:
/u01/app/oracle/staging/database#./runInstaller
11. Uncheck the "I wish to receive security updates via My Oracle Support" option
12. Click "YES"
13. Select "Skip software updates"
14. Click "NEXT"
15. Select "Create and configure a database"/"Install database software only" (for very new install)
16. Click "NEXT"
17. Select "Server Class"
18. Click "NEXT"
19. Select "Single instance database installation"
20. Click "NEXT"
21. Select "Typical Install"
22. Click "NEXT"
23. Most of the information will be pre-populated for: Oracle_base, Software location, Database file location, etc.
24. Enter Global database name (e.g. labdb)
25. Enter/Confirm Password
26. Click "NEXT" (leave defaults)
27. Click "NEXT"
28. Click "Install"
29. Execute required Configuration scripts as root user in another session (e.g see below)
root#/../../root.sh
30. Click "OK" once scripts are executed as root user.
31. Click "Close"
=================================================
TESTING
==========
$echo $ORACLE_HOME
$echo $ORACLE_SID
32. Connect to your database and test by doing:
#sql
sql>select name from v$database;
sql>select status from v$instance;
==============================================================================================================
Thank you!
Best,
Ken Chando
HP Enterprise Services
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
That was perfectly fine.
Then you would have to go back and create the database, or create it manually with the script I sent you.
Thanks
Lionel
From: Chando, Kenneth
Sent: Wednesday, October 14, 2015 3:43 PM
To: Charles, Lionel
Cc: Franklin, Bruce
Subject: RE: 4th day with no database
I was just talking to Bruce about the possibility that I might have used the selection of “Install Database Software Only” rather than “Create and Configure a Database” option…
From: Charles, Lionel
Sent: Wednesday, October 14, 2015 3:31 PM
To: Chando, Kenneth
Cc: Franklin, Bruce
Subject: RE: 4th day with no database
Ken,
I know you shadowed Bruce to get this completed and thanks to both of you. Did you see where something might have been missed causing the issue you encountered?
Thanks
Lionel
From: Chando, Kenneth
Sent: Wednesday, October 14, 2015 2:59 PM
To: Matthews, Gregory (Scott); Bailey, Denise Nikcevich; Dorgan, Dennis
P.; Franklin, Bruce; Charles, Lionel
Cc: Ignatz, Bryan; Batheja, Rajeev
Subject: RE: 4th day with no database
Hi Dennis,
Thanks for your patience.
You can now connect to the database.
Let us know if you face any issues.
Thank you!
From: Matthews, Gregory (Scott)
Sent: Tuesday, October 13, 2015 2:41 PM
To: Bailey, Denise Nikcevich; Chando, Kenneth; Dorgan, Dennis P.;
Franklin, Bruce; Charles, Lionel
Cc: Ignatz, Bryan; Batheja, Rajeev
Subject: RE: 4th day with no database
Ken,
Do these changes mean the database should be back up and running now?
Thanks
-Scott
Gregory Scott Matthews
DC2 Tools Engineer - DHS
DC2 Program, An ISO 20000:2011 Organization
HP Enterprise Services
gregory.matthews@hp.com
T +1 434 374 0621
Hewlett-Packard Company
Data Center 2
Clarksville, VA 23927
From: Bailey, Denise Nikcevich
Sent: Monday, October 12, 2015 5:35 PM
To: Chando, Kenneth; Dorgan, Dennis P.; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan; Batheja, Rajeev
Subject: RE: 4th day with no database
It is my understanding that we should not be using memory_target, based on another admin who talked to Martin Fesmire, the Database Capability Lead for HPES. We should use the memory percentage parameter.
I have adjusted the /etc/sysctl.conf file with the correct parameters for 4 GB of memory and also added some recommended RHEL tuning parameters.
Please let me know if you got it working.
Denise
Denise Nikcevich Bailey
UNIX/Linux Engineer
Global Engineering and Technical Consulting (GE&TC)
HP Enterprise
Services
denise.bailey@hp.com
T +1 248 639 6067
M +1 248 497 1384
Hewlett-Packard Company
585 South Boulevard
Pontiac, MI 48341 (Eastern Time)
USA
Scheduled Time Off: October 19, November 30, December 1, (HPE Shutdown) 28,29,30,31
From: Chando, Kenneth
Sent: Monday, October 12, 2015 11:38 AM
To: Bailey, Denise Nikcevich; Dorgan, Dennis P.; Franklin, Bruce;
Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: RE: 4th day with no database
Hi Denise,
Here is what I just got from Oracle’s site concerning minimum requirements for a 12c database:
https://docs.oracle.com/cd/E23104_01/sysreqs1213/sysrs.htm
Please, let us know if anything else is needed.
From: Bailey, Denise Nikcevich
Sent: Monday, October 12, 2015 11:37 AM
To: Chando, Kenneth; Dorgan, Dennis P.; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: RE: 4th day with no database
If /dev/shm is low, please send the Oracle Note with the OS tuning requirements for RHEL version you are on in the Lab.
Thanks.
Denise
From: Chando, Kenneth
Sent: Monday, October 12, 2015 8:10 AM
To: Dorgan, Dennis P.; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan; Bailey, Denise Nikcevich
Subject: RE: 4th day with no database
Hi Brian/Denise,
After installing 12c on node D2LSENPSH164 in the DC2LAB, there is need for more memory.
Kindly address.
See below:
Thank you!
From: Chando, Kenneth
Sent: Friday, October 09, 2015 8:48 AM
To: Dorgan, Dennis P.; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: RE: 4th day with no database
Hi Dennis,
Good morning. After consulting with Oracle Support, they recommended that we rebuild the database.
I’m currently rebuilding it completely.
Will update you when it’s all done.
Thank you!
From: Dorgan, Dennis P.
Sent: Friday, October 09, 2015 8:46 AM
To: Chando, Kenneth; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: RE: 4th day with no database
Hi Ken:
Has there been any progress towards getting the CCC database operational again? Today is the 5th day it has been down.
Thanks,
Dennis
From: Chando, Kenneth
Sent: Thursday, October 08, 2015 9:49 AM
To: Dorgan, Dennis P.; Franklin, Bruce; Charles, Lionel
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: RE: 4th day with no database
Hi Dennis,
I have been in contact with Oracle support this morning and they’re looking into this.
I’m pushing them to get this issue taken care of as soon as possible.
The sad situation here is that, as it stands, DC2LAB doesn't have a media backup option for now. That would have made it easier for us to recover the file locally here at HP.
Sorry for the inconveniences caused.
Thank you!
From: Dorgan, Dennis P.
Sent: Thursday, October 08, 2015 8:27 AM
To: Franklin, Bruce; Charles, Lionel; Chando, Kenneth
Cc: Matthews, Gregory (Scott); Ignatz, Bryan
Subject: 4th day with no database
Hello all:
What is happening with the Oracle database for CCC? This begins the 4th day it has been down, and I can do nothing without it!
Please get this thing fixed!
Thanks,
Dennis
Manual Database Creation
Friday, October 16, 2015
8:16 AM
**** Set new database environment *****
. oraenv
**** Create password file ***
orapwd file=orapwCSGDB password=Password1 entries=30
sqlplus /nolog
connect sys/password as sysdba
startup nomount
CREATE DATABASE CSGDB
USER SYS IDENTIFIED BY Password1
USER SYSTEM IDENTIFIED BY Password1
LOGFILE GROUP 1 ('/u01/oradata/CSGDB/redo01a.log', '/u01/oradata/CSGDB/redo01b.log') SIZE 10M,
        GROUP 2 ('/u01/oradata/CSGDB/redo02a.log', '/u01/oradata/CSGDB/redo02b.log') SIZE 10M,
        GROUP 3 ('/u01/oradata/CSGDB/redo03a.log', '/u01/oradata/CSGDB/redo03b.log') SIZE 10M
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXLOGHISTORY 200
MAXDATAFILES 100
MAXINSTANCES 8
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/u01/oradata/CSGDB/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/oradata/CSGDB/temp01.dbf'
SIZE 50M REUSE AUTOEXTEND on
sysaux
datafile '/u01/oradata/CSGDB/sysaux01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
UNDO TABLESPACE undotbs1
DATAFILE '/u01/oradata/CSGDB/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
SQL> !uname -a
SQL> @cr_CSGDB_DB.sql
Database created.
****** create additional tablespaces ******
spool cr_userapp_tbs.log
create tablespace users datafile '/u01/oradata/CSGDB/users01.dbf' size 10M extent management local autoallocate segment space management
auto;
create tablespace csg datafile '/u01/oradata/CSGDB/CSGDB.dbf' size 4000M extent management local autoallocate segment space management
auto;
create temporary tablespace temp tempfile '/u01/oradata/CSGDB/temp1.dbf' size 2000M reuse autoextend on;
spool off
******* Run catalog.sql ****
******* Run catproc.sql ****
******* create role and user ****
spool cr_csgADMINROLE.log
create role csgadminrole;
grant alter session to csgadminrole;
grant CREATE CLUSTER to csgadminrole;
grant CREATE DATABASE LINK to csgadminrole;
grant CREATE PROCEDURE to csgadminrole;
grant CREATE SEQUENCE to csgadminrole;
grant CREATE SESSION to csgadminrole;
grant CREATE SYNONYM to csgadminrole;
grant CREATE TABLE to csgadminrole;
grant CREATE TRIGGER to csgadminrole;
grant CREATE VIEW to csgadminrole;
grant QUERY REWRITE to csgadminrole;
create user csgadmin identified by c8es013gr# default tablespace csg temporary tablespace temp;
grant csgadminrole to csgadmin;
******* Create spfile from pfile ****
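A hedged sketch of that final step (assumes the text init.ora sits in the default $ORACLE_HOME/dbs location):
-- Build the spfile from the pfile, then restart on it
CREATE SPFILE FROM PFILE;
SHUTDOWN IMMEDIATE
STARTUP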
Script Database Creation
Friday, October 16, 2015
8:14 AM
/u01/app/oracle/product/11.2.0.3/bin/orapwd file=orapwCSGDB password=s3cur1ty entries=30
sqlplus /nolog
connect sys/s3cur1ty as sysdba
startup nomount
oracle@D2LSENPSH228[CSGDB]# sqlplus /nolog
SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 5 01:59:26 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> sys/s3cur1ty as sysdba
SP2-0734: unknown command beginning "sys/s3cur1..." - rest of line ignored.
SQL> connect sys/s3cur1ty as sysdba
Connected.
SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1653518336 bytes
Fixed Size 2228904 bytes
Variable Size 1325403480 bytes
Database Buffers 318767104 bytes
Redo Buffers 7118848 bytes
SQL> @cr_CSGDB_DB.sql
Database created.
SQL> @cr_userapp_tbs.sql
Tablespace created.
Tablespace created.
PL/SQL procedure successfully completed.
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP CATALOG 2014-08-05 02:11:42
PL/SQL procedure successfully completed.
SQL>
SQL> SELECT dbms_registry_sys.time_stamp('CATPROC') AS timestamp FROM DUAL;
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP CATPROC 2014-08-05 02:21:27
1 row selected.
SQL>
SQL> SET SERVEROUTPUT OFF
SQL>
SQL>
SQL>
SQL> @cr_csgADMINROLE.sql
SQL> spool cr_csgADMINROLE.log
SQL>
SQL> create role csgadminrole;
Role created.
SQL>
SQL> grant alter session to csgadminrole;
Grant succeeded.
SQL> grant CREATE CLUSTER to csgadminrole;
Grant succeeded.
SQL> grant CREATE DATABASE LINK to csgadminrole;
Grant succeeded.
SQL> grant CREATE PROCEDURE to csgadminrole;
Grant succeeded.
SQL> grant CREATE SEQUENCE to csgadminrole;
Grant succeeded.
SQL> grant CREATE SESSION to csgadminrole;
Grant succeeded.
SQL> grant CREATE SYNONYM to aradminrole;
grant CREATE SYNONYM to aradminrole
*
ERROR at line 1:
ORA-01917: user or role 'ARADMINROLE' does not exist
SQL> grant CREATE TABLE to csgadminrole;
Grant succeeded.
SQL> grant CREATE TRIGGER to csgadminrole;
Grant succeeded.
SQL> grant CREATE VIEW to csgadminrole;
Grant succeeded.
SQL> grant QUERY REWRITE to csgadminrole;
Grant succeeded.
SQL>
SQL> create user csgadmin identified by c8es013gr# default tablespace csg temporary tablespace temp;
User created.
SQL>
SQL> grant csgadminrole to csgadmin;
Grant succeeded.
SQL>
SQL> spool off
SQL> grant CREATE SYNONYM to csgadminrole;
Grant succeeded.
SQL> grant csgadminrole to csgadmin;
Grant succeeded.
SQL> grant CREATE SYNONYM to csgadminrole;
Grant succeeded.
SQL> grant csgadminrole to csgadmin;
Grant succeeded.
SQL>
SQL> create spfile from pfile;
File created.
********
******* Apply Jul2014 patch
Patching component oracle.rdbms, 11.2.0.3.0...
Copying file to "/u01/app/oracle/product/11.2.0.3/cpu/CPUJul2014/catcpu.sql"
Copying file to "/u01/app/oracle/product/11.2.0.3/cpu/CPUJul2014/catcpu_rollback.sql"
Applying interim patch '18740215' to OH '/u01/app/oracle/product/11.2.0.3'
Patching component oracle.rdbms, 11.2.0.3.0...
Patching component oracle.rdbms.rsf, 11.2.0.3.0...
Patches 13742433,13742434,13742435,13742436,13742438,14062795,14062797,14480675,14480676,15862016,15862017,15862018,15862019,15862020,15862021,15862022,15862023,15862024,16314467,16794241,16794242,16794244,17333197,17333198,17333199,17333203,17748830,17748831,17748832,17748833,18173592,18173593,18173595,18681866,18740215 successfully applied.
Log file location: /u01/app/oracle/product/11.2.0.3/cfgtoollogs/opatch/opatch2014-08-05_13-50-38PM.log
OPatch succeeded.
SQL> PROMPT Updating registry...
Updating registry...
SQL> INSERT INTO registry$history
2 (action_time, action,
3 namespace, version, id,
4 bundle_series, comments)
5 VALUES
6 (SYSTIMESTAMP, 'APPLY',
7 SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
8 '11.2.0.3',
9 11,
10 'CPU',
11 'CPUJul2014');
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_CSGDB_APPLY_2014Aug05_14_04_09.
DC2 DATABASE SUPPORT_TECHNICAL DOCUMENTS
Wednesday, May 20, 2015
10:37 AM
============ORACLE TROUBLESHOOTING - OMER 2012============
Team,
Here are a few notes on troubleshooting common Oracle issues while you are on-call.
All Oracle and SQL servers we support are listed in a spreadsheet. They are also listed in the two public groups I have created in HPSA: “Oracle Servers” and “SQL Servers”; we will try to keep them up to date. HPSA is the fastest way to get to these servers; otherwise you may have to go through some jump servers to reach some of them when working from home.
Once you log in to an Oracle server with your own account, issue “sudo su - oracle” to become the oracle user. We have set up the environment for the oracle user on all servers via the .profile and created aliases that make it easy to move around and access the databases. Here are some of the aliases you can use (just enter the alias and press <Enter>):
alias scripts='cd /u01/app/oracle/scripts'
alias ll='ls -ltr'
alias nomon='sudoedit /opt/CA/SharedComponents/ccs/atech/agents/config/caiLogA2/*OraNegativeList.txt'
alias alog='tail -100 /u01/app/oracle/diag/rdbms/${ora_sid}/${ORACLE_SID}/trace/alert_${ORACLE_SID}.log'
alias bdump='cd /u01/app/oracle/diag/rdbms/${ora_sid}/${ORACLE_SID}/trace'
alias udump='cd /u01/app/oracle/diag/rdbms/${ora_sid}/${ORACLE_SID}/trace'
alias adump='cd /u01/app/oracle/admin/${ORACLE_SID}/adump'
alias cdump='cd /u01/app/oracle/admin/${ORACLE_SID}/cdump'
alias admin='cd /u01/app/oracle/admin/${ORACLE_SID}'
alias data='cd /u01/oradata/${ORACLE_SID}'
alias bkup='cd /u01/app/oracle/backup'
alias media='cd /u01/app/media'
alias pfile='cd $ORACLE_HOME/dbs'
alias dbs='cd $ORACLE_HOME/dbs'
alias arch='cd /u01/oradata/${ORACLE_SID}/arch'
alias sql='sqlplus "/ as sysdba"'
For example, to look at the last 100 lines of the database alert log, which is helpful in troubleshooting issues, type alog and press <Enter>.
If you need to look at more lines, go to the trace directory where the alert log is located by typing bdump and pressing <Enter>, and then tail as many lines as you need; a minimal sketch is shown below.
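For instance, to pull the last 500 lines instead (the line count here is just an illustration, assuming the alias environment above):
bdump
tail -500 alert_${ORACLE_SID}.log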
We have also created a collection of scripts and placed them in /u01/app/oracle/scripts on all database servers. Just use the “scripts” alias to go there and check them out. Here are some helpful ones (a usage example follows the list):
sh_active_locks.sql
sh_user_sql.sql -- display source code run by specific user session
sh_active_sessions.sql
sh_all_sessions.sql
sh_df.sql -- list datafiles and their auto extend storage
sh_temp_usage.sql -- who is currently using TEMP tablespace
sh_users.sql -- list database users and their default tablespaces and account status (whether open or locked)
sh_tsdf.sql -- list tablespace information. To get a good picture of auto extendible datafiles run sh_df.sql
sh_free_mem.sql
sh_waits.sql
sh_arch_hist.sql -- list archivelog sizing information by dates
sh_invalid_objects.sql
sh_sqlarea.sql -- what is currently running
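A minimal usage sketch, assuming the scripts and sql aliases described above (the script names are taken from the list):
scripts
sql
SQL> @sh_tsdf.sql
SQL> @sh_active_sessions.sql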
In servers that are configured for Data Guard (whether primary or standby) we can also run the alogs.sql script to list logs and show whether or not they have been applied to the standby. If there is a gap (logs not shipped or not applied), there could be a communication issue between the servers. There are procedures for fixing the issue when the standby falls behind; a quick manual check is sketched below.
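A quick manual check of the gap can be run on the standby; this query is only an illustration, not one of the packaged scripts:
SQL> select max(sequence#) as last_received from v$archived_log;
SQL> select max(sequence#) as last_applied from v$archived_log where applied = 'YES';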
In that directory you will also see the NetBackup RMAN scripts (they start with nb_hot_backup_*.sh). When these scripts are executed by NetBackup policies, their output is saved in the scripts directory as nb_hot_backup_*.sh.out files, which you can check to diagnose backup issues (see the example below).
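For example, a quick way to spot recent failures (the grep pattern here is just an illustration):
scripts
ll nb_hot_backup_*.sh.out
grep -i "ORA-\|RMAN-" nb_hot_backup_*.sh.out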
Adjusting space, if needed, is straightforward using ALTER DATABASE or ALTER TABLESPACE commands (see the sketch below).
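A minimal sketch; the datafile names and sizes here are illustrative only:
SQL> alter database datafile '/u01/oradata/CSGDB/users01.dbf' resize 2G;
SQL> alter tablespace users add datafile '/u01/oradata/CSGDB/users02.dbf' size 1G autoextend on;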
You can use a few UNIX commands to check filesystem space utilization, e.g., df -h | grep u01
You can also run the “top” command to check memory and CPU utilization.
To check the listener, use lsnrctl commands to check its status and to stop and start the service if/when needed, as shown below.
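For example, as the oracle user:
lsnrctl status
lsnrctl stop
lsnrctl start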
For Oracle RAC we use one set of commands to check, start, and stop database services and another set of commands to check, start, and stop Cluster Ready Services (CRS). These can be executed from either node of the cluster:
Server Control (srvctl) commands are used to check, start, or stop database instances and other services such as the listener (instead of using sqlplus or lsnrctl as in the case of standalone servers). Here are some examples:
===== Check Status ================
Confirm that the Oracle ASM instance is running:
# srvctl status asm
ASM is running on d2aclprhq003,d2aclprhq004
# srvctl status instance -d ESDOP -n d2aclprhq003
Instance ESDOP1 is running on node d2aclprhq003
# srvctl status instance -d ESDOP -n d2aclprhq004
Instance ESDOP2 is running on node d2aclprhq004
# srvctl status instance -d ESDOP -i ESDOP1,ESDOP2
Instance ESDOP1 is running on node d2aclprhq003
Instance ESDOP2 is running on node d2aclprhq004
# srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): d2aclprhq003,d2aclprhq004
# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node d2aclprhq004
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node d2aclprhq003
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node d2aclprhq003
# srvctl status nodeapps
VIP d2aclprhq003-vip is enabled
VIP d2aclprhq003-vip is running on node: d2aclprhq003
VIP d2aclprhq004-vip is enabled
VIP d2aclprhq004-vip is running on node: d2aclprhq004
Network is enabled
Network is running on node: d2aclprhq003
Network is running on node: d2aclprhq004
GSD is disabled
GSD is not running on node: d2aclprhq003
GSD is not running on node: d2aclprhq004
ONS is enabled
ONS daemon is running on node: d2aclprhq003
ONS daemon is running on node: d2aclprhq004
eONS is enabled
eONS daemon is running on node: d2aclprhq003
eONS daemon is running on node: d2aclprhq004
====== STOP =====================
srvctl stop database -d IWMST
-- this stops the database and shuts down all instances
OR
srvctl stop instance -d IWMST -i IWMST1,IWMST2
srvctl stop instance -d IWMST -n d2acltscb010
srvctl stop instance -d IWMST -n d2acltscb011
srvctl stop instance -d ESDOP -n d2aclprhq004
-- To stop all services
srvctl stop nodeapps -n d2aclprhq003 -f -v
srvctl stop nodeapps -n d2aclprhq004 -f -v
srvctl stop listener
srvctl stop asm
====== START =====================
srvctl start asm
srvctl start database -d IWMST
-- this starts the database and all instances
OR start each instance separately
srvctl start instance -d IWMST -n d2acltscb010
srvctl start instance -d IWMST -n d2acltscb011
srvctl start listener
srvctl start nodeapps -n d2acltscb010
srvctl start nodeapps -n d2acltscb011
Usually all services start automatically when a server is rebooted unless there is an underlying issue such as a problem with an ASM disk.
Cluster Ready Services Control (crsctl) commands can be used to check, stop, and start cluster services. You need to sudo from oracle to root to execute these commands:
export PATH=/u01/app/grid/bin:$PATH
export GRID_HOME=/u01/app/grid
====== STATUS CHECKS ================================================
[root@d2acltscb011 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRSDG.dg ora....up.type ONLINE ONLINE d2ac...b010
ora.DATADG.dg ora....up.type ONLINE ONLINE d2ac...b010
ora.FRADG.dg ora....up.type ONLINE ONLINE d2ac...b010
ora....ER.lsnr ora....er.type ONLINE ONLINE d2ac...b010
ora....N1.lsnr ora....er.type ONLINE ONLINE d2ac...b011
ora....N2.lsnr ora....er.type ONLINE ONLINE d2ac...b010
ora....N3.lsnr ora....er.type ONLINE ONLINE d2ac...b010
ora.asm ora.asm.type ONLINE ONLINE d2ac...b010
ora....SM1.asm application ONLINE ONLINE d2ac...b010
ora....10.lsnr application ONLINE ONLINE d2ac...b010
ora....010.gsd application OFFLINE OFFLINE
ora....010.ons application ONLINE ONLINE d2ac...b010
ora....010.vip ora....t1.type ONLINE ONLINE d2ac...b010
ora....SM2.asm application ONLINE ONLINE d2ac...b011
ora....11.lsnr application ONLINE ONLINE d2ac...b011
ora....011.gsd application OFFLINE OFFLINE
ora....011.ons application ONLINE ONLINE d2ac...b011
ora....011.vip ora....t1.type ONLINE ONLINE d2ac...b011
ora.eons ora.eons.type ONLINE ONLINE d2ac...b010
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora.iwmst.db ora....se.type ONLINE ONLINE d2ac...b010
ora....network ora....rk.type ONLINE ONLINE d2ac...b010
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE d2ac...b010
ora.scan1.vip ora....ip.type ONLINE ONLINE d2ac...b011
ora.scan2.vip ora....ip.type ONLINE ONLINE d2ac...b010
ora.scan3.vip ora....ip.type ONLINE ONLINE d2ac...b010
[root@d2acltscb011 ~]#
Use the crsctl check cluster command on any node in the cluster to check the
status of the Oracle Clusterware stack.
crsctl check cluster [-all | -n server_name [...]]
# crsctl check cluster -all
**************************************************************
d2acltscb010:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
d2acltscb011:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Use the crsctl check crs command to check the status of Oracle High Availability
Services and the Oracle Clusterware stack on the local server.
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Use the crsctl check ctss command to check the status of the Cluster Time
Synchronization services
# crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
Use the crsctl query css votedisk command to display the voting disks used
by Cluster Synchronization Services, the status of the voting disks, and the location of
the disks, whether they are stored on Oracle ASM or elsewhere
# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 53c6bef2eb8e4fe2bfc807fdd7a080b0 (ORCL:CRS) [CRSDG]
Located 1 voting disk(s).
# crsctl status server
NAME=d2aclprhq003
STATE=ONLINE
NAME=d2aclprhq004
STATE=ONLINE
# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=d2aclprhq003 d2aclprhq004
NAME=ora.ESDOP
ACTIVE_SERVERS=d2aclprhq003 d2aclprhq004
====== START =====================================================
crsctl start crs
crsctl start cluster -all
crsctl start cluster [-all | -n server_name [...]]
crsctl start cluster -n d2aclprhq004
Use the crsctl start cluster command on any node in the cluster to start the
Oracle Clusterware stack.
# cd /u01/app/grid/bin
# ./crsctl start cluster -n d2aclprhq004
CRS-2672: Attempting to start 'ora.cssd' on 'd2aclprhq004'
CRS-2672: Attempting to start 'ora.diskmon' on 'd2aclprhq004'
CRS-2676: Start of 'ora.diskmon' on 'd2aclprhq004' succeeded
CRS-2676: Start of 'ora.cssd' on 'd2aclprhq004' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'd2aclprhq004'
CRS-2676: Start of 'ora.ctssd' on 'd2aclprhq004' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'd2aclprhq004'
CRS-5011: Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/grid/log/d2aclprhq004/agent
/ohasd/oraagent_oracle/oraagent_oracle.log"
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
CRS-5011: Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/grid/log/d2aclprhq004/agent
/ohasd/oraagent_oracle/oraagent_oracle.log"
CRS-2681: Clean of 'ora.asm' on 'd2aclprhq004' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'd2aclprhq004'
CRS-2676: Start of 'ora.asm' on 'd2aclprhq004' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'd2aclprhq004'
CRS-2676: Start of 'ora.crsd' on 'd2aclprhq004' succeeded
-bash-3.2#
====== STOP =====================================================
crsctl stop cluster [-all | -n server_name [...]] [-f]
crsctl stop cluster
crsctl stop crs
crsctl stop crs -f
Use the crsctl stop cluster command on any node in the cluster to stop the
Oracle Clusterware stack on all servers in the cluster or specific servers.
Syntax
crsctl stop cluster [-all | -n server_name [...]] [-f]
crsctl stop crs
Use the crsctl stop crs command to stop Oracle High Availability Services on
the local server.
Syntax
crsctl stop crs [-f]
=================ASM DISK MIGRATE=====================================
Migrate ASM to new disks
1. List storage devices and make sure that the new disks are visible to both cluster nodes.
fdisk -l
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdh doesn't contain a valid partition table
Disk /dev/sdi: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdi doesn't contain a valid partition table
2. Partition the new disks
-bash-3.2# fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
-bash-3.2#
-bash-3.2# fdisk /dev/sdh
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 7832.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-7832, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-7832, default 7832):
Using default value 7832
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
-bash-3.2# fdisk /dev/sdi
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 3916.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3916, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3916, default 3916):
Using default value 3916
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
3.List the disks again to verify new partitions are listed on both nodes
-bash-3.2# fdisk -l
Disk /dev/sda: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 33 265041 83 Linux
/dev/sda2 34 5238 41809162+ 8e Linux LVM
/dev/sda3 5239 10443 41809162+ 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 47.2 GB, 47244640256 bytes
255 heads, 63 sectors/track, 5743 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 1044 8385898+ 83 Linux
Disk /dev/sdh: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdh1 1 7832 62910508+ 83 Linux
Disk /dev/sdi: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdi1 1 3916 31455238+ 83 Linux
4.List current asm disks
-bash-3.2# /etc/init.d/oracleasm listdisks
CRS
DATA
FRA
5.Label new asm disks
-bash-3.2# /etc/init.d/oracleasm createdisk OCRDISK /dev/sdg1
Marking disk "OCRDISK" as an ASM disk: [ OK ]
-bash-3.2# /etc/init.d/oracleasm createdisk DATADISK /dev/sdh1
Marking disk "DATADISK" as an ASM disk: [ OK ]
-bash-3.2# /etc/init.d/oracleasm createdisk FRADISK /dev/sdi1
Marking disk "FRADISK" as an ASM disk: [ OK ]
-bash-3.2# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:
[ OK ]
6.Verify ASM disks again
-bash-3.2# /etc/init.d/oracleasm listdisks
CRS
CRS
DATA
DATADISK
FRA
FRADISK
OCRDISK
7.Add new ASM Disks to ASM
-bash-3.2# sudo su - oracle
oracle@D2LSENPSH165[orcl1]# goasm
oracle@D2LSENPSH165[+ASM1]# sql
SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 18 19:57:08 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL>
set pages 999 lines 120
col name format a20
col path format a20
SELECT name, path, header_status, total_mb, free_mb FROM V$ASM_DISK;
NAME PATH HEADER_STATU TOTAL_MB FREE_MB
-------------------- -------------------- ------------ ---------- ----------
ORCL:DATADISK PROVISIONED 0 0
ORCL:FRADISK PROVISIONED 0 0
ORCL:OCRDISK PROVISIONED 0 0
CRS ORCL:CRS MEMBER 5120 4725
DATA ORCL:DATA MEMBER 45056 16108
FRA ORCL:FRA MEMBER 20480 18710
6 rows selected.
SQL>select name, state, total_mb, free_mb from V$ASM_DISKGROUP;
NAME STATE TOTAL_MB FREE_MB
------------------------------ ----------- ---------- ----------
CRSDG MOUNTED 5120 4725
DATADG MOUNTED 45056 16108
FRADG MOUNTED 21504 19598
SQL>Alter diskgroup CRSDG ADD DISK 'ORCL:OCRDISK' REBALANCE POWER 9;
Diskgroup altered.
SQL>Alter diskgroup FRADG ADD DISK 'ORCL:FRADISK' REBALANCE POWER 9;
Diskgroup altered.
SQL>Alter diskgroup DATADG ADD DISK 'ORCL:DATADISK' REBALANCE POWER 9;
Diskgroup altered.
SQL>select group_number, name, state, total_mb, free_mb from V$ASM_DISKGROUP;
GROUP_NUMBER NAME STATE TOTAL_MB FREE_MB
------------ ------------------------------ ----------- ---------- ----------
1 CRSDG MOUNTED 13309 12912
2 DATADG MOUNTED 106492 77542
3 FRADG MOUNTED 52222 50314
SQL>SELECT GROUP_NUMBER, OPERATION, STATE, POWER, EST_MINUTES FROM V$ASM_OPERATION;
GROUP_NUMBER OPERA STAT POWER EST_MINUTES
------------ ----- ---- ---------- -----------
2 REBAL WAIT 9
3 REBAL RUN 9 0
SQL> /
GROUP_NUMBER OPERA STAT POWER EST_MINUTES
------------ ----- ---- ---------- -----------
2 REBAL WAIT 9
SQL> /
no rows selected
SQL>
set pages 999 lines 120
col name format a20
col path format a20
SELECT name, path, header_status, total_mb, free_mb FROM V$ASM_DISK;
NAME PATH HEADER_STATU TOTAL_MB FREE_MB
-------------------- -------------------- ------------ ---------- ----------
CRS ORCL:CRS MEMBER 5120 4945
DATA ORCL:DATA MEMBER 45056 32778
FRA ORCL:FRA MEMBER 20480 19663
DATADISK ORCL:DATADISK MEMBER 61436 44764
FRADISK ORCL:FRADISK MEMBER 30718 29670
OCRDISK ORCL:OCRDISK MEMBER 8189 7967
6 rows selected.
8.Drop old ASM Disks from ASM
SQL>Alter diskgroup CRSDG DROP DISK CRS rebalance power 9;
Diskgroup altered.
SQL>Alter diskgroup FRADG DROP DISK FRA rebalance power 9;
Diskgroup altered.
SQL>Alter diskgroup DATADG DROP DISK DATA rebalance power 9;
Diskgroup altered.
SQL>
SQL>SELECT GROUP_NUMBER, OPERATION, STATE, POWER, EST_MINUTES FROM V$ASM_OPERATION;
GROUP_NUMBER OPERA STAT POWER EST_MINUTES
------------ ----- ---- ---------- -----------
2 REBAL RUN 9 4
SQL> /
no rows selected
SQL>
set pages 999 lines 120
col name format a20
col path format a20
SELECT name, path, header_status, total_mb, free_mb FROM V$ASM_DISK;
NAME PATH HEADER_STATU TOTAL_MB FREE_MB
-------------------- -------------------- ------------ ---------- ----------
ORCL:CRS FORMER 0 0
ORCL:DATA MEMBER 0 0
ORCL:DATADISK MEMBER 0 0
ORCL:FRA FORMER 0 0
FRADISK ORCL:FRADISK MEMBER 30718 28807
OCRDISK ORCL:OCRDISK MEMBER 8189 7794
6 rows selected.
SQL>ALTER DISKGROUP DATADG MOUNT;
Diskgroup altered.
SQL>
set pages 999 lines 120
col name format a20
col path format a20
SELECT name, path, header_status, total_mb, free_mb FROM V$ASM_DISK;
NAME PATH HEADER_STATU TOTAL_MB FREE_MB
-------------------- -------------------- ------------ ---------- ----------
ORCL:CRS FORMER 0 0
ORCL:DATA FORMER 0 0
ORCL:FRA FORMER 0 0
DATADISK ORCL:DATADISK MEMBER 61436 32488
FRADISK ORCL:FRADISK MEMBER 30718 28807
OCRDISK ORCL:OCRDISK MEMBER 8189 7794
6 rows selected.
9.Delete ASM disk labels of old disks
As root:
In node 1
-bash-3.2# /etc/init.d/oracleasm listdisks
CRS
DATA
DATADISK
FRA
FRADISK
OCRDISK
-bash-3.2#
-bash-3.2# /etc/init.d/oracleasm deletedisk CRS
Removing ASM disk "CRS": [ OK ]
-bash-3.2#
-bash-3.2# /etc/init.d/oracleasm deletedisk DATA
Removing ASM disk "DATA": [ OK ]
-bash-3.2#
-bash-3.2# /etc/init.d/oracleasm deletedisk FRA
Removing ASM disk "FRA": [ OK ]
If a disk label cannot be removed (because of a bug that reports the disk as busy even after the drop), clean the disk header by issuing the following command and then try the deletedisk command again:
# dd if=/dev/zero of=/dev/sdf bs=1024 count=100
10. Make sure old ASM disk labels are removed from both Nodes
In Node 1
-bash-3.2# /etc/init.d/oracleasm listdisks
DATADISK
FRADISK
OCRDISK
In Node 2
-bash-3.2# /etc/init.d/oracleasm listdisks
CRS
DATA
DATADISK
FRA
FRADISK
OCRDISK
-bash-3.2# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:
[ OK ]
-bash-3.2# /etc/init.d/oracleasm listdisks
DATADISK
FRADISK
OCRDISK
11. Have UNIX SA and Storage Admin remove old storage devices
=============HOW TO ENABLE FLASHBACK DATABASE=============
How to Enable Flashback Database
To enable Flashback Database, the following operations are needed.
1)Configure the Database in archivelog mode.
Changing the Database Archiving Mode
1) See the current archiving mode of the database: select log_mode from v$database;
2) Perform a clean shutdown of the database: shutdown immediate, shutdown transactional, or shutdown normal. You cannot change the mode from ARCHIVELOG to NOARCHIVELOG if any datafiles need media recovery.
3) Back up the database.
4) If you use a pfile as the initialization file, edit the archive destination parameter (such as LOG_ARCHIVE_DEST) to point to your archival destination. If you use an spfile, ignore this step.
5) Mount the database but do not open it: STARTUP MOUNT
6) Change the archival mode and open the database: ALTER DATABASE ARCHIVELOG; (if you use an spfile you can instead use ALTER SYSTEM SET LOG_ARCHIVE_DEST='your location') then ALTER DATABASE OPEN;
7) Check the archival location: archive log list
8) Shut down and back up the database: SHUTDOWN IMMEDIATE
The full SQL*Plus sequence is sketched after this list.
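A minimal end-to-end sketch of the sequence above in SQL*Plus, assuming an spfile and a recovery area (so no LOG_ARCHIVE_DEST edit is needed):
SQL> select log_mode from v$database;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list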
2) Configure the Flash Recovery Area.
To configure the flash recovery area:
Set Up a Flash Recovery Area for RMAN
The flash recovery area simplifies ongoing administration of your database by automatically naming recovery-related files, retaining them as long as they are needed for restore and recovery activities, and deleting them when they are no longer needed to restore your database and the space is needed for some other backup and recovery-related purpose.
To set up the flash recovery area, follow the steps below.
1)Set up DB_RECOVERY_FILE_DEST_SIZE: SQL> alter system set db_recovery_file_dest_size=2G;
2)Decide the area from OS where you will place Flash recovery area. SQL>host mkdir /oradata1/flash_recovery_area
3)Set up DB_RECOVERY_FILE_DEST: SQL> alter system set db_recovery_file_dest='/oradata1/flash_recovery_area';
The V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE views can help you find the current location, disk quota, space in use, space reclaimable by deleting files, total number of files, the percentage of the total disk quota used by each type of file, and how much space for each type of file can be reclaimed by deleting files that are obsolete, redundant, or already backed up to tape; example queries are sketched below.
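Minimal example queries against these views:
SQL> select name, space_limit, space_used, space_reclaimable, number_of_files from v$recovery_file_dest;
SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files from v$flash_recovery_area_usage;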
To disable the flash recovery area, issue: SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH SID='*';
3)Clean Shutdown and mount the database.
SQL> SHUT IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> Alter Database Flashback ON;
Before running command you can check whether flashback was actually On or not.
select flashback_on from v$database;
Steps:
------
SQL> alter database flashback ON;
Database altered.
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES
4)Open the database and optionally you can set DB_FLASHBACK_RETENTION_TARGET to the length of the desired flashback window in minutes. By default it is 1 day(1440 minutes).
SQL> ALTER DATABASE OPEN;
To make it 3 days
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320;
SQL> show parameter DB_FLASHBACK_RETENTION_TARGET
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 4320
However, you can disable Flashback Database for an individual tablespace. You must then take its datafiles offline before running FLASHBACK DATABASE.
Like,
SQL> select file_name,file_id from dba_data_files where tablespace_name='TEST';
FILE_NAME FILE_ID
------------------------------ ----------
/oradata2/1.dbf 5
SQL> alter database datafile 5 offline;
Database altered.
SQL> ALTER TABLESPACE test flashback off;
Tablespace altered.
SQL> recover datafile 5;
Media recovery complete.
SQL> alter database datafile 5 online;
Database altered.
To disable flashback feature simply issue,
SQL>ALTER DATABASE FLASHBACK OFF;
Database altered.
===ENABLE ARCHIVELOG AND FLASHBACK IN RAC DATABASE===
http://oracleinstance.blogspot.com/2009/12/enable-archivelog-and-flashback-in-rac.html
Step by step process of putting a RAC database in archive log mode and then enabling the flashback Database option.
Enabling archive log in RAC Database:
A database must be in archivelog mode before enabling flashback.
In this example, the database name is test and the instance names are test1 and test2.
step 1:
creating recovery_file_dest in asm disk
SQL> alter system set db_recovery_file_dest_size=200m sid='*';
System altered.
SQL> alter system set db_recovery_file_dest='+DATA' sid='*';
System altered.
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 12
Current log sequence 14
SQL>
step 2:
Set the LOG_ARCHIVE_DEST_1 parameter. Since these parameters will be identical for all nodes, we will use sid='*'. However, you may need to modify this for your situation if the directories are different on each node.
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
System altered.
step 3:
set LOG_ARCHIVE_START to TRUE for all instances to enable automatic archiving.
SQL> alter system set log_archive_start=true scope=spfile sid='*';
System altered.
Note that we illustrate the command for backward compatibility purposes, but in oracle database 10g onwards, the parameter is actually deprecated. Automatic archiving will be enabled by default whenever an oracle database is placed in archivelog mode.
step 4:
Set CLUSTER_DATABASE to FALSE for the local instance, which you will then mount to put the database into archivelog mode. By having CLUSTER_DATABASE=FALSE, the subsequent shutdown and startup mount will actually do a Mount Exclusive by default, which is necessary to put the database in archivelog mode, and also to enable the flashback database feature:
SQL> alter system set cluster_database=false scope=spfile sid='test1';
System altered.
step 5;
Shut down all instances. Ensure that all instances are shut down cleanly:
SQL> shutdown immediate
step 6:
Mount the database from instance test1 (where CLUSTER_DATABASE was set to FALSE) and then put the database into archivelog mode.
SQL> startup mount
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.
Database mounted.
SQL> alter database archivelog;
Database altered.
NOTE:
If you did not shut down all instances cleanly in step 5, putting the database in archivelog mode will fail with an ORA-00265 error.
SQL> alter database archivelog;
*
ERROR at line 1:
ORA-00265: instance recovery required, cannot set ARCHIVELOG mode
step 7:
Confirm that the database is in archivelog mode, with the appropriate parameters, by issuing the ARCHIVE LOG LIST command:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 13
Next log sequence to archive 15
Current log sequence 15
step 8
Confirm the location of the RECOVERY_FILE_DEST via a SHOW PARAMETER.
SQL> show parameter recovery_file
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +DATA
db_recovery_file_dest_size big integer 200M
Step 9:
Once the database is in archivelog mode, you can enable flashback while the database is still mounted in Exclusive mode (CLUSTER_DATABASE=FALSE).
SQL> alter database flashback on;
Database altered.
Step 10:
Confirm that Flashback is enabled and verify the retention target:
SQL> select flashback_on,current_scn from v$database;
FLASHBACK_ON CURRENT_SCN
------------------ -----------
YES 0
SQL> show parameter flash
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
step 11:
Reset the CLUSTER_DATABASE parameter back to TRUE for all instances:
SQL> alter system set cluster_database=true scope=spfile sid='*';
System altered.
step 12:
shutdown the instance and then restart all cluster database instances.
All instances will now be archiving their redo threads.
SQL> shu immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
start the database, using srvctl command or normal startup
[root@rac1 bin]# ./srvctl status database -d test
Instance test1 is not running on node rac1
Instance test2 is not running on node rac2
[root@rac1 bin]# ./srvctl start database -d test
[root@rac1 bin]# ./srvctl status database -d test
Instance test1 is running on node rac1
Instance test2 is running on node rac2
[root@rac1 bin]#
on test1 instance:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 14
Next log sequence to archive 16
Current log sequence 16
SQL>
on test2 instance:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3
Next log sequence to archive 5
Current log sequence 5
SQL>
Both instances are now in archivelog mode.
UPGRADE GUIDE
Wednesday, June 17, 2015
8:01 AM
How to Upgrade to Oracle Grid Infrastructure 12c Release 1 (LINUX - LAB)
http://docs.oracle.com/database/121/CWLIN/procstop.htm#CWLIN10001
This appendix describes how to perform Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) upgrades.
Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes are brought down and upgraded while other nodes remain active. Oracle ASM 12c Release 1 (12.1) upgrades can be rolling upgrades. If you upgrade a subset of nodes, then a software-only installation is performed on the existing cluster nodes that you do not select for upgrade.
This appendix contains the following topics:
· Back Up the Oracle Software Before Upgrades
· About Oracle Grid Infrastructure and Oracle ASM Upgrade and Downgrade
· Options for Oracle Grid Infrastructure Upgrades and Downgrades
· Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades
· Preparing to Upgrade an Existing Oracle Clusterware Installation
· Using CVU to Validate Readiness for Oracle Clusterware Upgrades
· Understanding Rolling Upgrades Using Batches
· Performing Rolling Upgrade of Oracle Grid Infrastructure
· Performing Rolling Upgrade of Oracle ASM
· Applying Patches to Oracle ASM
· Updating Oracle Enterprise Manager Cloud Control Target Parameters
· Unlocking the Existing Oracle Clusterware Installation
· Checking Cluster Health Monitor Repository Size After Upgrading
· Downgrading Oracle Clusterware After an Upgrade
B.0 Downloading Oracle 12c GI and DB Software
Grid Infrastructure:
http://download.oracle.com/otn/linux/oracle12c/121010/linuxamd64_12c_grid_1of2.zip
http://download.oracle.com/otn/linux/oracle12c/121010/linuxamd64_12c_grid_2of2.zip
RDBMS:
http://download.oracle.com/otn/linux/oracle12c/121020/linuxamd64_12102_database_1of2.zip
http://download.oracle.com/otn/linux/oracle12c/121020/linuxamd64_12102_database_2of2.zip
B.1 Back Up the Oracle Software Before Upgrades
Before you make any changes to the Oracle software, Oracle recommends that you create a backup of the Oracle software and databases.
GRID_HOME=/u01/app/11.2.0.3/grid
RDBMS_HOME=/u01/app/oracle/product/11.2.0.3
ORACLE_DB=LABDB
ORACLE_SIDs=LABDB1, LABDB2
cd /u01/app/backup
tar -cvf oracle_home_11203.tar /u01/app/oracle/product/11.2.0.3
gzip oracle_home_11203.tar
tar -cvf oracle_inventory.tar /u01/app/oracle/oraInventory
gzip oracle_inventory.tar
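A backup of the Grid home can be taken the same way (a sketch using the GRID_HOME path listed above; stop the clusterware on the node first if you want a consistent copy):
tar -cvf grid_home_11203.tar /u01/app/11.2.0.3/grid
gzip grid_home_11203.tar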
B.3 Options for Oracle Grid Infrastructure Upgrades and Downgrades
Upgrade options from Oracle Grid Infrastructure 11g to Oracle Grid Infrastructure 12c include the following:
· Oracle Grid Infrastructure rolling upgrade which involves upgrading individual nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster
· Oracle Grid Infrastructure non-rolling upgrade by bringing the cluster down and upgrading the complete cluster
Upgrade options from Oracle Grid Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 12c include the following:
· Oracle Grid Infrastructure rolling upgrade, with OCR and voting disks on Oracle ASM
· Oracle Grid Infrastructure complete cluster upgrade (downtime, non-rolling), with OCR and voting disks on Oracle ASM
Downgrade options from Oracle Grid Infrastructure 12c to earlier releases include the following:
· Oracle Grid Infrastructure downgrade to Oracle Grid Infrastructure 11g Release 2 (11.2)
· Oracle Grid Infrastructure downgrades to releases before Oracle Grid Infrastructure 11g Release 2 (11.2), Oracle Grid Infrastructure 11g Release 1 (11.1), Oracle Clusterware and Oracle ASM 10g, if storage for OCR and voting files is on storage other than Oracle ASM
B.4 Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades
Oracle recommends that you use the Cluster Verification Utility tool (CVU) to check if there are any patches required for upgrading your existing Oracle Grid Infrastructure 11g Release 2 (11.2) or Oracle RAC database 11g Release 2 (11.2) installations.
Be aware of the following restrictions and changes for upgrades to Oracle Grid Infrastructure installations, which consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM):
· Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/Opatch. If you delete the directory, then the Grid infrastructure installation owner cannot use OPatch to patch the grid home, and OPatch displays the error message "'checkdir' error: cannot create Grid_home/OPatch".
· To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c Release 1 (12.1), you must first verify if you need to apply any mandatory patches for upgrade to succeed. See Section B.6 for steps to check readiness.
See Also:
Oracle 12c Upgrade Companion (My Oracle Support Note 1462240.1):
https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1462240.1
· Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. You cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
· The same user that owned the earlier release Oracle Grid Infrastructure software must perform the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
· Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.
· During a major release upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), the software in the 12c Release 1 (12.1) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the new Grid home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.
To manage databases in existing earlier release database homes during the Oracle Grid Infrastructure upgrade, use srvctl from the existing database homes (see the sketch below).
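For example, a sketch using the 11.2.0.3 database home and the LABDB database referenced in this guide:
/u01/app/oracle/product/11.2.0.3/bin/srvctl status database -d LABDB
/u01/app/oracle/product/11.2.0.3/bin/srvctl stop instance -d LABDB -i LABDB1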
B.5 Preparing to Upgrade an Existing Oracle Clusterware Installation
If you have an existing Oracle Clusterware installation, then you upgrade your existing cluster by performing an out-of-place upgrade. You cannot perform an in-place upgrade.
The following sections list the steps you can perform before you upgrade Oracle Grid Infrastructure:
B.5.1 Checks to Complete Before Upgrading Oracle Clusterware
Complete the following tasks before starting an upgrade:
1. For each node, use Cluster Verification Utility to ensure that you have completed preinstallation steps. It can generate Fixup scripts to help you to prepare servers. In addition, the installer will help you to ensure all required prerequisites are met.
Ensure that you have information you will need during installation, including the following:
· An Oracle base location for Oracle Clusterware.
GRID_HOME=/u01/app/11.2.0.3/grid
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
ORACLE_DB=LABDB
ORACLE_SIDs=LABDB1, LABDB2
· An Oracle Grid Infrastructure home location that is different from your existing Oracle Clusterware location.
GRID_HOME=/u01/app/12.1.0.1/grid
· SCAN name and addresses, and other network addresses, as described in Chapter 5.
· Privileged user operating system groups, as described in Chapter 6.
· root user access, to run scripts as root during installation, using one of the options described in Section 8.1.1.
2. For the installation owner running the installation, if you have environment variables set for the existing installation, then unset the environment variables $ORACLE_HOME and $ORACLE_SID, as these environment variables are used during upgrade. For example:
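unset ORACLE_HOME
unset ORACLE_SID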
3. If the cluster was previously forcibly upgraded, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before starting another upgrade. For example, if the cluster was forcibly upgraded from 11.2.0.3 to 12.1.0.1, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before upgrading to another release, for example, 12.1.0.2.
B.5.2 Unset Oracle Environment Variables
Unset Oracle environment variables.
If you have set ORA_CRS_HOME as an environment variable, following instructions from Oracle Support, then unset it before starting an installation or upgrade. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.
Check to ensure that installation owner login shell profiles (for example, .profile or .cshrc) do not have ORA_CRS_HOME set.
If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other environment variable set for the Oracle installation user that is connected with Oracle software homes.
Also, ensure that the $ORACLE_HOME/bin path is removed from your PATH environment variable.
unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID
unset TNS_ADMIN
export PATH=/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/local/bin:/bin:/usr/bin
B.5.3 Running the Oracle ORAchk Upgrade Readiness Assessment
ORAchk (Oracle RAC Configuration Audit Tool) Upgrade Readiness Assessment can be used to obtain an automated upgrade-specific health check for upgrades to Oracle Grid Infrastructure 11.2.0.3, 11.2.0.4, 12.1.0.1, and 12.1.0.2. You can run the ORAchk Upgrade Readiness Assessment tool and automate many of the manual pre-upgrade and post upgrade checks.
Oracle recommends that you download and run the latest version of ORAchk from My Oracle Support. For information about downloading, configuring, and running ORAchk configuration audit tool, refer to My Oracle Support note 1457357.1
, which is available at the following URL:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1457357.1
Execution Instructions
Note: It is highly recommended that you review Document 1268927.2 as well as the ORAchk User's Guide for a full understanding of ORAchk prior to executing the steps below.
1. Download ORAchk from Document 1268927.2.
2. Log in to the system as the Oracle RDBMS software installation owner.
3. Stage the orachk.zip kit (downloaded in step 1) in its own directory on the node on which the tool will be executed.
4. Unzip the orachk.zip kit, leaving the script and driver files together in the same directory.
5. Validate the permissions for orachk are 755 (-rwxr-xr-x). If the permissions are not currently set to 755, set the permissions on orachk as follows:
$ chmod 755 orachk
6. During the 11.2.0.3 upgrade planning phase of your pending RAC upgrade, execute ORAchk in pre-upgrade mode and follow the on-screen prompts:
Note: It is HIGHLY recommended that the pre-upgrade checks are executed in the planning phase of the upgrade. This will allow planning and implementation of findings highlighted by ORAchk.
$ ./orachk -u -o pre
7. Review the HTML report generated by the ORAchk pre-upgrade execution and implement the recommended changes as necessary.
8. Once you have successfully upgraded (GI and/or RDBMS), execute ORAchk in post-upgrade mode and follow the on-screen prompts:
$ ./orachk -u -o post
9. Review the HTML report generated by the ORAchk post-upgrade execution and implement the recommended changes as necessary.
What to expect
· The target clusterware and database versions are 11.2.0.3, 11.2.0.4 and 12.1.0.1
· In pre-upgrade mode the tool will detect all databases registered in the clusterware automatically and present a list of databases on which to perform pre-upgrade checks. If any databases that were already upgraded are selected, the pre-upgrade checks will be skipped for them.
· In post-upgrade mode the tool will detect all databases registered in the clusterware automatically and present a list of databases on which to perform post-upgrade checks. If any databases that were not upgraded are selected, the post-upgrade checks will be skipped for them.
· In both modes the tool will check the clusterware and OS appropriately.
· When the tool completes, the user will be referred to an HTML formatted report which will contain the findings and links to additional details and information.
oracle@D2LSENPSH160[LABDB1]# ./orachk -u -o pre
Enter upgrade target version (valid versions are 11.2.0.3.0, 11.2.0.4.0, 12.1.0.1.0, 12.1.0.2.0):- 12.1.0.2.0
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0.3/grid?[y/n][y]y
Checking ssh user equivalency settings on all nodes in cluster
Node d2lsenpsh161 is configured for ssh user equivalency for oracle user
Searching for running databases . . . . .
. .
List of running databases registered in OCR
1. LABDB
2. None of above
Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].1
. .
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
-------------------------------------------------------------------------------------------------------
Oracle Stack Status
-------------------------------------------------------------------------------------------------------
Host Name CRS Installed RDBMS Installed CRS UP ASM UP RDBMS UP DB Instance Name
-------------------------------------------------------------------------------------------------------
d2lsenpsh160 Yes Yes Yes Yes Yes LABDB1
d2lsenpsh161 Yes Yes Yes Yes Yes LABDB2
-------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------
Installed components summary
---------------------------------------------------------------------------------------------------------------------------------
GI_HOME ORACLE_HOME Database Names
---------------------------------------------------------------------------------------------------------------------------------
/u01/app/11.2.0.3/grid - 11.2.0.3.0 /u01/app/oracle/product/11.2.0.3 - 11.2.0.3.0 LABDB
---------------------------------------------------------------------------------------------------------------------------------
…
Results:
WARNING => Shell limit soft nproc for DB is NOT configured according to recommendation
FAIL => Opatch version is lower than recommended in RDBMS_HOME for /u01/app/oracle/product/11.2.0.3
FAIL => Opatch version is lower than recommended in GRID_HOME
INFO => Information about ASM process parameter when its not set to default value
FAIL => ASM_DISKSTRING parameter is either null or set to /dev/*
WARNING => Berkeley Database location does not point to correct GI_HOME
Downloaded and unpacked p6880880_112000_LINUX.zip (OPatch 11.2.0.3.6) into $ORACLE_HOME and $GRID_HOME.
Changed the disk string in the ASM instance:
SQL> alter system set asm_diskstring='ORCL:*';
Execute orachk again:
./orachk -u -o pre
Data collections completed. Checking best practices on d2lsenpsh160.
--------------------------------------------------------------------------------------
WARNING => Shell limit soft nproc for DB is NOT configured according to recommendation
WARNING => TNS_ADMIN environment variable is set
WARNING => One or More Object Names in ALL_OBJECTS table are Reserved Words for LABDB
WARNING => One or More Column Names in ALL_TAB_COLUMNS table are Reserved Words for LABDB
WARNING => Review the PRE-UPGRADE details for the databases checked below for more information for LABDB
WARNING => OS parameter vm.swappiness is NOT set to the recommended value
WARNING => Berkeley Database location does not point to correct GI_HOME
WARNING => Some Users Needing Network ACLs for Oracle Utility Packages Found for LABDB
Data collections completed. Checking best practices on d2lsenpsh161.
--------------------------------------------------------------------------------------
WARNING => Shell limit soft nproc for DB is NOT configured according to recommendation
WARNING => TNS_ADMIN environment variable is set
INFO => Information about ASM process parameter when its not set to default value
WARNING => Berkeley Database location does not point to correct GI_HOME
B.6 Using CVU to Validate Readiness for Oracle Clusterware Upgrades
You can use Cluster Verification Utility (CVU) to assist you with system checks in preparation for starting an upgrade. CVU runs the appropriate system checks automatically, and either prompts you to fix problems, or provides a fixup script to be run on all nodes in the cluster before proceeding with the upgrade.
This section contains the following topics:
· About the CVU Grid Upgrade Validation Command Options
· Example of Verifying System Upgrade Readiness for Grid Infrastructure
B.6.1 About the CVU Grid Upgrade Validation Command Options
You can run upgrade validations in one of two ways:
· Run OUI, and allow the CVU validation built into OUI to perform system checks and generate fixup scripts
· Run the runcluvfy.sh command-line script manually to perform system checks and generate fixup scripts
To use OUI to perform pre-install checks and generate fixup scripts, run the installation as you normally would. OUI starts CVU, and performs system checks as part of the installation process. Selecting OUI to perform these checks is particularly appropriate if you think you have completed preinstallation checks, and you want to confirm that your system configuration meets minimum requirements for installation.
To use the cluvfy.sh command-line script for CVU, navigate to the staging area for the upgrade, where the runcluvfy.sh command is located, and run the command runcluvfy.sh stage -pre crsinst -upgrade to check the readiness of your Oracle Clusterware installation for upgrades. Running runcluvfy.sh with the -pre crsinst -upgrade options performs system checks to confirm if the cluster is in a correct state for upgrading from an existing clusterware installation.
The command uses the following syntax, where variable content is indicated by italics:
runcluvfy.sh stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome
-dest_crshome dest_Gridhome -dest_version dest_release
[-fixup][-method {sudo|root} [-location dir_path] [-user user_name]] [-verbose]
The options are:
· -n nodelist
The -n flag indicates cluster member nodes, and nodelist is the comma-delimited list of non-domain qualified node names on which you want to run a preupgrade verification. If you do not add the -n flag to the verification command, then all the nodes in the cluster are verified. You must add the -n flag if the clusterware is down on the node where runcluvfy.sh is run.
· -rolling
Use this flag to verify readiness for rolling upgrades.
· -src_crshome src_Gridhome
Use this flag to indicate the location of the source Oracle Clusterware or Grid home that you are upgrading, where src_Gridhome is the path to the home that you want to upgrade.
· -dest_crshome dest_Gridhome
Use this flag to indicate the location of the upgrade Grid home, where dest_Gridhome is the path to the Grid home.
· -dest_version dest_release
Use the -dest_version flag to indicate the release number of the upgrade, including any patchset. The release number must include the five digits designating the release to the level of the platform-specific patch. For example: 12.1.0.1.0.
· -fixup [-method {sudo|root} [-location dir_path] [-user user_name]
Use the -fixup flag to indicate that you want to generate instructions for any required steps you need to complete to ensure that your cluster is ready for an upgrade. The default location is the CVU work directory.
The -fixup -method flag defines the method by which root scripts are run. The -method flag requires one of the following options:
· sudo: Run as a user on the sudoers list.
· root: Run as the root user.
If you select sudo, then enter the -location flag to provide the path to Sudo on the server, and enter the -user flag to provide the user account with Sudo privileges.
· -verbose
Use the -verbose flag to produce detailed output of individual checks.
B.6.2 Example of Verifying System Upgrade Readiness for Grid Infrastructure
You can verify that the permissions required for installing Oracle Clusterware have been configured by running a command similar to the following:
As root, create the new Grid home and change ownership to the installation owner on both nodes:
# mkdir -p /u01/app/12.1.0.1/grid
# chown oracle:oinstall /u01/app/12.1.0.1/grid
# chown oracle:oinstall /u01/app/12.1.0.1
As “oracle”
# cd /u01/app/oracle/media/12c/grid
#
unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID
unset TNS_ADMIN
export PATH=/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/local/bin:/bin:/usr/bin
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.1/grid -dest_version 12.1.0.1.0 -fixup -verbose
oracle@D2LSENPSH160[LABDB1]# ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.1/grid -dest_version 12.1.0.1.0 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "D2LSENPSH160"
Destination Node Reachable?
------------------------------------ ------------------------
d2lsenpsh160 yes
d2lsenpsh161 yes
Result: Node reachability check passed from node "D2LSENPSH160"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 passed
d2lsenpsh160 passed
Result: User equivalence check passed for user "oracle"
Checking CRS user consistency
Result: CRS user consistency check successful
Checking ASM disk size consistency
ERROR:
PRCT-1207 : Failed to set the ORACLE_SID for running asmcmd from CRS home location /u01/app/11.2.0.3/grid
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 passed
d2lsenpsh160 passed
Verification of the hosts config file successful
Interface information for node "d2lsenpsh161"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.236.28.161 10.236.28.0 0.0.0.0 UNKNOWN 00:50:56:91:21:B4 1500
eth0 10.236.28.163 10.236.28.0 0.0.0.0 UNKNOWN 00:50:56:91:21:B4 1500
eth1 10.239.74.161 10.239.74.0 0.0.0.0 UNKNOWN 00:50:56:91:21:B5 1500
eth1 169.254.184.42 169.254.0.0 0.0.0.0 UNKNOWN 00:50:56:91:21:B5 1500
eth2 10.236.27.161 10.236.27.0 0.0.0.0 UNKNOWN 00:50:56:91:21:B6 1500
eth3 192.168.0.161 192.168.0.0 0.0.0.0 UNKNOWN 00:50:56:91:21:BE 1500
Interface information for node "d2lsenpsh160"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.236.28.160 10.236.28.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B1 1500
eth0 10.236.28.162 10.236.28.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B1 1500
eth0 10.236.28.176 10.236.28.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B1 1500
eth1 10.239.74.160 10.239.74.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B2 1500
eth1 169.254.136.37 169.254.0.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B2 1500
eth2 10.236.27.160 10.236.27.0 0.0.0.0 10.236.28.1 00:50:56:91:21:B3 1500
eth3 192.168.0.160 192.168.0.0 0.0.0.0 10.236.28.1 00:50:56:91:21:BD 1500
Check: Node connectivity using interfaces on subnet "10.236.28.0"
Check: Node connectivity of subnet "10.236.28.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
d2lsenpsh161[10.236.28.161] d2lsenpsh161[10.236.28.163] yes
d2lsenpsh161[10.236.28.161] d2lsenpsh160[10.236.28.176] yes
d2lsenpsh161[10.236.28.161] d2lsenpsh160[10.236.28.160] yes
d2lsenpsh161[10.236.28.161] d2lsenpsh160[10.236.28.162] yes
d2lsenpsh161[10.236.28.163] d2lsenpsh160[10.236.28.176] yes
d2lsenpsh161[10.236.28.163] d2lsenpsh160[10.236.28.160] yes
d2lsenpsh161[10.236.28.163] d2lsenpsh160[10.236.28.162] yes
d2lsenpsh160[10.236.28.176] d2lsenpsh160[10.236.28.160] yes
d2lsenpsh160[10.236.28.176] d2lsenpsh160[10.236.28.162] yes
d2lsenpsh160[10.236.28.160] d2lsenpsh160[10.236.28.162] yes
Result: Node connectivity passed for subnet "10.236.28.0" with node(s) d2lsenpsh161,d2lsenpsh160
Check: TCP connectivity of subnet "10.236.28.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
d2lsenpsh161:10.236.28.161 d2lsenpsh161:10.236.28.163 passed
d2lsenpsh161:10.236.28.161 d2lsenpsh160:10.236.28.176 passed
d2lsenpsh161:10.236.28.161 d2lsenpsh160:10.236.28.160 passed
d2lsenpsh161:10.236.28.161 d2lsenpsh160:10.236.28.162 passed
Result: TCP connectivity check passed for subnet "10.236.28.0"
Check: Node connectivity using interfaces on subnet "10.239.74.0"
Check: Node connectivity of subnet "10.239.74.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
d2lsenpsh160[10.239.74.160] d2lsenpsh161[10.239.74.161] yes
Result: Node connectivity passed for subnet "10.239.74.0" with node(s) d2lsenpsh160,d2lsenpsh161
Check: TCP connectivity of subnet "10.239.74.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
d2lsenpsh160:10.239.74.160 d2lsenpsh161:10.239.74.161 passed
Result: TCP connectivity check passed for subnet "10.239.74.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.236.28.0".
Subnet mask consistency check passed for subnet "10.239.74.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "10.239.74.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.239.74.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured
Task ASM Integrity check failed...
Checking OCR integrity...
OCR integrity check passed
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 passed
d2lsenpsh160 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 3.8565GB (4043840.0KB) 4GB (4194304.0KB) failed
d2lsenpsh160 3.8565GB (4043840.0KB) 4GB (4194304.0KB) failed
Result: Total memory check failed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 2.3413GB (2454984.0KB) 50MB (51200.0KB) passed
d2lsenpsh160 1.975GB (2070932.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 6GB (6291448.0KB) 3.8565GB (4043840.0KB) passed
d2lsenpsh160 6GB (6291448.0KB) 3.8565GB (4043840.0KB) passed
Result: Swap space check passed
Check: Free disk space for "d2lsenpsh161:/usr,d2lsenpsh161:/etc,d2lsenpsh161:/u01/app/11.2.0.3/grid,d2lsenpsh161:/sbin"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr d2lsenpsh161 / 21.5986GB 6.9586GB passed
/etc d2lsenpsh161 / 21.5986GB 6.9586GB passed
/u01/app/11.2.0.3/grid d2lsenpsh161 / 21.5986GB 6.9586GB passed
/sbin d2lsenpsh161 / 21.5986GB 6.9586GB passed
Result: Free disk space check passed for "d2lsenpsh161:/usr,d2lsenpsh161:/etc,d2lsenpsh161:/u01/app/11.2.0.3/grid,d2lsenpsh161:/sbin"
Check: Free disk space for "d2lsenpsh160:/usr,d2lsenpsh160:/etc,d2lsenpsh160:/u01/app/11.2.0.3/grid,d2lsenpsh160:/sbin"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr d2lsenpsh160 / 12.582GB 6.9586GB passed
/etc d2lsenpsh160 / 12.582GB 6.9586GB passed
/u01/app/11.2.0.3/grid d2lsenpsh160 / 12.582GB 6.9586GB passed
/sbin d2lsenpsh160 / 12.582GB 6.9586GB passed
Result: Free disk space check passed for "d2lsenpsh160:/usr,d2lsenpsh160:/etc,d2lsenpsh160:/u01/app/11.2.0.3/grid,d2lsenpsh160:/sbin"
Check: Free disk space for "d2lsenpsh161:/var"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/var d2lsenpsh161 /var 808MB 5MB passed
Result: Free disk space check passed for "d2lsenpsh161:/var"
Check: Free disk space for "d2lsenpsh160:/var"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/var d2lsenpsh160 /var 749MB 5MB passed
Result: Free disk space check passed for "d2lsenpsh160:/var"
Check: Free disk space for "d2lsenpsh161:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp d2lsenpsh161 /tmp 3.8213GB 1GB passed
Result: Free disk space check passed for "d2lsenpsh161:/tmp"
Check: Free disk space for "d2lsenpsh160:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp d2lsenpsh160 /tmp 3.0622GB 1GB passed
Result: Free disk space check passed for "d2lsenpsh160:/tmp"
Check: User existence for "oracle"
Node Name Status Comment
------------ ------------------------ ------------------------
d2lsenpsh161 passed exists(1100)
d2lsenpsh160 passed exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "oracle"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
d2lsenpsh161 passed exists
d2lsenpsh160 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
d2lsenpsh161 passed exists
d2lsenpsh160 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 yes yes yes yes passed
d2lsenpsh160 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed
Check: Membership of user "oracle" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
d2lsenpsh161 yes yes yes passed
d2lsenpsh160 yes yes yes passed
Result: Membership check for user "oracle" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 3 3,5 passed
d2lsenpsh160 3 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
d2lsenpsh161 hard 65536 65536 passed
d2lsenpsh160 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
d2lsenpsh161 soft 4096 1024 passed
d2lsenpsh160 soft 4096 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
d2lsenpsh161 hard 16384 16384 passed
d2lsenpsh160 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
d2lsenpsh161 soft 2047 2047 passed
d2lsenpsh160 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
There are no oracle patches required for home "/u01/app/11.2.0.3/grid".
There are no oracle patches required for home "/u01/app/11.2.0.3/grid".
Checking for suitability of source home "/u01/app/11.2.0.3/grid" for upgrading to version "12.1.0.1.0".
Result: Source home "/u01/app/11.2.0.3/grid" is suitable for upgrading to version "12.1.0.1.0".
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 x86_64 x86_64 passed
d2lsenpsh160 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 2.6.18-274.el5 2.6.18 passed
d2lsenpsh160 2.6.18-274.el5 2.6.18 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 250 250 250 passed
d2lsenpsh160 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 32000 32000 32000 passed
d2lsenpsh160 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 100 100 100 passed
d2lsenpsh160 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 128 128 128 passed
d2lsenpsh160 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 2070446080 2070446080 2070446080 passed
d2lsenpsh160 2070446080 2070446080 2070446080 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 4096 4096 4096 passed
d2lsenpsh160 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 2097152 2097152 404384 passed
d2lsenpsh160 2097152 2097152 404384 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 6815744 6815744 6815744 passed
d2lsenpsh160 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
d2lsenpsh160 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 4194304 4194304 262144 passed
d2lsenpsh160 4194304 4194304 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 4194304 4194304 4194304 passed
d2lsenpsh160 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 262144 262144 262144 passed
d2lsenpsh160 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 1048586 1048586 1048576 passed
d2lsenpsh160 1048586 1048586 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
d2lsenpsh161 1048576 1048576 1048576 passed
d2lsenpsh160 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 make-3.81-3.el5 make-3.81 passed
d2lsenpsh160 make-3.81-3.el5 make-3.81 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed
d2lsenpsh160 binutils-2.17.50.0.6-26.el5 binutils-2.17.50.0.6 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 gcc(x86_64)-4.1.2-51.el5 gcc(x86_64)-4.1.2 passed
d2lsenpsh160 gcc(x86_64)-4.1.2-54.el5 gcc(x86_64)-4.1.2 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
d2lsenpsh160 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 glibc(x86_64)-2.5-65 glibc(x86_64)-2.5-58 passed
d2lsenpsh160 glibc(x86_64)-2.5-118.el5_10.2 glibc(x86_64)-2.5-58 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed
d2lsenpsh160 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 glibc-devel(x86_64)-2.5-65 glibc-devel(x86_64)-2.5 passed
d2lsenpsh160 glibc-devel(x86_64)-2.5-118.el5_10.2 glibc-devel(x86_64)-2.5 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 gcc-c++(x86_64)-4.1.2-51.el5 gcc-c++(x86_64)-4.1.2 passed
d2lsenpsh160 gcc-c++(x86_64)-4.1.2-54.el5 gcc-c++(x86_64)-4.1.2 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
d2lsenpsh160 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 libgcc(x86_64)-4.1.2-51.el5 libgcc(x86_64)-4.1.2 passed
d2lsenpsh160 libgcc(x86_64)-4.1.2-54.el5 libgcc(x86_64)-4.1.2 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 libstdc++(x86_64)-4.1.2-51.el5 libstdc++(x86_64)-4.1.2 passed
d2lsenpsh160 libstdc++(x86_64)-4.1.2-54.el5 libstdc++(x86_64)-4.1.2 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 libstdc++-devel(x86_64)-4.1.2-51.el5 libstdc++-devel(x86_64)-4.1.2 passed
d2lsenpsh160 libstdc++-devel(x86_64)-4.1.2-54.el5 libstdc++-devel(x86_64)-4.1.2 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 sysstat-7.0.2-11.el5 sysstat-7.0.2 passed
d2lsenpsh160 sysstat-7.0.2-12.el5 sysstat-7.0.2 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 ksh-20100202-1.el5_6.6 ksh-... passed
d2lsenpsh160 ksh-20100621-18.el5_10.1 ksh-... passed
Result: Package existence check passed for "ksh"
Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 nfs-utils-1.0.9-54.el5 nfs-utils-1.0.9-60 failed
d2lsenpsh160 nfs-utils-1.0.9-70.el5 nfs-utils-1.0.9-60 passed
Result: Package existence check failed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 passed
d2lsenpsh160 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
------------------------------------ ------------------------
d2lsenpsh161 yes
d2lsenpsh160 yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
Node Name Port Open?
------------------------------------ ------------------------
d2lsenpsh161 yes
d2lsenpsh160 yes
NTP common Time Server Check started...
NTP Time Server ".INIT." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[d2lsenpsh161, d2lsenpsh160]"...
Check: Clock time offset from NTP Time Server
Time Server: .INIT.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
------------ ------------------------ ------------------------
d2lsenpsh161 0.0 passed
d2lsenpsh160 0.0 passed
Time Server ".INIT." has time offsets that are within permissible limits for nodes "[d2lsenpsh161, d2lsenpsh160]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "oracle" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
d2lsenpsh161 passed does not exist
d2lsenpsh160 passed does not exist
Result: User "oracle" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
d2lsenpsh161 0022 0022 passed
d2lsenpsh160 0022 0022 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 passed
d2lsenpsh160 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations
Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Clusterware version consistency passed.
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
d2lsenpsh161 yes failed
d2lsenpsh160 yes failed
Daemon not configured check failed for process "avahi-daemon"
Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
d2lsenpsh161 yes failed
d2lsenpsh160 yes failed
Daemon not running check failed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
-------------- --------------- ----------------
Check failed. Failed on nodes Reboot required?
-------------- --------------- ----------------
Daemon "avahi-daemon" not d2lsenpsh161 no
configured and running ,d2lsenpsh160
Execute "/tmp/CVU_12.1.0.1.0_oracle/runfixup.sh" as root user on nodes "d2lsenpsh161,d2lsenpsh160" to perform the fix up operations manually
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_oracle/runfixup.sh" has completed on nodes "d2lsenpsh161,d2lsenpsh160"
Fix: Daemon "avahi-daemon" not configured and running
Node Name Status
------------------------------------ ------------------------
d2lsenpsh161 failed
d2lsenpsh160 failed
ERROR:
PRVG-9023 : Manual fix up command "/tmp/CVU_12.1.0.1.0_oracle/runfixup.sh" was not issued by root user on node "d2lsenpsh161"
PRVG-9023 : Manual fix up command "/tmp/CVU_12.1.0.1.0_oracle/runfixup.sh" was not issued by root user on node "d2lsenpsh160"
Result: "Daemon "avahi-daemon" not configured and running" could not be fixed on nodes "d2lsenpsh161,d2lsenpsh160"
Fix up operations for selected fixable prerequisites were unsuccessful on nodes "d2lsenpsh161,d2lsenpsh160"
Pre-check for cluster services setup was unsuccessful on all the nodes.
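In this session the fix-up failed only because the generated script was not run as root. Based on the prompt CVU printed above, the expected sequence is to run the fixup script as root on both nodes before pressing ENTER in the runcluvfy.sh session, for example:
# /tmp/CVU_12.1.0.1.0_oracle/runfixup.sh
The path is the one reported by CVU for this session and will differ in other environments. After the script completes on d2lsenpsh161 and d2lsenpsh160, return to the runcluvfy.sh session and press ENTER so the fixable checks can be re-evaluated.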
B.7 Understanding Rolling Upgrades Using Batches
Upgrades from earlier releases require that you upgrade the entire cluster. You cannot select or de-select individual nodes for upgrade. Oracle does not support attempting to add additional nodes to a cluster during a rolling upgrade.
Oracle recommends that you leave Oracle RAC instances running when upgrading Oracle Clusterware. When you start the root script on each node, the database instances on that node are shut down, and the rootupgrade.sh script then starts the instances again. If you upgrade from Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) or later to any later release of Oracle Grid Infrastructure, then all nodes are selected for upgrade by default.
You can use root user automation to automate running the rootupgrade.sh script during the upgrade. When you use root automation, you can divide the nodes into groups, or batches, and start upgrades of these batches. Between batches, you can move services from nodes running the previous release to the upgraded nodes, so that services are not affected by the upgrade. Oracle recommends that you use root automation and allow the rootupgrade.sh script to stop and start instances automatically. You can also continue to run root scripts manually.
B.8 Performing Rolling Upgrade of Oracle Grid Infrastructure
This section contains the following topics:
· Performing a Standard Upgrade from an Earlier Release
· Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable
· Upgrading Inaccessible Nodes After Forcing an Upgrade
B.8.1 Performing a Standard Upgrade from an Earlier Release
Use the following procedure to upgrade the cluster from an earlier release:
1. Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
2. On the node selection page, select all nodes.
3. Select installation options as prompted. Oracle recommends that you configure root script automation, so that the rootupgrade.sh script can be run automatically during the upgrade.
4. Run the root scripts, either automatically or manually:
· Running root scripts automatically
If you have configured root script automation, then use the pause between batches to relocate services from the nodes running the previous release to the new release.
· Running root scripts manually
If you have not configured root script automation, then run the rootupgrade.sh script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. When the script is run successfully on all the nodes except the last node, run the script on the last node.
When upgrading from 12.1.0.1 Oracle Flex Cluster, Oracle recommends that you run the rootupgrade.sh script on all Hub Nodes before running it on Leaf Nodes.
5. After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) and you left the check box labeled ASMCA checked (the default), then Oracle Automatic Storage Management Configuration Assistant (ASMCA) runs automatically, and the Oracle Grid Infrastructure upgrade is complete. If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.
If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). You can choose to upgrade Oracle ASM at this time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created, and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.
6. Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
Note:
At the end of the upgrade, if you set the Oracle Cluster Registry (OCR) backup location manually to the earlier release Oracle Clusterware home (CRS home), then you must change the OCR backup location to the new Oracle Grid Infrastructure home (Grid home). If you did not set the OCR backup location manually, then the backup location is changed for you during the upgrade.
Because upgrades of Oracle Clusterware are out-of-place upgrades, the previous release Oracle Clusterware home cannot be the location of the current release OCR backups. Backups in the old Oracle Clusterware home can be deleted.
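If you did set the backup location manually, a minimal sketch of pointing it at the new Grid home with the ocrconfig utility is shown below; the target directory is illustrative, so substitute a directory inside your new Grid home:
# /u01/app/12.1.0.1/grid/bin/ocrconfig -backuploc /u01/app/12.1.0.1/grid/cdata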
See Also:
Section A.12, "Failed or Incomplete Installations and Upgrades" for information about completing failed or incomplete upgrades
ASMLib installation and configuration verification. - This task checks the ASMLib installation and configuration across the systems.
Check Failed on Nodes: [D2LSENPSH161, D2LSENPSH160]
Verification result of failed node: D2LSENPSH161 Details:
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "D2LSENPSH161" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "D2LSENPSH161" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "D2LSENPSH161" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
Verification result of failed node: D2LSENPSH160 Details:
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "D2LSENPSH160" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "D2LSENPSH160" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "D2LSENPSH160" does not match with cluster nodes - Cause: The ASMLib configuration check found inconsistent settings across cluster nodes. - Action: Ensure that the ASMLib is correctly installed and configured on all the nodes with same configuration settings and that the user has the necessary access privileges for the configuration file.
-bash-3.2# uname -a
Linux D2LSENPSH160 2.6.18-400.1.1.el5 #1 SMP Sun Dec 14 06:01:17 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
-bash-3.2#
-bash-3.2# /u01/app/12.1.0.1/grid/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0.1/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2015/01/13 17:15:06 CLSRSC-363: User ignored prerequisites during installation
ASM upgrade has started on first node.
OLR initialization - successful
2015/01/13 17:17:58 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/01/13 17:21:38 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/01/13 17:22:43 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
-bash-3.2#
-bash-3.2# uname -a
Linux D2LSENPSH161 2.6.18-400.1.1.el5 #1 SMP Sun Dec 14 06:01:17 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
-bash-3.2# /u01/app/12.1.0.1/grid/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0.1/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2015/01/13 17:24:21 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2015/01/13 17:26:56 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/01/13 17:30:47 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 12.1.0.1.0
2015/01/13 17:33:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
-bash-3.2#
Cause - The plug-in failed in its perform method Action - Refer to the logs or contact Oracle Support Services. Log File Location
/u01/app/oraInventory/logs/installActions2015-01-13_04-47-36PM.log
>>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-01-13_04-47-36PM. Please wait ...oracle@D2LSENPSH160[LABDB1]# You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2015-01-13_04-47-36PM.log
B.8.2 Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable
If some nodes become unreachable in the middle of an upgrade, then you cannot complete the upgrade, because the upgrade script (rootupgrade.sh) did not run on the unreachable nodes. Because the upgrade is incomplete, Oracle Clusterware remains in the previous release. You can confirm that the upgrade is incomplete by entering the command crsctl query crs activeversion.
To resolve this problem, run the rootupgrade command with the -force flag on any of the nodes where the rootupgrade.sh script has already completed as follows:
Grid_home/rootupgrade.sh -force
For example:
# /u01/app/12.1.0/grid/rootupgrade.sh -force
This command forces the upgrade to complete. Verify that the upgrade has completed by using the command crsctl query crs activeversion. The active release should be the upgrade release.
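For example, running crsctl from the new Grid home (the path is illustrative):
$ /u01/app/12.1.0.1/grid/bin/crsctl query crs activeversion
After a successful forced upgrade in this environment, the reported active version should be 12.1.0.1.0.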
The force cluster upgrade has the following limitations:
· All active nodes must be upgraded to the newer release.
· All inactive nodes (accessible or inaccessible) may be either upgraded or not upgraded.
· For inaccessible nodes, after patch set upgrades, you can delete the node from the cluster. If the node becomes accessible later, and the patch version upgrade path is supported, then you can upgrade it to the new patch version.
· If the cluster was previously forcibly upgraded, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before starting the upgrade.
B.8.3 Upgrading Inaccessible Nodes After Forcing an Upgrade
Starting with Oracle Grid Infrastructure 12c, after you complete a force cluster upgrade, you can join inaccessible nodes to the cluster as an alternative to deleting the nodes, which was required in earlier releases. To use this option, Oracle Grid Infrastructure 12c Release 1 (12.1) software must already be installed on the nodes.
To complete the upgrade of nodes that were inaccessible or unreachable:
1. Log in as the Grid user on the node that is to be joined to the cluster.
2. Change directory to the /crs/install directory in the Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home. For example:
3. $ cd /u01/12.1.0/grid/crs/install
4. Run the following command, where -existingnode is the name of the option and upgraded_node is any node that was successfully upgraded and is currently part of the cluster:
5. $ rootupgrade.sh -join -existingnode upgraded_node
Note:
The -join operation is not supported for Oracle Clusterware releases earlier than 11.2.0.1.0. In such cases, delete the node and add it to the clusterware using the addNode command.
B.8.4 Changing the First Node for Install and Upgrade
If the first node becomes inaccessible, you can force another node to be the first node for installation or upgrade. During installation, if root.sh fails to complete on the first node, run the following command on another node using the -force option:
root.sh -force -first
For upgrade, run the following command:
rootupgrade.sh -force -first
B.9 Performing Rolling Upgrade of Oracle ASM
After you have completed the Oracle Clusterware portion of Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade, you may need to upgrade Oracle ASM separately under the following conditions:
· If you are upgrading from a release in which Oracle ASM was in a separate Oracle home, such as Oracle ASM 10g Release 2 (10.2) or Oracle ASM 11g Release 1 (11.1)
· If the Oracle ASM portion of the Oracle Grid Infrastructure upgrade failed, or for some other reason Automatic Storage Management Configuration assistant (asmca) did not run.
You can use asmca to complete the upgrade separately, but you should do it soon after you upgrade Oracle Clusterware, as Oracle ASM management tools such as srvctl do not work until Oracle ASM is upgraded.
Note:
ASMCA performs a rolling upgrade only if the earlier release of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA performs a non-rolling upgrade, in which ASMCA shuts down all Oracle ASM instances on all nodes of the cluster, and then starts an Oracle ASM instance on each node from the new Oracle Grid Infrastructure home.
After you have upgraded Oracle ASM with Oracle Grid Infrastructure 12c Release 1, you can install individual patches for Oracle ASM by downloading them from the Oracle Automated Release Update site. See Section B.9.1, "About Upgrading Oracle ASM Separately" for more information about upgrading Oracle ASM separately using ASMCA.
B.9.1 About Upgrading Oracle ASM Separately
Note the following if you intend to perform either full release or software patch level rolling upgrades of Oracle ASM:
· The active release of Oracle Clusterware must be 12c Release 1 (12.1). To determine the active release, enter the following command:
· $ crsctl query crs activeversion
· You can upgrade a single instance Oracle ASM installation to a clustered Oracle ASM installation. However, you can only upgrade an existing single instance Oracle ASM installation if you run the installation from the node on which the Oracle ASM installation is installed. You cannot upgrade a single instance Oracle ASM installation on a remote node.
· You must ensure that any rebalance operations on your existing Oracle ASM installation are completed before starting the upgrade process (see the query sketch after this list).
· During the upgrade process, you alter the Oracle ASM instances to an upgrade state. You do not need to shut down database clients unless they are on Oracle ACFS. However, because this upgrade state limits Oracle ASM operations, you should complete the upgrade process soon after you begin. The following are the operations allowed when an Oracle ASM instance is in the upgrade state:
· Diskgroup mounts and dismounts
· Opening, closing, resizing, or deleting database files
· Recovering instances
· Queries of fixed views and packages: Users are allowed to query fixed views and run anonymous PL/SQL blocks using fixed packages, such as dbms_diskgroup
· You do not need to shut down database clients unless they are on Oracle ACFS.
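As noted in the rebalance item above, a minimal sketch of confirming that no rebalance is in progress, assuming a connection with the SYSASM privilege to the local Oracle ASM instance, is:
$ sqlplus / as sysasm
SQL> SELECT group_number, operation, state, est_minutes FROM v$asm_operation;
If the query returns no rows, no rebalance or other long-running Oracle ASM operation is active, and the upgrade can proceed.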
See Also:
See Section B.9.2, "Upgrading Oracle ASM Using ASMCA" for steps to upgrade Oracle ASM separately using ASMCA
B.9.2 Upgrading Oracle ASM Using ASMCA
Complete the following tasks if you must upgrade from an Oracle ASM release where Oracle ASM was installed in a separate Oracle home, or if the Oracle ASM portion of Oracle Grid Infrastructure upgrade failed to complete:
1. On the node you plan to start the upgrade, set the environment variable ASMCA_ROLLING_UPGRADE as true. For example:
2. $ export ASMCA_ROLLING_UPGRADE=true
3. From the Oracle Grid Infrastructure 12c Release 1 (12.1) home, start ASMCA. For example:
4. $ cd /u01/12.1/grid/bin
5. $ ./asmca
6. Select Upgrade.
ASM Configuration Assistant upgrades Oracle ASM in succession for all nodes in the cluster.
7. After you complete the upgrade, run the command to unset the ASMCA_ROLLING_UPGRADE environment variable.
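For example, in the same shell session that was used to start ASMCA:
$ unset ASMCA_ROLLING_UPGRADE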
See Also:
Oracle Database Upgrade Guide and Oracle Automatic Storage Management Administrator's Guide for additional information about preparing an upgrade plan for Oracle ASM, and for starting, completing, and stopping Oracle ASM upgrades
B.10 Applying Patches to Oracle ASM
After you have upgraded Oracle ASM with Oracle Grid Infrastructure 12c Release 1, you can install individual patches for Oracle ASM by downloading them from My Oracle Support.
This section explains about Oracle ASM patches as follows:
· About Individual (One-Off) Oracle ASM Patches
· About Oracle ASM Software Patch Levels
· Patching Oracle ASM to a Software Patch Level
B.10.1 About Individual (One-Off) Oracle ASM Patches
Individual patches are called one-off patches. An Oracle ASM one-off patch is available for a specific release of Oracle ASM. If a patch you want is available, then you can download the patch and apply it to Oracle ASM using the OPatch utility. The OPatch inventory keeps track of the patches you have installed for your release of Oracle ASM. If there is a conflict between the patches you have installed and the patches you want to apply, then the OPatch utility advises you of these conflicts. See Section B.10.3, "Patching Oracle ASM to a Software Patch Level" for information about applying patches to Oracle ASM using the OPatch utility.
B.10.2 About Oracle ASM Software Patch Levels
The software patch level for Oracle Grid Infrastructure represents the set of all one-off patches applied to the Oracle Grid Infrastructure software release, including Oracle ASM. The release is the release number, in the format of major, minor, and patch set release number. For example, with the release number 12.1.0.1, the major release is 12, the minor release is 1, and 0.1 is the patch set number. With one-off patches, the major and minor release remains the same, though the patch levels change each time you apply or roll back an interim patch.
As with standard upgrades to Oracle Grid Infrastructure, at any given point in time for normal operation of the cluster, all the nodes in the cluster must have the same software release and patch level. Because one-off patches can be applied as rolling upgrades, all possible patch levels on a particular software release are compatible with each other.
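Assuming the crsctl query options available in Oracle Clusterware 12c, a short sketch of comparing the release and the patch level on a node is:
$ crsctl query crs activeversion
$ crsctl query crs softwarepatch
The first command reports the active release of the cluster, and the second reports the patch level configured on the local node.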
See Also:
· Section B.8.1, "Performing a Standard Upgrade from an Earlier Release" for information about upgrading Oracle Grid Infrastructure
· Section B.10.3, "Patching Oracle ASM to a Software Patch Level" for information about applying patches to Oracle ASM using the OPatch Utility
B.10.3 Patching Oracle ASM to a Software Patch Level
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a new cluster state called "Rolling Patch" is available. This mode is similar to the existing "Rolling Upgrade" mode in terms of the Oracle ASM operations allowed in this quiesce state.
1. Download patches you want to apply from My Oracle Support:
Select the Patches and Updates tab to locate the patch.
Oracle recommends that you select Recommended Patch Advisor, and enter the product group, release, and platform for your software. My Oracle Support provides you with a list of the most recent patch set updates (PSUs) and critical patch updates (CPUs).
Place the patches in an accessible directory, such as /tmp.
2. Change directory to the /opatch directory in the Grid home. For example:
3. $ cd /u01/app/12.1.0/grid/opatch
4. Review the patch documentation for the patch you want to apply, and complete all required steps before starting the patch upgrade.
5. Follow the instructions in the patch documentation to apply the patch.
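A minimal sketch of the OPatch steps, assuming a patch has been unzipped into /tmp/patch_dir (an illustrative directory) and that the Grid home is the one used above, is:
$ cd /u01/app/12.1.0/grid/opatch
$ ./opatch lsinventory -oh /u01/app/12.1.0/grid
$ ./opatch prereq CheckConflictAgainstOHWithDetail -ph /tmp/patch_dir -oh /u01/app/12.1.0/grid
The lsinventory output lists the patches already installed, and the prereq check reports any conflicts before you apply the patch by following its README.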
B.11 Updating Oracle Enterprise Manager Cloud Control Target Parameters
Because Oracle Grid Infrastructure 12c Release 1 (12.1) is an out-of-place upgrade of the Oracle Clusterware home in a new location (the Oracle Grid Infrastructure for a cluster home, or Grid home), the path for the CRS_HOME parameter in some parameter files must be changed. If you do not change the parameter, then you encounter errors such as "cluster target broken" on Oracle Enterprise Manager Cloud Control.
To resolve the issue, upgrade the Enterprise Manager Cloud Control target, and then update the Enterprise Manager Agent Base Directory on each cluster member node running an agent, as described in the following sections:
· Updating the Enterprise Manager Cloud Control Target After Upgrades
· Updating the Enterprise Manager Agent Base Directory After Upgrades
B.11.1 Updating the Enterprise Manager Cloud Control Target After Upgrades
1. Log in to Enterprise Manager Cloud Control.
2. Navigate to the Targets menu, and then to the Cluster page.
3. Click a cluster target that was upgraded.
4. Click Cluster, then Target Setup, and then Monitoring Configuration from the menu.
5. Update the value for Oracle Home with the new Grid home path.
6. Save the updates.
B.11.2 Updating the Enterprise Manager Agent Base Directory After Upgrades
1. Navigate to the bin directory in the Management Agent home.
The Agent Base directory is the directory where the Management Agent home is created. The Management Agent home is in the path Agent_Base_Directory/core/EMAgent_Version. For example, if the Agent Base directory is /u01/app/emagent, then the Management Agent home is created as /u01/app/emagent/core/12.1.0.1.0.
2. In the /u01/app/emagent/core/12.1.0.1.0/bin directory, open the file emctl with a text editor.
3. Locate the parameter CRS_HOME, and update the parameter to the new Grid home path.
4. Repeat steps 1-3 on each node of the cluster with an Enterprise Manager agent.
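A hedged sketch of steps 1 through 3, using the agent home path from the example above and assuming the parameter appears in emctl as an assignment of the form CRS_HOME=... (verify the exact form in your file before editing, and keep a backup):
$ cd /u01/app/emagent/core/12.1.0.1.0/bin
$ cp emctl emctl.bak
$ grep -n 'CRS_HOME' emctl
$ sed -i 's|^CRS_HOME=.*|CRS_HOME=/u01/app/12.1.0.1/grid|' emctl
Repeat the same edit on every cluster node that runs an Enterprise Manager agent.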
B.12 Unlocking the Existing Oracle Clusterware Installation
After upgrade from previous releases, if you want to deinstall the previous release Oracle Grid Infrastructure Grid home, then you must first change the permission and ownership of the previous release Grid home. Complete this task using the following procedure:
Log in as root, and change the permission and ownership of the previous release Grid home using the following command syntax, where oldGH is the previous release Grid home, swowner is the Oracle Grid Infrastructure installation owner, and oldGHParent is the parent directory of the previous release Grid home:
#chmod -R 755 oldGH
#chown -R swowner oldGH
#chown swowner oldGHParent
For example:
#chmod -R 755 /u01/app/11.2.0.3/grid
#chown -R oracle /u01/app/11.2.0.3/grid
#chown oracle /u01/app/11.2.0.3
After you change the permissions and ownership of the previous release Grid home, log in as the Oracle Grid Infrastructure installation owner (oracle, in the preceding example), and use the Oracle Grid Infrastructure 12c deinstallation tool to remove the previous release Grid home (oldGH).
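A sketch of that deinstallation, assuming the homes from the example above and that the deinstall tool accepts a -home argument pointing at the previous release Grid home (confirm the exact invocation against Section 10.6.1 before running it):
$ cd /u01/app/12.1.0.1/grid/deinstall
$ ./deinstall -home /u01/app/11.2.0.3/grid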
See Also:
Section 10.6.1, "About the Deinstallation Tool"
B.13 Checking Cluster Health Monitor Repository Size After Upgrading
If you are upgrading to Oracle Grid Infrastructure from a prior release that used IPD/OS, then review the Cluster Health Monitor repository size (the CHM repository). Oracle recommends that you review your CHM repository needs and enlarge the repository size if you want to maintain a larger CHM repository.
Note:
Your previous IPD/OS repository is deleted when you install Oracle Grid Infrastructure, and you run the root.sh script on each node.
Cluster Health Monitor is not available with IBM: Linux on System z configurations.
By default, the CHM repository size is a minimum of either 1 GB or 3600 seconds (1 hour) of data; this 1 GB default applies regardless of the size of the cluster.
To enlarge the CHM repository, use the following command syntax, where retention_time is the size of the CHM repository in seconds:
oclumon manage -repos changeretentiontime retention_time
The value for retention_time must be more than 3600 (one hour) and less than 259200 (three days). If you enlarge the CHM repository size, then you must ensure that there is local space available for the repository size you select on each node of the cluster. If there is not sufficient space available, then you can move the repository to shared storage.
For example, to set the repository size to four hours:
$ oclumon manage -repos changeretentiontime 14400
B.14 Downgrading Oracle Clusterware After an Upgrade
After a successful or a failed upgrade to Oracle Clusterware 12c Release 1 (12.1), you can restore Oracle Clusterware to the previous release. This section contains the following topics:
· About Downgrading Oracle Clusterware After an Upgrade
· Downgrading to Releases Before 11g Release 2 (11.2.0.2)
· Downgrading to 11g Release 2 (11.2.0.2) or Later Release
B.14.1 About Downgrading Oracle Clusterware After an Upgrade
Downgrading Oracle Clusterware restores the Oracle Clusterware configuration to the state it was in before the Oracle Clusterware 12c Release 1 (12.1) upgrade. Any configuration changes you performed during or after the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade are removed and cannot be recovered.
In the downgrade procedures, the following variables are used:
· first node is the first node on which the rootupgrade script completed successfully.
· non-first nodes are all other nodes where the rootupgrade script completed successfully.
To restore Oracle Clusterware to the previous release, use the downgrade procedure for the release to which you want to downgrade.
Note:
When downgrading after a failed upgrade, if rootcrs.sh does not exist on a node, then use perl rootcrs.pl instead of rootcrs.sh.
B.14.2 Downgrading to Releases Before 11g Release 2 (11.2.0.2)
To downgrade Oracle Clusterware:
1. If the rootupgrade script failed on a node, then downgrade the node where the upgrade failed:
rootcrs.sh -downgrade
2. On all other nodes where the rootupgrade script ran successfully, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade [-force] to stop the 12c Release 1 (12.1) resources, and shut down the Oracle Grid Infrastructure 12c Release 1 (12.1) stack:
rootcrs.sh -downgrade
3. After the rootcrs.sh -downgrade script has completed on all non-first nodes, on the first node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade [-force] -lastnode.
For example:
# /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade -lastnode
Note:
With Oracle Grid Infrastructure 12c, you no longer need to provide the location of the previous release Grid home or release number.
This script downgrades the OCR. If you want to stop a partial or failed Oracle Grid Infrastructure 12c Release 1 (12.1) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command. Run this command from a directory that has write permissions for the Oracle Grid Infrastructure installation user.
4. On any of the cluster member nodes where the rootcrs script has run successfully:
a. Log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/12.1.0/grid is the location of the new (upgraded) Grid home:
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
Add the flag -cfs if the Grid home is a shared home.
5. On any of the cluster member nodes where the rootupgrade.sh script has run successfully:
a. Log in as the Oracle Grid Infrastructure installation owner (grid).
b. Use the following command to start the installer, where the path you provide for the flag ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware installation
For example:
$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
c. For downgrades to 11.1 and earlier releases
If you are downgrading to Oracle Clusterware 11g Release 1 (11.1) or an earlier release, then you must run root.sh manually from the earlier release Oracle Clusterware home to complete the downgrade after you complete step b.
OUI prompts you to run root.sh manually from the earlier release Oracle Clusterware installation home in sequence on each member node of the cluster to complete the downgrade. After you complete this task, downgrade is completed.
Running root.sh from the earlier release Oracle Clusterware installation home restarts the Oracle Clusterware stack, starts up all the resources previously registered with Oracle Clusterware in the earlier release, and configures the old initialization scripts to run the earlier release Oracle Clusterware stack.
After completing the downgrade, update the entry for Oracle ASM instance in the oratab file (/etc/oratab or /var/opt/oracle/oratab) on every node in the cluster as follows:
+ASM<instance#>:<RAC-ASM home>:N
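For illustration only (the instance number and the earlier release Grid home path below are assumptions), the entry on the first node of a cluster whose earlier release home is /u01/app/11.2.0/grid might look like:
+ASM1:/u01/app/11.2.0/grid:N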
B.14.3 Downgrading to 11g Release 2 (11.2.0.2) or Later Release
Follow these steps to downgrade Oracle Grid Infrastructure:
1. On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade [-force] to stop the 12c Release 1 (12.1) resources, and shut down the Oracle Grid Infrastructure 12c Release 1 (12.1) stack. For example:
# /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade
If you want to stop a partial or failed Oracle Grid Infrastructure 12c Release 1 (12.1) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.
2. After the rootcrs.sh -downgrade script has completed on all remote nodes, on the local node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade [-force] -lastnode.
For example:
# /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade -lastnode
Note:
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), you no longer need to provide the location of the earlier release Grid home or earlier release number.
This script downgrades the OCR. If you want to stop a partial or failed Oracle Grid Infrastructure 12c Release 1 (12.1) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command. Run this command from a directory that has write permissions for the Oracle Grid Infrastructure installation user.
3. On any of the cluster member nodes where the rootupgrade.sh script has run successfully:
a. Log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/12.1.0/grid is the location of the new (upgraded) Grid home:
$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
Add the flag -cfs if the Grid home is a shared home.
4. On any of the cluster member nodes where the rootupgrade script has run successfully:
a. Log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide for the flag ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware installation
For example:
$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
c. For downgrades to 11.2.0.2
If you are downgrading to Oracle Clusterware 11g Release 2 (11.2.0.2), then you must start the Oracle Clusterware stack manually after you complete step b.
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home using the command crsctl start crs. For example, where the earlier release home is /u01/app/11.2.0/grid, use the following command on each node:
/u01/app/11.2.0/grid/bin/crsctl start crs
5. For downgrades to 12.1.0.1
If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.1), then run the following commands to configure the Grid Management Database:
a. Start the 12.1.0.1 Oracle Clusterware stack on all nodes.
b. On any node, remove the MGMTDB resource as follows:
12101_Grid_home/bin/srvctl remove mgmtdb
c. Run DBCA in silent mode from the 12.1.0.1 Oracle home and create the Management Database as follows:
12101_Grid_home/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName ASM_DG_NAME -datafileJarLocation 12101_grid_home/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords
d. Configure the Management Database by running the Configuration Assistant from the location 12101_Grid_home/bin/mgmtca.
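Putting steps 1 and 2 of this procedure together, the ordering can be scripted as in the sketch below. GRID_HOME, the remote node names, and password-less SSH as root are assumptions for illustration only; adapt them to your cluster before use.
#!/bin/sh
# Sketch only: downgrade all remote nodes first, then finish on the local node.
GRID_HOME=/u01/app/12.1.0/grid
REMOTE_NODES="racnode2 racnode3"
for node in $REMOTE_NODES
do
    ssh root@$node "$GRID_HOME/crs/install/rootcrs.sh -downgrade"
done
# The local node runs last and completes the downgrade (this downgrades the OCR)
$GRID_HOME/crs/install/rootcrs.sh -downgrade -lastnode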
Copyright © 2014, Oracle and/or its affiliates. All rights reserved.
Best,
Ken Chando
HP Enterprise Services
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Hi Ken
The referenced document is a good starting point; however, please note the following:
1. We cannot clone the ORACLE_HOME, mainly because we will be replacing the host operating system (Solaris) with a different operating system (RHEL 6). Therefore, the RDBMS home as well as the OMS binaries will all be installed rather than cloned.
2. The database cannot be cloned either, for the same reason: the platforms we are migrating from and to are different, so the database will need to be converted first and then imported.
As for your questions:
Questions I do have are:
1. Do we currently use Load Balancer in this environment? No
2. Any BI Publisher? No
3. Do we have a multi-OMS setup in the current environment? Currently No but the new system will be multi-OMS. Please review the DPS.
Thanks,
Omer
From: Chando, Kenneth
Sent: Thursday, October 22, 2015 10:16 AM
To: Abdalla, Omer
Subject: RE: Enterprise Manager 1-System Upgrade (different host) from
10.2.0.5 or from 11.1.0.1 to 12c (Doc ID 329.1)
Hi Omer,
I found the above guidelines interesting. I believe they may be a good pointer toward attaining our goal of a cross-platform migration of OEM 12c.
As of now, Oracle does not seem to have published a comprehensive guide on this. However, I have included at the bottom of the page (see attached) some helpful Oracle links, should we want to upgrade our current OEM 12c after we successfully migrate it from the Solaris server to our new RHEL server.
**NOTE:
To move OEM 12c from Solaris to RHEL, we might have to clone the current existing $ORACLE_HOME (on the old Solaris platform) and then move it to the new RHEL server so as to save time on patching. We would then only need to apply the recent patches that have not yet been applied to the $ORACLE_HOME on the old Solaris server.
At a high level, we would have to:
1. Backup & Recover Enterprise Manager,
2. Install New OS platform (In this case RHEL)
3. Create accounts
4. Assign Storage volume
5. Relocate the Management Repository to the new host (RHEL)
**NOTE:
Questions I do have are:
1. Do we currently use Load Balancer in this environment?
2. Any BI Publisher?
3. Do we have a multi-OMS setup in the current environment?
Let me know your thoughts.
Thank you!
Best,
Ken Chando
HP Enterprise Services
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Abdalla, Omer
Sent: Monday, October 19, 2015 1:55 PM
To: Chando, Kenneth
Subject: RE: Enterprise Manager 1-System Upgrade (different host) from
10.2.0.5 or from 11.1.0.1 to 12c (Doc ID 329.1)
Ken,
I expected you to go through the available documentation, find out what is relevant to our situation, and write a concise step-by-step guide to get it migrated based on the vendor documents you reviewed.
Source environment: Oracle 12c OEM and Oracle 11g RDBMS running on Solaris 10
Target environment: Oracle 12c OEM and Oracle 12c RDBMS running on RHEL 6.
Thanks,
Omer
From: Chando, Kenneth
Sent: Monday, October 19, 2015 1:43 PM
To: Abdalla, Omer
Subject: Enterprise Manager 1-System Upgrade (different host) from
10.2.0.5 or from 11.1.0.1 to 12c (Doc ID 329.1)
Hi Omer,
Here is the cross-platform migration documentation that I researched. Please go through it and let me know your thoughts. After you go through it, we can see how to customize it for our environment.
Here is the OEM Cross-platform Migration link:
http://docs.oracle.com/cd/E24628_01/upgrade.121/e22625/toc.htm
Thank you!
Best,
Ken Chando
HP Enterprise Services
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Hi Lionel,
Pele (the Brazilian soccer player) once said:
You’re INDEED a great TEAM player! You know how to lead people when they’re down and disappointed. You’re full of great inspiration.
My short experience working with you has been a wonderful treasure. You embody the skillset to be a great and wonderful leader.
Thank you very much!!!
I wish you well and look forward to being privileged to work with you again someday.
**If you don’t mind, I would like to have your contact to keep in touch**
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Charles, Lionel
Sent: Tuesday, November 10, 2015 1:02 PM
To: DC2 DATABASE SUPPORT
Subject: Final days
Team,
I plan to be on vacation beginning Monday November 16th through Friday November 27th 2015. I will return on Monday November 30th 2015 to say goodbye to a great team I will be leaving behind.
Lionel Charles
(240) 419-0146 – (cell)
(703) 713-7390 – (Lync)
Email Lionel.Charles@hpe.com
Thank you for your feedback |Recognition@hpe.com
Ken,
I am humbled by your comparison and praise and I thank you. The team is a great one of talented individuals and I am happy to be a part of it. Be a good listener, respect others, let all have a fair chance, be patient and ask if you don't know. Don't forget to do your homework.
I will share my contact information when I return on November 30th.
Here is my email address:
Hope your family is doing well.
Regards
Lionel
Hi Rajeev,
The group distribution list for our team is DC2 Database Support.
Kindly grant SQL Server access to everyone in this list in the DC2LAB.
Below is some helpful information we got from Bryan.
Data Center | Physical Node | Network           | IP            | Port
DC2         | D2LSENPSH138  | MGMT3 (Public)    | 10.236.28.138 | 717
            |               | PROD3             | 10.239.74.138 |
            |               | BUR3              | 10.236.27.138 |
            |               | Heartbeat Network | 10.10.10.10   |
            |               | Replica Endpoint  |               | 5022
DC2         | D2LSENPSH139  | MGMT3 (Public)    | 10.236.28.139 | 717
            |               | PROD3             | 10.239.74.139 |
            |               | BUR3              | 10.236.27.139 |
            |               | Heartbeat Network | 10.10.10.11   |
            |               | Replica Endpoint  |               | 5022
DC1         | D2LSENPSH338  | MGMT3 (Public)    | 10.78.28.138  | 717
            |               | PROD3             | 10.78.74.138  |
            |               | BUR3              | 10.78.27.138  |
            |               | Heartbeat Network | 192.168.0.138 |
            |               | Replica Endpoint  |               |
DC1         | D2LSENPSH339  | MGMT3 (Public)    | 10.78.28.139  | 717
            |               | PROD3             | 10.78.74.139  |
            |               | BUR3              | 10.78.27.139  |
            |               | Heartbeat Network | 192.168.0.139 |
            |               | Replica Endpoint  |               | 5022
Cluster IP for DC2: 10.236.28.140
Cluster IP for DC1: 10.78.28.140
Cluster Name: WINSQLCLSTR
Quorum: Node and File Share Majority
SQL AG Name: SQLAG1
Sorry for the long delay, give it another shot with the new vcenter (SAME IP, 10.236.28.130 for vCenter Manager).
Bryan
Thanks!
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Bruce, Ken, Omer, Cyrus, Johnnie, Abdel and Hank,
Being a part of your special team meant so much to me. You guys truly exhibit what team work is all about. You never said no when I asked. You did not abandon me when I did not know. I never felt alone among you. You were always ready to share and care. Even when you were away from work I could always count on you. You are all special in your individual roles and as a team.
Please know that I will treasure the experiences I shared with you and know I wish you continued success in your work life and family life. Don’t stray from the values you hold dear. Keep supporting each other and make Hank look good. He thinks the world of your team and he brags about it.
Thank you Hank, Omer, Ken, Johnnie, Cyrus, Abdel and Bruce.
Lionel Charles
(New contact info below)
(301) 377-3944
echo >> $RMAN_LOG_FILE
chmod 666 $RMAN_LOG_FILE
# ---------------------------------------------------------------------------
# Log the start of this script.
# ---------------------------------------------------------------------------
echo Script $0 >> $RMAN_LOG_FILE
echo ==== started on `date` ==== >> $RMAN_LOG_FILE
echo >> $RMAN_LOG_FILE
# ---------------------------------------------------------------------------
# Replace /u01/app/oracle/product/11.2.0, below, with the Oracle home path.
# ---------------------------------------------------------------------------
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
export ORACLE_HOME
# ---------------------------------------------------------------------------
# Oracle SID of the target database (set below).
# ---------------------------------------------------------------------------
ORACLE_SID=EAIRP
export ORACLE_SID
# ---------------------------------------------------------------------------
# Oracle DBA user id (account) that will run RMAN (set below).
# ---------------------------------------------------------------------------
ORACLE_USER=oracle
# ---------------------------------------------------------------------------
# Set the target connect string.
# Replace "sys/manager", below, with the target connect string.
# ---------------------------------------------------------------------------
TARGET_CONNECT_STR=/
# ---------------------------------------------------------------------------
# Set the Oracle Recovery Manager name.
# ---------------------------------------------------------------------------
RMAN=$ORACLE_HOME/bin/rman
# ---------------------------------------------------------------------------
# Print out the value of the variables set by this script.
# ---------------------------------------------------------------------------
echo >> $RMAN_LOG_FILE
echo "RMAN: $RMAN" >> $RMAN_LOG_FILE
echo "ORACLE_SID: $ORACLE_SID" >> $RMAN_LOG_FILE
echo "ORACLE_USER: $ORACLE_USER" >> $RMAN_LOG_FILE
echo "ORACLE_HOME: $ORACLE_HOME" >> $RMAN_LOG_FILE
# ---------------------------------------------------------------------------
# Print out the value of the variables set by bphdb.
# ---------------------------------------------------------------------------
echo >> $RMAN_LOG_FILE
echo "NB_ORA_FULL: $NB_ORA_FULL" >> $RMAN_LOG_FILE
echo "NB_ORA_INCR: $NB_ORA_INCR" >> $RMAN_LOG_FILE
echo "NB_ORA_CINC: $NB_ORA_CINC" >> $RMAN_LOG_FILE
echo "NB_ORA_SERV: $NB_ORA_SERV" >> $RMAN_LOG_FILE
echo "NB_ORA_POLICY: $NB_ORA_POLICY" >> $RMAN_LOG_FILE
# ---------------------------------------------------------------------------
# NOTE: This script assumes that the database is properly opened. If desired,
# this would be the place to verify that.
# ---------------------------------------------------------------------------
echo >> $RMAN_LOG_FILE
# ---------------------------------------------------------------------------
# If this script is executed from a NetBackup schedule, NetBackup
# sets an NB_ORA environment variable based on the schedule type.
# The NB_ORA variable is then used to dynamically set BACKUP_TYPE
# For example, when:
# schedule type is BACKUP_TYPE is
# ---------------- --------------
# Automatic Full INCREMENTAL LEVEL=0
# Automatic Differential Incremental INCREMENTAL LEVEL=1
# Automatic Cumulative Incremental INCREMENTAL LEVEL=1 CUMULATIVE
#
# For user initiated backups, BACKUP_TYPE defaults to incremental
# level 0 (full). To change the default for a user initiated
# backup to incremental or incremental cumulative, uncomment
# one of the following two lines.
# BACKUP_TYPE="INCREMENTAL LEVEL=1"
# BACKUP_TYPE="INCREMENTAL LEVEL=1 CUMULATIVE"
#
# Note that we use incremental level 0 to specify full backups.
# That is because, although they are identical in content, only
# the incremental level 0 backup can have incremental backups of
# level > 0 applied to it.
# ---------------------------------------------------------------------------
if [ "$NB_ORA_FULL" = "1" ]
then
echo "Full backup requested" >> $RMAN_LOG_FILE
BACKUP_TYPE="INCREMENTAL LEVEL=0"
elif [ "$NB_ORA_INCR" = "1" ]
then
echo "Differential incremental backup requested" >> $RMAN_LOG_FILE
BACKUP_TYPE="INCREMENTAL LEVEL=1"
elif [ "$NB_ORA_CINC" = "1" ]
then
echo "Cumulative incremental backup requested" >> $RMAN_LOG_FILE
BACKUP_TYPE="INCREMENTAL LEVEL=1 CUMULATIVE"
elif [ "$BACKUP_TYPE" = "" ]
then
echo "Default - Full backup requested" >> $RMAN_LOG_FILE
BACKUP_TYPE="INCREMENTAL LEVEL=0"
fi
# ---------------------------------------------------------------------------
# Call Recovery Manager to initiate the backup. This example does not use a
# Recovery Catalog. If you choose to use one, replace the option 'nocatalog'
# from the rman command line below with the
# 'rcvcat <userid>/<passwd>@<tns alias>' statement.
#
# Note: Any environment variables needed at run time by RMAN
# must be set and exported within the switch user (su) command.
# ---------------------------------------------------------------------------
# Backs up the whole database. This backup is part of the incremental
# strategy (this means it can have incremental backups of levels > 0
# applied to it).
#
# We do not need to explicitly request the control file to be included
# in this backup, as it is automatically included each time file 1 of
# the system tablespace is backed up (the inference: as it is a whole
# database backup, file 1 of the system tablespace will be backed up,
# hence the controlfile will also be included automatically).
#
# Typically, a level 0 backup would be done at least once a week.
#
# The scenario assumes:
# o you are backing your database up to two tape drives
# o you want each backup set to include a maximum of 5 files
# o you wish to include offline datafiles, and read-only tablespaces,
# in the backup
# o you want the backup to continue if any files are inaccessible.
# o you are not using a Recovery Catalog
# o you are explicitly backing up the control file. Since you are
# specifying nocatalog, the controlfile backup that occurs
# automatically as the result of backing up the system file is
# not sufficient; it will not contain records for the backup that
# is currently in progress.
# o you want to archive the current log, back up all the
# archive logs using two channels, putting a maximum of 20 logs
# in a backup set, and deleting them once the backup is complete.
#
# Note that the format string is constructed to guarantee uniqueness and
# to enhance NetBackup for Oracle backup and restore performance.
#
#
# NOTE WHEN USING TNS ALIAS: When connecting to a database
# using a TNS alias, you must use a send command or a parms operand to
# specify environment variables. In other words, when accessing a database
# through a listener, the environment variables set at the system level are not
# visible when RMAN is running. For more information on the environment
# variables, please refer to the NetBackup for Oracle Admin. Guide.
#
# ---------------------------------------------------------------------------
CMD_STR="
ORACLE_HOME=$ORACLE_HOME
export ORACLE_HOME
ORACLE_SID=$ORACLE_SID
export ORACLE_SID
$RMAN target $TARGET_CONNECT_STR nocatalog msglog $RMAN_LOG_FILE append << EOF
RUN {
ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL ch01 TYPE 'SBT_TAPE';
SEND 'NB_ORA_POLICY=COMP_PROJ_Linux_ORA_EAIRP_Inc';
BACKUP
$BACKUP_TYPE
SKIP INACCESSIBLE
TAG db_inc_backup
FILESPERSET 5
# recommended format
FORMAT 'bk_%s_%p_%t'
DATABASE;
sql 'alter system archive log current';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
# backup all archive logs
ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL ch01 TYPE 'SBT_TAPE';
BACKUP
TAG db_arch_backup
filesperset 20
FORMAT 'al_%s_%p_%t'
ARCHIVELOG ALL DELETE INPUT;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
#
# Note: During the process of backing up the database, RMAN also backs up the
# control file. This version of the control file does not contain the
# information about the current backup because "nocatalog" has been specified.
# To include the information about the current backup, the control file should
# be backed up as the last step of the RMAN section. This step would not be
# necessary if we were using a recovery catalog.
#
ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE';
BACKUP
# recommended format
FORMAT 'cntrl_%s_%p_%t'
CURRENT CONTROLFILE;
RELEASE CHANNEL ch00;
}
EOF
"
# Initiate the command string
if [ "$CUSER" = "root" ]
then
su - $ORACLE_USER -c "$CMD_STR" >> $RMAN_LOG_FILE
RSTAT=$?
else
/usr/bin/sh -c "$CMD_STR" >> $RMAN_LOG_FILE
RSTAT=$?
fi
# ---------------------------------------------------------------------------
# Log the completion of this script.
# ---------------------------------------------------------------------------
if [ "$RSTAT" = "0" ]
then
LOGMSG="ended successfully"
else
LOGMSG="ended in error"
fi
echo >> $RMAN_LOG_FILE
echo Script $0 >> $RMAN_LOG_FILE
echo ==== $LOGMSG on `date` ==== >> $RMAN_LOG_FILE
echo >> $RMAN_LOG_FILE
exit $RSTAT
oracle@D2CSEVPHQ004[EAIRP]# vi nb_inc_backup_EAIRP.sh
#!/bin/sh
# $Header: hot_database_backup.sh,v 1.2 2002/08/06 23:51:42 $
#
#bcpyrght
#***************************************************************************
#* $VRTScprght: Copyright 1993 - 2007 Symantec Corporation, All Rights Reserved $ *
#***************************************************************************
#ecpyrght
#
# ---------------------------------------------------------------------------
# hot_database_backup.sh
# ---------------------------------------------------------------------------
# This script uses Recovery Manager to take a hot (inconsistent) database
# backup. A hot backup is inconsistent because portions of the database are
# being modified and written to the disk while the backup is progressing.
# You must run your database in ARCHIVELOG mode to make hot backups. It is
# assumed that this script will be executed by user root. In order for RMAN
# to work properly we switch user (su -) to the oracle dba account before
# execution. If this script runs under a user account that has Oracle dba
# privilege, it will be executed using this user's account.
# ---------------------------------------------------------------------------
# ---------------------------------------------------------------------------
# Determine the user which is executing this script.
# ---------------------------------------------------------------------------
CUSER=`id |cut -d"(" -f2 | cut -d ")" -f1`
# ---------------------------------------------------------------------------
# Put output in <this file name>.out. Change as desired.
# Note: output directory requires write permission.
# ---------------------------------------------------------------------------
RMAN_LOG_FILE=${0}.out
# ---------------------------------------------------------------------------
# You may want to delete the output file so that backup information does
# not accumulate. If not, delete the following lines.
# ---------------------------------------------------------------------------
if [ -f "$RMAN_LOG_FILE" ]
then
rm -f "$RMAN_LOG_FILE"
fi
# -----------------------------------------------------------------
# Initialize the log file.
"nb_inc_backup_EAIRP.sh" 295L, 11626C
I am not sure if you are all aware that HPE is a Diamond Partner of Oracle (there are only a handful of companies who have that kind of partnership with Oracle). This partnership entitles us to access a lot of internal vendor training and certification materials that are not available to the general public, and HPE provides vouchers for taking some of the certification exams that are deemed necessary to maintain the partnership status.
Our Company Id is 6057. Please review the following SharePoint site for more information on registering for OPN:
http://ent192.sharepoint.hp.com/teams/hpopncollab/Wiki%20Pages/Set-up.aspx
The HPE OPN team also has a presence in Yammer (I am sure you are all aware of what Yammer is):
https://www.yammer.com/hpe.com/#/threads/inGroup?type=in_group&feedId=5364881
Thanks,
OMER ABDALLA
DC2 Database Support
DC2 Program, An ISO 20000:2011 Organization
omer.abdalla@hpe.com
omer.abdalla@associates.hq.dhs.gov
T +1 919 424 5448
M +1 202 870 3162
Hewlett-Packard Enterprise
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
USA
HARDENING SCRIPT:
select OWNER, TABLE_NAME, PRIVILEGE from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
PROMPT
PROMPT Please revoke these privileges from PUBLIC by executing the following statements:
set head off feed off
select 'revoke '||PRIVILEGE||' on '||trim(OWNER)||'.'||TABLE_NAME||' from PUBLIC;'
from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
set head on feed on
PROMPT
PROMPT All system privileges except for CREATE SESSION must be restricted to DBAs,
PROMPT application object owner accounts/schemas (locked accounts), and default Oracle accounts.
PROMPT List of system privileges assigned to Roles
break on grantee skip 1;
col privilege format a35
select grantee, privilege , admin_option
from dba_sys_privs
where grantee in (select role from dba_roles)
and grantee not in ('SELECT_CATALOG_ROLE', 'DBA'
,'IMP_FULL_DATABASE'
,'EXP_FULL_DATABASE','RECOVERY_CATALOG_OWNER'
,'SCHEDULER_ADMIN', 'AQ_ADMINISTRATOR_ROLE')
and privilege not in ('CREATE SESSION')
and (admin_option = 'YES' or privilege like '%ANY%')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of Roles assigned to Users
break on granted_role skip 1 ;
select granted_role, grantee, admin_option
from dba_role_privs
where grantee not in ('SYS','SYSTEM', 'DBA',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS',
'OLAPSYS','SYSMAN','PERFSTAT')
order by granted_role, grantee
/
clear breaks;
PROMPT
PROMPT List of system privs assigned directly to users
PROMPT These should be reassigned using roles.
break on grantee skip 1;
select grantee, privilege
from dba_sys_privs
where grantee not in (select role from dba_roles)
and grantee not in ('SYS','SYSTEM',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS','ORDPLUGINS',
'XDB','WMSYS','DBSNMP','OLAPSYS','SYSMAN','PERFSTAT')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of object privs assigned directly to users
PROMPT Privileges should be controlled using roles.
col privilege format a10;
col grantee format a15;
col owner_object format a40;
break on grantee on privilege skip 1;
select grantee, privilege,
owner||'.'||table_name owner_object
from dba_tab_privs
where grantee not in (select role from dba_roles)
and grantee not in ('SYS','SYSTEM','PUBLIC',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS',
'SDB','WMSYS','XDB','DBSNMP',
'OLAPSYS','SYSMAN','PERFSTAT')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of users that can pass on system privs and the objects they control
PROMPT Users should not be able to pass system privs to others
break on grantee;
select grantee, privilege
from dba_sys_privs
where admin_option='YES'
and grantee not in ('DBA','SYSTEM','SYS', 'SCHEDULER_ADMIN'
,'XDB','AQ_ADMINISTRATOR_ROLE')
order by grantee, privilege
/
clear breaks;
PROMPT
PROMPT List of system privileges that should be reviewed and possibly revoked
break on grantee skip 1;
select grantee, privilege
from dba_sys_privs
where( privilege like 'ADMINISTER %'
or privilege like '%ANY%'
or (privilege like 'ALTER%' and privilege not like '%SESSION')
or
privilege like 'DROP %'
or
privilege like 'AUDIT%'
or privilege in ('BECOME USER', 'CREATE DATABASE LINK', 'CREATE PROFILE',
'CREATE ROLE', 'CREATE USER', 'CREATE ROLLBACK SEGMENT',
'EXPORT FULL DATABASE', 'IMPORT FULL DATABASE', 'MANAGE TABLESPACE')
)
and grantee not in ('DBA','SYSTEM','SYS','IMP_FULL_DATABASE'
,'EXP_FULL_DATABASE','DMSYS', 'SCHEDULER_ADMIN','ORDSYS', 'XDB'
,'MDSYS','RECOVERY_CATALOG_OWNER','WMSYS','CTXSYS','DMSYS','DBSNMP',
'PERFSTAT','ORDPLUGINS', 'AQ_ADMINISTRATOR_ROLE','OUTLN' )
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of object privs that should be reviewed and possibly revoked.
col owner_object format a40;
col grantee format a15;
col privilege format a10;
break on grantee skip 1;
select grantee, privilege,
owner||'.'||table_name owner_object
from dba_tab_privs
where owner in ('SYS','SYSTEM')
and table_name like 'DBA%'
and grantee not in ('SELECT_CATALOG_ROLE','SYSTEM','DBA'
,'MDSYS','ORDSYS','WMSYS','DMSYS','AQ_ADMINISTRATOR_ROLE','CTXSYS')
/
clear breaks;
clear columns;
PROMPT
PROMPT List of objects created using sys or system
PROMPT excluding those created on installation
break on object_type skip 1;
col object_name format a40;
select distinct object_type, object_name
from dba_objects
where owner in ('SYS','SYSTEM')
and trunc(created) > (select trunc(created) from v$database)
and object_type not like 'INDEX%'
order by object_type, object_name
/
clear breaks;
column owner format a10;
column segment_name format a25;
column segment_type format a25;
set feedback off heading off
select 'The Following is a list of all objects that are owned by users other than SYS and SYSTEM '||chr(13)||chr(10),
'but are stored in the SYSTEM tablespace....'
from dual
where 0 < ( select count(*)
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM','OUTLN')
and tablespace_name = 'SYSTEM' )
/
set heading on
break on owner skip 1;
select owner, segment_name, segment_type
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM','OUTLN')
and tablespace_name = 'SYSTEM'
order by owner, segment_name
/
prompt
prompt
set feedback off heading off
select 'The Following Users have the SYSTEM tablespace as their Default or '||chr(13)||chr(10),
'Temporary Tablespace. Please change that for all non-system accounts'
from dual
where 0 <
( select count(*)
from sys.dba_users
where username not in ('SYS', 'SYSTEM','OUTLN')
and ( default_tablespace = 'SYSTEM' or temporary_tablespace = 'SYSTEM') )
/
set heading on
column username format a10;
column default_tablespace format a15 heading 'Default';
column temporary_tablespace format a15 heading 'Temporary';
column account_status format a16 heading 'Account Status';
select username, default_tablespace, temporary_tablespace, account_status
from sys.dba_users
where username not in ('SYS', 'SYSTEM','OUTLN')
and ( default_tablespace = 'SYSTEM' or temporary_tablespace = 'SYSTEM')
order by username
/
set feedback on
prompt
prompt
set feedback on
column username format a20;
PROMPT Opened Accounts
select username , account_status
from dba_users where account_status = 'OPEN'
/
PROMPT Accounts NOT open
select username , account_status
from dba_users where account_status != 'OPEN'
/
spool off;
Thank you!
Derek E. Doolittle
Special Agent
Federal Investigative Services
P.O. Box 71159
Soldier Support Center
4-2843 Normandy Drive, Room B-H-2
email: derek.doolittle@opm.gov
Confidentiality Notice: The documents accompanying this email transmittal and/or the content of the email may contain confidential information belonging to the sender. This information is intended only for the use of the individual or entity named above. The authorized recipient of this information is prohibited from disclosing this information to any other party without prior authorization. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or action taken in reliance on the contents of these documents is strictly prohibited. If you have received this email transmittal in error, please notify the sender immediately by reply email and destroy all copies of the original message.
From: Chando, Kenneth [mailto:kenneth.chando@hpe.com]
Sent: Wednesday, January 27, 2016 10:46 AM
To: Doolittle, Derek E
Subject: HR Details
Hi Derek,
My HR details are:
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Hi Derek,
See requested address for my siblings below:
NAME      | Address
Susan     | 19243 Lancer Circle, Purcellville, VA 20132
Josephine | 109 Houndschase Run, Cary, NC 27513
Janvier   | 6163 Bushmill Rd, Raleigh, NC 27613
Mathias   | 2825 Schubert Dr., Silver Spring, MD 20904
John      | 2632 Weddington Ave, Charlotte, NC 28204
Florence  | Douala, Cameroon (Cameroon has no organized street address)
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Doolittle, Derek E [mailto:Derek.Doolittle@opm.gov]
Sent: Thursday, January 28, 2016 7:55 AM
To: Chando, Kenneth
Subject: RE: HR Details
I also need to know the current home addresses for all the individuals you provided. Are Janvier and Florence US citizens or registered aliens?
Derek E. Doolittle
Special Agent
Federal Investigative Services
P.O. Box 71159
Soldier Support Center
4-2843 Normandy Drive, Room B-H-2
email: derek.doolittle@opm.gov
Confidentiality Notice: The documents accompanying this email transmittal and/or the content of the email may contain confidential information belonging to the sender. This information is intended only for the use of the individual or entity named above. The authorized recipient of this information is prohibited from disclosing this information to any other party without prior authorization. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or action taken in reliance on the contents of these documents is strictly prohibited. If you have received this email transmittal in error, please notify the sender immediately by reply email and destroy all copies of the original message.
From: Chando, Kenneth [mailto:kenneth.chando@hpe.com]
Sent: Wednesday, January 27, 2016 10:46 AM
To: Doolittle, Derek E
Subject: HR Details
Hi Derek,
My HR details are:
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Ken,
I need one last piece of information. What does Janvier do for a living (employment) and how often do you have contact with him? Weekly, monthly, etc. and the type (phone, email, etc.).
Thanks,
Derek E. Doolittle
Special Agent
Federal Investigative Services
P.O. Box 71159
Soldier Support Center
4-2843 Normandy Drive, Room B-H-2
email: derek.doolittle@opm.gov
Confidentiality Notice: The documents accompanying this email transmittal and/or the content of the email may contain confidential information belonging to the sender. This information is intended only for the use of the individual or entity named above. The authorized recipient of this information is prohibited from disclosing this information to any other party without prior authorization. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or action taken in reliance on the contents of these documents is strictly prohibited. If you have received this email transmittal in error, please notify the sender immediately by reply email and destroy all copies of the original message.
Ken,
One last thing, I need the phone number for Mathias.
Thanks,
Derek E. Doolittle
Special Agent
Federal Investigative Services
P.O. Box 71159
Soldier Support Center
4-2843 Normandy Drive, Room B-H-2
email: derek.doolittle@opm.gov
Confidentiality Notice: The documents accompanying this email transmittal and/or the content of the email may contain confidential information belonging to the sender. This information is intended only for the use of the individual or entity named above. The authorized recipient of this information is prohibited from disclosing this information to any other party without prior authorization. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or action taken in reliance on the contents of these documents is strictly prohibited. If you have received this email transmittal in error, please notify the sender immediately by reply email and destroy all copies of the original message.
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Chando, Kenneth
Sent: Thursday, April 14, 2016 2:18 PM
To: Chando, Kenneth <kenneth.chando@hpe.com>
Subject: Testing Results
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
83%
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Chando, Kenneth
Sent: Thursday, April 14, 2016 2:34 PM
To: Chando, Kenneth <kenneth.chando@hpe.com>
Subject: RE: Testing Results-ANSWERS
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Chando, Kenneth
Sent: Thursday, April 14, 2016 2:18 PM
To: Chando, Kenneth <kenneth.chando@hpe.com>
Subject: Testing Results
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
DBASE TUNING REPORT
oracle@d2aseutsh018.ndc.local[openview]# vi rpt_db_tuning.sql
substr(to_char(a.extents,'9999'),2)||'/'||
substr(to_char(b.max_extents,'9999'),2) "Extents",
100*(a.extents/b.max_extents) "% to Max"
from sys.dba_segments a, sys.dba_indexes b
where a.owner not in ('SYS','SYSTEM')
and b.index_name = a.segment_name
and b.owner = a.owner
and a.segment_type = 'INDEX'
and 100*(a.extents/b.max_extents) >= 75
union
select substr(a.tablespace_name,1,19) "Tablespace",
a.segment_type "Type",
substr(a.owner||'.'||a.segment_name,1,37) "Segment",
bytes "Bytes",
substr(to_char(a.extents,'9999'),2)||'/'||
substr(to_char(b.max_extents,'9999'),2) "Extents",
100*(a.extents/b.max_extents) "% to Max"
from sys.dba_segments a, sys.dba_tables b
where a.owner not in ('SYS','SYSTEM')
and b.table_name = a.segment_name
and b.owner = a.owner
and a.segment_type = 'TABLE'
and 100*(a.extents/b.max_extents) >= 75
order by 2, 1, 3
/
clear computes
ttitle off
prompt
prompt
rem ******************************* USER STATS ********************************
set newpage 0
rem Shows number of accounts , DBA accounts and then analyzes breakdown
rem by tablespaces
ttitle center 'Number of Users per Tablespace' skip 2
column df format a20 heading 'Default Tablespace'
column tmp format a20 heading 'Temporary Tablespace'
column cnt_df format 9,999 heading '# of|Users'
column cnt_tmp format 9,999 heading '# of|Users'
select
Default_tablespace df,
count(*) cnt_df,
temporary_tablespace tmp,
count(*) cnt_tmp
from sys.dba_users
group by default_tablespace, temporary_tablespace
/
clear computes
clear breaks
ttitle off
rem ********************** DATABASE OBJECTS ****************************
rem This counts the number of DB objects defined in the catalog
rem - it can provide additional information when sizing the
rem DC_ Dictionary cache parameters and reducing recursive calls
set newpage 0
column object_type format a20 heading 'Object Type'
column cnt format 9,999,999 heading 'Count'
compute sum of cnt on report
break on report
ttitle center 'Count of Objects' skip 2
select owner, object_type, count(*)
from dba_objects
group by owner, object_type;
select object_type , count(object_type) cnt
from sys.dba_objects
group by object_type
union
select 'COLUMN' , count(*) kount
from sys.col$
union
select 'DB LINKS' , count(*) kount
from sys.link$
union
select 'CONSTRAINT' , count(*) kount
from sys.con$
/
prompt
prompt
clear computes
clear breaks
ttitle off
rem *********** THIS PRINTS OUT THE CURRENT SGA PARAMETERS ********
set newpage 0 termout on
prompt
prompt Reporting Database Parameters
prompt
set termout off
column name format a45 heading 'Parameter Name'
column value format a60 heading 'Parameter Value'
ttitle center 'Database Parameter Values' skip 2
select name , value
from v$parameter
order by name
/
clear computes
clear breaks
ttitle off
rem ******************************* Initialization Parameters - Recommendation ******************************
rem checks various key INIT.ORA parameters and advises on their appropriateness.
set newpage 0
set heading off
set feedback off
set verify off
set echo off
ttitle '* * * * *' center skip 2
select 'Archiving is not turned on for the '||
' database! This means that recovery is only ' nl,
'possible up to the last cold backup or export. This is not good practice for a ' nl,
'production database. Check if this is acceptable.' nl
from v$parameter
where name = 'log_archive_start'
and value = 'FALSE'
/
select 'The Buffer cache (DB_BLOCK_BUFFERS * DB_BLOCK_SIZE ) is set too low for a Production ' nl,
'database. It is set to '||to_char(max(bytes)) ||'. It should be at least 16 Megabytes for a serious' nl,
'production system. If you have sufficient free memory, consider increasing it '
from v$sgastat
where name = 'db_block_buffers'
having max(bytes) < 16000000
/
select 'Your SORT_AREA_RETAINED_SIZE and SORT_AREA_SIZE are set to the same value('||a.value||'). ' nl,
'Unless you are running a database which is totally dedicated to large batch jobs, it is best ' nl,
'to allocate the extra memory only to the people that need it. Typical settings are 64K for' nl,
'SORT_AREA_RETAINED_SIZE and 2Meg for SORT_AREA_SIZE '
from v$parameter a , v$parameter b
where a.name = 'sort_area_size'
and b.name = 'sort_area_retained_size'
and b.value = a.value
/
select 'The SEQUENCE_CACHE_ENTRIES is undersized. It should ideally be sized to fit all of the ' nl,
'cached entries required for all sequences. The parameter is set to '||b.value||'.' nl,
'It should be set to '||sum (a.cache_size) ||'.'
from sys.dba_sequences a, v$parameter b
where b.name='sequence_cache_entries'
group by b.value
having sum (a.cache_size) < b.value
/
select 'Your LOG_BUFFER could be enlarged to improve performance. It is currently set ' nl,
'to '||b.value||'. There have been a number of redo log space request ('||a.value||') waits.' nl,
'Consider enlarging the LOG_BUFFER to a value such as '||b.value * 1.5||'.'
from v$parameter b, v$sysstat a
where b.name = 'log_buffer'
and a.name = 'redo log space requests'
and a.value > 50
and b.value < (select 1000000 from dual)
/
select 'Warning: Enqueue Timeouts are '||value||'. They should be zero if the INIT.ora parameter is ' line1,
'high enough. Try increasing INIT.ora parameter ENQUEUE_RESOURCES and see if the Timeouts reduces.'
from v$sysstat
where name = 'enqueue timeouts'
and value > 0
/
rem ******************************* System Tablespace - Recommendation **************************************
rem Make sure that SYSTEM tablespace is used only to store Oracle objects.
ttitle off
column owner format a10;
column segment_name format a25;
column segment_type format a25;
column nl newline;
select 'The Following is a list of all objects that are owned by users other than SYS and SYSTEM ' nl,
'but are stored in the SYSTEM tablespace....' nl
from dual
where 0 < ( select count(*)
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM')
and tablespace_name = 'SYSTEM' )
/
break on owner skip 1;
select ' ', owner, segment_name, segment_type
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM')
and tablespace_name = 'SYSTEM'
order by owner, segment_name
/
prompt
prompt
select 'You have modified your SYSTEM Tablespace PCTINCREASE to '||pct_increase nl,
'This is different to Oracles recommended setting of 50.' nl,
'Make sure that you will not have any problems as a result of this. ' nl
from sys.dba_tablespaces
where pct_increase != 50
and tablespace_name = 'SYSTEM'
/
prompt
prompt
select 'The Following Users have the SYSTEM tablespace as their Default Tablespace.' nl,
'This is bad practice because it can often cause the SYSTEM tablespace to ' nl,
'fill and Oracle to grind to a halt. ' nl
from dual
where 0 <
( select count(*)
from sys.dba_users
where username not in ('SYS', 'SYSTEM')
and default_tablespace = 'SYSTEM' )
/
select ' ', username
from sys.dba_users
where username not in ('SYS', 'SYSTEM')
and default_tablespace = 'SYSTEM'
order by username
/
prompt
prompt
select 'The Following Users have the SYSTEM tablespace as their Temporary Tablespace.' nl,
'This is bad practice because it can often cause the SYSTEM tablespace to ' nl,
'fill and Oracle to grind to a halt. ' nl
from dual
where 0 <
( select count(*)
from sys.dba_users
where username not in ('SYS', 'SYSTEM')
and temporary_tablespace = 'SYSTEM' )
/
select ' ', username
from sys.dba_users
where username not in ('SYS', 'SYSTEM')
and temporary_tablespace = 'SYSTEM'
order by username
/
prompt
prompt End of Report.
ttitle '************ END OF REPORT ************' center skip 2
select ' ' from dual;
clear computes
clear breaks
ttitle off
spool off
set heading on feedback on termout on
======2====================================================================================================================
oracle@d2aseutsh018.ndc.local[openview]# vi rpt_hardening.sql
-- Oracle Hardening Report
-- This script will generate a file named rpt_hardening_<SID>_<DATE>.txt
-- Written by: Omer Abdalla - 10/29/2008 - Last updated 05/12/2011
--
col dbname new_value n_dbname noprint
col rptnme new_value n_rptnme noprint
select name||'_'||to_char(sysdate,'YYMMDD') rptnme, name dbname
from v$database;
spool rpt_hardening_&n_rptnme..txt
set pagesize 6000;
set linesize 120;
set feedback off;
set echo off;
btitle off;
ttitle off;
set head off;
select
'Oracle Hardening Report for '||instance_name||' as of '||
to_char(sysdate,'DL')||' '||to_char(sysdate,'HH24:MI')
from v$instance
/
select
'Oracle Version: '||version from v$instance
/
col name format a60
col status format a20
PROMPT
PROMPT There should be at least two copies of the controlfile, each in a separate filesystem/disk.
PROMPT Here are the current control files for this instance:
select name, status from v$controlfile
/
PROMPT
PROMPT There should be at least 3 redo logs with 2 members in each group (in separate disks).
PROMPT Here are the current groups and logfiles:
break on groupno;
column groupno format a10
col member format a60
select 'GROUP '||group# groupno, member from v$logfile
order by group#
/
PROMPT
PROMPT The following are file destinations as set in the spfile:
col name format a30;
col value format a50;
select name, value from v$spparameter
where name like '%_dest'
/
set feedback on;
PROMPT
PROMPT
PROMPT Now checking Security Compliance items
PROMPT
PROMPT 1.1 Oracle Default Users
PROMPT With the exception of SYS, SYSTEM, DBSNMP, and SYSMAN
PROMPT All default accounts should be locked and expired.
PROMPT Please lock the following accounts (if any listed):
col username format a37
col profile format a20
select username, profile, account_status
from dba_users
where username in (
'DIP',
'DMSYS',
'EXFSYS',
'MDDATA',
'SCOTT',
'SI_INFORMTN_SCHEMA',
'OUTLN',
'WKPROXY',
'WMSYS',
'ORDSYS',
'ORDPLUGINS',
'MDSYS',
'CTXSYS',
'XDB',
'ANONYMOUS',
'OWNER',
'WKSYS',
'ODM_MTR',
'ODM',
'OLAPSYS',
'HR',
'OE',
'PM',
'SH',
'QS_ADM',
'QS',
'QS_WS',
'QS_ES',
'QS_OS',
'QS_CBADM',
'QS_CB',
'QS_CS') and rtrim(account_status) not in ('LOCKED','EXPIRED '||Chr(38)||' LOCKED')
/
PROMPT
PROMPT 1.1 Database Demonstration Objects
PROMPT The following demo/sample schema accounts should not exist
PROMPT in production databases. Please remove them:
select username, account_status
from dba_users
where username in (
'SH',
'HR',
'OE',
'PM',
'QS_ADM',
'QS',
'SCOTT',
'QS_WS',
'QS_ES',
'QS_OS',
'QS_CBADM',
'QS_CB',
'QS_CS')
/
PROMPT
PROMPT 1.2 Default Database Accounts With Default Passwords
PROMPT The following accounts (if any listed) are using default passwords
PROMPT Either remove the accounts or change the passwords,
PROMPT lock and expire the accounts, and audit them:
PROMPT Running Oracle Default Password Scanner from patch 4926128
@dfltpass
btitle off;
ttitle off;
REM 1.2 To identify accounts with default passwords in 11g we run the following query
PROMPT 1.2 Identifying Default Database Accounts With Default Passwords in 11g
SELECT d.username, u.account_status from DBA_USERS_WITH_DEFPWD d, DBA_USERS u
WHERE d.username = u.username and u.username not in ('XS$NULL')
ORDER by 2,1;
PROMPT Execute the following statements to change default passwords on all default oracle accounts
PROMPT
select 'ALTER USER '||username||' identified by &New_Password;'
from DBA_USERS_WITH_DEFPWD
WHERE username not in ('XS$NULL');
PROMPT
PROMPT 1.3 The following accounts are assigned the default profile
PROMPT Please configure and assign user profile definitions to
PROMPT each database user that adheres to password policy guidelines.
PROMPT
col username format a30
select username, profile from dba_users
where profile = 'DEFAULT' and username not in ('XS$NULL');
PROMPT Execute the following statements to apply the DHS_H_APPL to all default oracle accounts
PROMPT
select 'ALTER USER '||username||' PROFILE DHS_H_APPL;'
from dba_users where profile = 'DEFAULT' and username not in ('XS$NULL');
PROMPT
PROMPT List of Profiles and resources assigned to profiles
col limit format a20;
break on profile skip 1;
select profile, resource_name, limit
from dba_profiles
group by profile, resource_name, limit
order by profile, resource_name, limit
/
clear breaks;
clear columns;
set heading off
set feedback off
-- select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
-- 'remote_login_passwordfile should be set to ''NONE'' . It is currently set to '||value||chr(13)||chr(10),
-- 'You need to set it by running the following command:'||chr(13)||chr(10)||chr(13)||chr(10),
-- 'ALTER SYSTEM SET remote_login_passwordfile=NONE SCOPE=SPFILE; '
-- from v$spparameter
-- where rtrim(lower(name))='remote_login_passwordfile' and (value is null or value not in ('None','NONE','none'));
select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
'audit_sys_operations should be set to ''True'' . It is currently set to '||value||chr(13)||chr(10),
'You need to set it by running the following command:'||chr(13)||chr(10)||chr(13)||chr(10),
'ALTER SYSTEM SET audit_sys_operations=TRUE SCOPE=SPFILE; '
from v$spparameter
where rtrim(lower(name))='audit_sys_operations' and (value is null or value not in ('True','TRUE','true'));
select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
'********************************************************************************',
''||chr(13)||chr(10)||''||chr(13)||chr(10),
'audit_trail should be set to ''db_extended'' . It is currently set to '||value||chr(13)||chr(10),
'You need to set it by running the following command:'||chr(13)||chr(10)||chr(13)||chr(10),
'ALTER SYSTEM SET audit_trail=''db_extended'' SCOPE=spfile; '
from v$spparameter
where rtrim(lower(name))='audit_trail' and (value is null or rtrim(lower(value)) not in ('db_extended'));
select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
'The AUD$ should be owned by SYS user.'||chr(13)||chr(10),
'It is currently owned by '||owner
from dba_tables where table_name='AUD$' and owner!='SYS';
select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
'o7_dictionary_accessibility should be set to ''FALSE'' . '||chr(13)||chr(10)||
'It is currently set to '||value||chr(13)||chr(10),
'You need to set it by running the following command:'||chr(13)||chr(10)||chr(13)||chr(10),
'ALTER SYSTEM SET o7_dictionary_accessibility=FALSE SCOPE=SPFILE; '
from v$spparameter
where rtrim(lower(name))='o7_dictionary_accessibility' and rtrim(lower(value))='true';
select ''||chr(13)||chr(10)||''||chr(13)||chr(10),
'utl_file_dir is currently set to '||value||chr(13)||chr(10),
'This parameter should not be set. Use CREATE DIRECTORY options'||chr(13)||chr(10)||
'to setup file I/O for PL/SQL.'
from v$spparameter
where rtrim(lower(name))='utl_file_dir' and value is not null;
set feedback off
select '_trace_files_public is not set to FALSE. '||chr(13)||chr(10)||
' Add the following undocumented parameter setting to init.ora:'||chr(13)||chr(10)||
'_trace_files_public=false' AS "init.ora setting"
from sys.x$ksppi x,sys.x$ksppcv y
where x.inst_id=userenv('Instance')
and y.inst_id=userenv('Instance')
and x.indx=y.indx
and x.ksppinm='_trace_files_public' and y.ksppstvl!='FALSE';
set heading on
set feedback on
PROMPT The following privileges are assigned to PUBLIC
PROMPT Revoke all unnecessary privileges and roles from PUBLIC.
PROMPT
PROMPT
PROMPT List of system privs assigned to PUBLIC
PROMPT These should be revoked.
col privilege format a15;
col grantee format a15;
select grantee, privilege
from dba_sys_privs
where grantee in ('PUBLIC')
/
col OWNER format a10
col TABLE_NAME format a20
col PRIVILEGE format a15
PROMPT
PROMPT Revoke unnecessary execute privileges on Oracle-supplied PL/SQL packages
PROMPT from the PUBLIC role. These Privileges should be controlled using roles.
PROMPT
select OWNER, TABLE_NAME, PRIVILEGE from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
PROMPT
PROMPT Please revoke these privileges from PUBLIC by executing the following statements:
set head off feed off
select distinct 'grant EXECUTE on SYS.UTL_FILE to MDSYS, OLAPSYS, WMSYS, ORACLE_OCM, ORDPLUGINS, XDB, ORDSYS;'||chr(13)||chr(10)||
'grant EXECUTE on SYS.DBMS_LDAP to WMSYS, APEX_030200;'||chr(13)||chr(10)||
'grant EXECUTE on SYS.UTL_HTTP to ORDPLUGINS, MDSYS;'||chr(13)||chr(10)||
'grant EXECUTE on SYS.DBMS_RANDOM to MDSYS, DBSNMP;'||chr(13)||chr(10)||
'grant EXECUTE on SYS.DBMS_JAVA to MDSYS;'
from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
select distinct 'revoke '||PRIVILEGE||' on '||trim(OWNER)||'.'||TABLE_NAME||' from PUBLIC;'
from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
select distinct '@?/rdbms/admin/utlrp.sql'
from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
set feedback on
prompt
prompt
set feedback on
column username format a40;
column profile format a15;
column account_status format a18;
PROMPT Open Accounts
select username, profile , account_status
from dba_users where account_status = 'OPEN'
/
PROMPT Accounts NOT open
select username, profile , account_status
from dba_users where account_status != 'OPEN'
/
spool off;
============3====================================================================================================
oracle@d2aseutsh018.ndc.local[openview]# vi rpt_scanner.sql
-- ----------------------------------------------------------------------------
-- --
-- PENTEST LIMITED --
-- --------------- --
-- --
-- File Name : %M%.%R% --
-- Author : Pentest Limited --
-- Date : November 2001 --
-- --
-- Description --
-- ----------- --
-- --
-- Simple Oracle scanner to review some basic aspects of an Oracle --
-- installation. --
-- --
-- Version History --
-- =============== --
-- --
-- Who Ver Date Description --
-- ------- ----- -------- ------------------------------ --
-- PF 1.0 Dec 2001 First Issue --
-- ----------------------------------------------------------------------------
-- set up SQL*PLUS
-- ----------------------------------------------------------------------------
whenever sqlerror exit rollback
set head on
set feed on
set linesize 80
set termout on
set serveroutput on size 1000000
-- ----------------------------------------------------------------------------
-- capture the output
-- ----------------------------------------------------------------------------
spool scanner.lis
-- ----------------------------------------------------------------------------
-- create an anonymous block to scan with
-- ----------------------------------------------------------------------------
declare
type user_tab is table of varchar2(30) index by binary_integer;
type pwd_tab is table of varchar2(30) index by binary_integer;
type hash_tab is table of varchar2(16) index by binary_integer;
username user_tab;
password pwd_tab;
hash hash_tab;
tab_key binary_integer:=1;
i binary_integer:=1;
--
cursor c_user is
select username,
password
from dba_users;
--
cursor c_utl_cur is
select rtrim(name) name,
rtrim(value) value
from v$parameter
where name='utl_file_dir';
--
cursor c_trace is
select rtrim(name) name,
decode(rtrim(value),NULL,'NULL',rtrim(value)) value
from v$parameter
where name like '%dest%';
--
cursor c_utl_trace is
select rtrim(a.name) name
from v$parameter a,
v$parameter b
where a.name='utl_file_dir'
and b.name like '%dest%'
and a.value=b.value;
--
cursor c_sys_priv (cp_priv in dba_sys_privs.privilege%type) is
select grantee,
privilege
from dba_sys_privs
where privilege like cp_priv;
--
cursor c_admin is
select grantee,
privilege priv
from dba_sys_privs
where admin_option='YES'
union
select grantee,
granted_role priv
from dba_role_privs
where admin_option='YES';
--
cursor c_grant is
select grantee,
privilege,
table_name
from dba_tab_privs
where grantable='YES'
union
select grantee,
privilege,
table_name
from dba_col_privs
where grantable='YES';
--
cursor c_ext is
select username
from dba_users
where password='EXTERNAL';
--
cursor c_dba is
select grantee
from dba_role_privs
where granted_role='DBA';
--
cursor c_links is
select name,
host,
userid,
password,
authusr,
authpwd
from sys.link$
where password is not null;
--
--
lv_sys_priv c_sys_priv%rowtype;
lv_utl_cur c_utl_cur%rowtype;
lv_trace c_trace%rowtype;
lv_utl_trace c_utl_trace%rowtype;
--
found number:=0;
--
begin
-- --------------------------------------------------------------------
-- manually load the user list into the tables
-- --------------------------------------------------------------------
tab_key:=1;
username(tab_key):='ADAMS';
password(tab_key):='WOOD';
hash(tab_key):='72CDEF4A3483F60D';
--
tab_key:=2;
username(tab_key):='ADLDEMO';
password(tab_key):='ADLDEMO';
hash(tab_key):='147215F51929A6E8';
--
tab_key:=3;
username(tab_key):='APPLSYS';
password(tab_key):='FND';
hash(tab_key):='0F886772980B8C79';
--
tab_key:=4;
username(tab_key):='APPLYSYSPUB';
password(tab_key):='PUB';
hash(tab_key):='A5E09E84EC486FC9';
--
tab_key:=5;
username(tab_key):='APPS';
password(tab_key):='APPS';
hash(tab_key):='D728438E8A5925E0';
--
tab_key:=6;
username(tab_key):='AQDEMO';
password(tab_key):='AQDEMO';
hash(tab_key):='5140E342712061DD';
--
tab_key:=7;
username(tab_key):='AQJAVA';
password(tab_key):='AQJAVA';
hash(tab_key):='8765D2543274B42E';
--
tab_key:=8;
username(tab_key):='AQUSER';
password(tab_key):='AQUSER';
hash(tab_key):='4CF13BDAC1D7511C';
--
tab_key:=9;
username(tab_key):='AUDIOUSER';
password(tab_key):='AUDIOUSER';
hash(tab_key):='CB4F2CEC5A352488';
--
tab_key:=10;
username(tab_key):='AURORA$ORB$UNAUTHENTICATED';
password(tab_key):='INVALID';
hash(tab_key):='80C099F0EADF877E';
--
tab_key:=11;
username(tab_key):='BLAKE';
password(tab_key):='PAPER';
hash(tab_key):='9435F2E60569158E';
--
tab_key:=12;
username(tab_key):='CATALOG';
password(tab_key):='CATALOG';
hash(tab_key):='397129246919E8DA';
--
tab_key:=13;
username(tab_key):='CDEMO82';
password(tab_key):='CDEMO83';
hash(tab_key):='7299A5E2A5A05820';
--
tab_key:=14;
username(tab_key):='CDEMOCOR';
password(tab_key):='CDEMOCOR';
hash(tab_key):='3A34F0B26B951F3F';
--
tab_key:=15;
username(tab_key):='CDEMOUCB';
password(tab_key):='CDEMOUCB';
hash(tab_key):='CEAE780F25D556F8';
--
tab_key:=16;
username(tab_key):='CDEMORID';
password(tab_key):='CDEMORID';
hash(tab_key):='E39CEFE64B73B308';
--
tab_key:=17;
username(tab_key):='CENTRA';
password(tab_key):='CENTRA';
hash(tab_key):='63BF5FFE5E3EA16D';
--
tab_key:=18;
username(tab_key):='CLARK';
password(tab_key):='CLOTH';
hash(tab_key):='7AAFE7D01511D73F';
--
tab_key:=19;
username(tab_key):='COMPANY';
password(tab_key):='COMPANY';
hash(tab_key):='402B659C15EAF6CB';
--
tab_key:=20;
username(tab_key):='CSMIG';
password(tab_key):='CSMIG';
hash(tab_key):='09B4BB013FBD0D65';
--
tab_key:=21;
username(tab_key):='CTXDEMO';
password(tab_key):='CTXDEMO';
hash(tab_key):='CB6B5E9D9672FE89';
--
tab_key:=22;
username(tab_key):='CTXSYS';
password(tab_key):='CTXSYS';
hash(tab_key):='24ABAB8B06281B4C';
--
tab_key:=23;
username(tab_key):='DBSNMP';
password(tab_key):='DBSNMP';
hash(tab_key):='E066D214D5421CCC';
--
tab_key:=24;
username(tab_key):='DEMO';
password(tab_key):='DEMO';
hash(tab_key):='4646116A123897CF';
--
tab_key:=25;
username(tab_key):='DEMO8';
password(tab_key):='DEMO9';
hash(tab_key):='0E7260738FDFD678';
--
tab_key:=26;
username(tab_key):='EMP';
password(tab_key):='EMP';
hash(tab_key):='B40C23C6E2B4EA3D';
--
tab_key:=27;
username(tab_key):='EVENT';
password(tab_key):='EVENT';
hash(tab_key):='7CA0A42DA768F96D';
--
tab_key:=28;
username(tab_key):='FINANCE';
password(tab_key):='FINANCE';
hash(tab_key):='6CBBF17292A1B9AA';
--
tab_key:=29;
username(tab_key):='FND';
password(tab_key):='FND';
hash(tab_key):='0C0832F8B6897321';
--
tab_key:=30;
username(tab_key):='GPFD';
password(tab_key):='GPFD';
hash(tab_key):='BA787E988F8BC424';
--
tab_key:=31;
username(tab_key):='GPLD';
password(tab_key):='GPLD';
hash(tab_key):='9D561E4D6585824B';
--
tab_key:=32;
username(tab_key):='HR';
password(tab_key):='HR';
hash(tab_key):='4C6D73C3E8B0F0DA';
--
tab_key:=33;
username(tab_key):='HLW';
password(tab_key):='HLW';
hash(tab_key):='855296220C095810';
--
tab_key:=34;
username(tab_key):='IMAGEUSER';
password(tab_key):='IMAGEUSER';
hash(tab_key):='E079BF5E433F0B89';
--
tab_key:=35;
username(tab_key):='IMEDIA';
password(tab_key):='IMEDIA';
hash(tab_key):='8FB1DC9A6F8CE827';
--
tab_key:=36;
username(tab_key):='JONES';
password(tab_key):='STEEL';
hash(tab_key):='B9E99443032F059D';
--
tab_key:=37;
username(tab_key):='JMUSER';
password(tab_key):='JMUSER';
hash(tab_key):='063BA85BF749DF8E';
--
tab_key:=38;
username(tab_key):='LBACSYS';
password(tab_key):='LBACSYS';
hash(tab_key):='AC9700FD3F1410EB';
--
tab_key:=39;
username(tab_key):='MDSYS';
password(tab_key):='MDSYS';
hash(tab_key):='9AAEB2214DCC9A31';
--
tab_key:=40;
username(tab_key):='MFG';
password(tab_key):='MFG';
hash(tab_key):='FC1B0DD35E790847';
--
tab_key:=41;
username(tab_key):='MIGRATE';
password(tab_key):='MIGRATE';
hash(tab_key):='5A88CE52084E9700';
--
tab_key:=42;
username(tab_key):='MILLER';
password(tab_key):='MILLER';
hash(tab_key):='D0EFCD03C95DF106';
--
tab_key:=43;
username(tab_key):='MMO2';
password(tab_key):='MMO3';
hash(tab_key):='AE128772645F6709';
--
tab_key:=44;
username(tab_key):='MODTEST';
password(tab_key):='YES';
hash(tab_key):='BBFF58334CDEF86D';
--
tab_key:=45;
username(tab_key):='MOREAU';
password(tab_key):='MOREAU';
hash(tab_key):='CF5A081E7585936B';
--
tab_key:=46;
username(tab_key):='NAMES';
password(tab_key):='NAMES';
hash(tab_key):='9B95D28A979CC5C4';
--
tab_key:=47;
username(tab_key):='MTSSYS';
password(tab_key):='MTSSYS';
hash(tab_key):='6465913FF5FF1831';
--
tab_key:=48;
username(tab_key):='MXAGENT';
password(tab_key):='MXAGENT';
hash(tab_key):='C5F0512A64EB0E7F';
--
tab_key:=49;
username(tab_key):='OCITEST';
password(tab_key):='OCITEST';
hash(tab_key):='C09011CB0205B347';
--
tab_key:=50;
username(tab_key):='ODS';
password(tab_key):='ODS';
hash(tab_key):='89804494ADFC71BC';
--
tab_key:=51;
username(tab_key):='ODSCOMMON';
password(tab_key):='ODSCOMMON';
hash(tab_key):='59BBED977430C1A8';
--
tab_key:=52;
username(tab_key):='OE';
password(tab_key):='OE';
hash(tab_key):='D1A2DFC623FDA40A';
--
tab_key:=53;
username(tab_key):='OEMADM';
password(tab_key):='OEMADM';
hash(tab_key):='9DCE98CCF541AAE6';
--
tab_key:=54;
username(tab_key):='OLAPDBA';
password(tab_key):='OLAPDBA';
hash(tab_key):='1AF71599EDACFB00';
--
tab_key:=55;
username(tab_key):='OLAPSVR';
password(tab_key):='INSTANCE';
hash(tab_key):='AF52CFD036E8F425';
--
tab_key:=56;
username(tab_key):='OLAPSYS';
password(tab_key):='MANAGER';
hash(tab_key):='3FB8EF9DB538647C';
--
tab_key:=57;
username(tab_key):='ORAREGSYS';
password(tab_key):='ORAREGSYS';
hash(tab_key):='28D778112C63CB15';
--
tab_key:=58;
username(tab_key):='ORDPLUGINS';
password(tab_key):='ORDPLUGINS';
hash(tab_key):='88A2B2C183431F00';
--
tab_key:=59;
username(tab_key):='ORDSYS';
password(tab_key):='ORDSYS';
hash(tab_key):='7EFA02EC7EA6B86F';
--
tab_key:=60;
username(tab_key):='OUTLN';
password(tab_key):='OUTLN';
hash(tab_key):='4A3BA55E08595C81';
--
tab_key:=61;
username(tab_key):='PERFSTAT';
password(tab_key):='PERFSTAT';
hash(tab_key):='AC98877DE1297365';
--
tab_key:=62;
username(tab_key):='PM';
password(tab_key):='PM';
hash(tab_key):='C7A235E6D2AF6018';
--
tab_key:=63;
username(tab_key):='PO';
password(tab_key):='PO';
hash(tab_key):='355CBEC355C10FEF';
--
tab_key:=64;
username(tab_key):='PO8';
password(tab_key):='PO8';
hash(tab_key):='7E15FBACA7CDEBEC';
--
tab_key:=65;
username(tab_key):='PO7';
password(tab_key):='PO7';
hash(tab_key):='6B870AF28F711204';
--
tab_key:=66;
username(tab_key):='PORTAL30';
password(tab_key):='PORTAL31';
hash(tab_key):='D373ABE86992BE68';
--
tab_key:=67;
username(tab_key):='PORTAL30_DEMO';
password(tab_key):='PORTAL30_DEMO';
hash(tab_key):='CFD1302A7F832068';
--
tab_key:=68;
username(tab_key):='PORTAL30_PUBLIC';
password(tab_key):='PORTAL30_PUBLIC';
hash(tab_key):='42068201613CA6E2';
--
tab_key:=69;
username(tab_key):='PORTAL30_SSO';
password(tab_key):='PORTAL30_SSO';
hash(tab_key):='882B80B587FCDBC8';
--
tab_key:=70;
username(tab_key):='PORTAL30_SSO_PS';
password(tab_key):='PORTAL30_SSO_PS';
hash(tab_key):='F2C3DC8003BC90F8';
--
tab_key:=71;
username(tab_key):='PORTAL30_SSO_PUBLIC';
password(tab_key):='PORTAL30_SSO_PUBLIC';
hash(tab_key):='98741BDA2AC7FFB2';
--
tab_key:=72;
username(tab_key):='POWERCARTUSER';
password(tab_key):='POWERCARTUSER';
hash(tab_key):='2C5ECE3BEC35CE69';
--
tab_key:=73;
username(tab_key):='PRIMARY';
password(tab_key):='PRIMARY';
hash(tab_key):='70C3248DFFB90152';
--
tab_key:=74;
username(tab_key):='PUBSUB';
password(tab_key):='PUBSUB';
hash(tab_key):='80294AE45A46E77B';
--
tab_key:=75;
username(tab_key):='QS';
password(tab_key):='QS';
hash(tab_key):='4603BCD2744BDE4F';
--
tab_key:=76;
username(tab_key):='QS_ADM';
password(tab_key):='QS_ADM';
hash(tab_key):='3990FB418162F2A0';
--
tab_key:=77;
username(tab_key):='QS_CB';
password(tab_key):='QS_CB';
hash(tab_key):='870C36D8E6CD7CF5';
--
tab_key:=78;
username(tab_key):='QS_CBADM';
password(tab_key):='QS_CBADM';
hash(tab_key):='20E788F9D4F1D92C';
--
tab_key:=79;
username(tab_key):='QS_CS';
password(tab_key):='QS_CS';
hash(tab_key):='2CA6D0FC25128CF3';
--
tab_key:=80;
username(tab_key):='QS_ES';
password(tab_key):='QS_ES';
hash(tab_key):='9A5F2D9F5D1A9EF4';
--
tab_key:=81;
username(tab_key):='QS_OS';
password(tab_key):='QS_OS';
hash(tab_key):='0EF5997DC2638A61';
--
tab_key:=82;
username(tab_key):='QS_WS';
password(tab_key):='QS_WS';
hash(tab_key):='0447F2F756B4F460';
--
tab_key:=83;
username(tab_key):='RE';
password(tab_key):='RE';
hash(tab_key):='933B9A9475E882A6';
--
tab_key:=84;
username(tab_key):='REPADMIN';
password(tab_key):='REPADMIN';
hash(tab_key):='915C93F34954F5F8';
--
tab_key:=85;
username(tab_key):='RMAIL';
password(tab_key):='RMAIL';
hash(tab_key):='DA4435BBF8CAE54C';
--
tab_key:=86;
username(tab_key):='RMAN';
password(tab_key):='RMAN';
hash(tab_key):='E7B5D92911C831E1';
--
tab_key:=87;
username(tab_key):='SAMPLE';
password(tab_key):='SAMPLE';
hash(tab_key):='E74B15A3F7A19CA8';
--
tab_key:=88;
username(tab_key):='SCOTT';
password(tab_key):='TIGER';
hash(tab_key):='F894844C34402B67';
--
tab_key:=89;
username(tab_key):='SDOS_ICSAP';
password(tab_key):='SDOS_ICSAP';
hash(tab_key):='C789210ACC24DA16';
--
tab_key:=90;
username(tab_key):='SECDEMO';
password(tab_key):='SECDEMO';
hash(tab_key):='009BBE8142502E10';
--
tab_key:=91;
username(tab_key):='SH';
password(tab_key):='SH';
hash(tab_key):='54B253CBBAAA8C48';
--
tab_key:=92;
username(tab_key):='SYS';
password(tab_key):='CHANGE_ON_INSTALL';
hash(tab_key):='D4C5016086B2DC6A';
--
tab_key:=93;
username(tab_key):='SYSADM';
password(tab_key):='SYSADM';
hash(tab_key):='BA3E855E93B5B9B0';
--
tab_key:=94;
username(tab_key):='SYSTEM';
password(tab_key):='MANAGER';
hash(tab_key):='D4DF7931AB130E37';
--
tab_key:=95;
username(tab_key):='TAHITI';
password(tab_key):='TAHITI';
hash(tab_key):='F339612C73D27861';
--
tab_key:=96;
username(tab_key):='TDOS_ICSAP';
password(tab_key):='TDOS_ICSAP';
hash(tab_key):='7C0900F751723768';
--
tab_key:=97;
username(tab_key):='TRACESVR';
password(tab_key):='TRACE';
hash(tab_key):='F9DA8977092B7B81';
--
tab_key:=98;
username(tab_key):='TSDEV';
password(tab_key):='TSDEV';
hash(tab_key):='29268859446F5A8C';
--
tab_key:=99;
username(tab_key):='TSUSER';
password(tab_key):='TSUSER';
hash(tab_key):='90C4F894E2972F08';
--
tab_key:=100;
username(tab_key):='USER0';
password(tab_key):='USER0';
hash(tab_key):='8A0760E2710AB0B4';
--
tab_key:=101;
username(tab_key):='USER1';
password(tab_key):='USER1';
hash(tab_key):='BBE7786A584F9103';
--
tab_key:=102;
username(tab_key):='USER2';
password(tab_key):='USER2';
hash(tab_key):='1718E5DBB8F89784';
--
tab_key:=103;
username(tab_key):='USER3';
password(tab_key):='USER3';
hash(tab_key):='94152F9F5B35B103';
--
tab_key:=104;
username(tab_key):='USER4';
password(tab_key):='USER4';
hash(tab_key):='2907B1BFA9DA5091';
--
tab_key:=105;
username(tab_key):='USER5';
password(tab_key):='USER5';
hash(tab_key):='6E97FCEA92BAA4CB';
--
tab_key:=106;
username(tab_key):='USER6';
password(tab_key):='USER6';
hash(tab_key):='F73E1A76B1E57F3D';
--
tab_key:=107;
username(tab_key):='USER7';
password(tab_key):='USER7';
hash(tab_key):='3E9C94488C1A3908';
--
tab_key:=108;
username(tab_key):='USER8';
password(tab_key):='USER8';
hash(tab_key):='D148049C2780B869';
--
tab_key:=109;
username(tab_key):='USER9';
password(tab_key):='USER9';
hash(tab_key):='0487AFEE55ECEE66';
--
tab_key:=110;
username(tab_key):='UTLBSTATU';
password(tab_key):='UTLESTAT';
hash(tab_key):='C42D1FA3231AB025';
--
tab_key:=111;
username(tab_key):='VIDEOUSER';
password(tab_key):='VIDEOUSER';
hash(tab_key):='29ECA1F239B0F7DF';
--
tab_key:=112;
username(tab_key):='VIF_DEVELOPER';
password(tab_key):='VIF_DEV_PWD';
hash(tab_key):='9A7DCB0C1D84C488';
--
tab_key:=113;
username(tab_key):='VIRUSER';
password(tab_key):='VIRUSER';
hash(tab_key):='404B03707BF5CEA3';
--
tab_key:=114;
username(tab_key):='VRR1';
password(tab_key):='VRR2';
hash(tab_key):='811C49394C921D66';
--
tab_key:=115;
username(tab_key):='WEBDB';
password(tab_key):='WEBDB';
hash(tab_key):='D4C4DCDD41B05A5D';
--
tab_key:=116;
username(tab_key):='WKSYS';
password(tab_key):='WKSYS';
hash(tab_key):='545E13456B7DDEA0';
--
-- --------------------------------------------------------------------
-- check all users in the database and see if defaults are set still
-- --------------------------------------------------------------------
dbms_output.put_line('Check default user passwords');
dbms_output.put_line('============================');
for lv_user in c_user loop
for i in 1..tab_key loop
if lv_user.username=username(i) then
if lv_user.password=hash(i) then
dbms_output.put_line('Default : '
||username(i)||' passwd is :'
||password(i));
exit;
end if;
end if;
end loop;
end loop;
-- --------------------------------------------------------------------
-- check for some of the dangerous privileges
--
-- ALTER SYSTEM
-- --------------------------------------------------------------------
found:=0;
dbms_output.put_line('.');
dbms_output.put_line('Display Users that have the "ALTER SYSTEM" privilege');
dbms_output.put_line('====================================================');
for lv_sys_priv in c_sys_priv('ALTER SYSTEM') loop
dbms_output.put_line(lv_sys_priv.privilege||' :'||lv_sys_priv.grantee);
end loop;
-- --------------------------------------------------------------------
-- check for CREATE LIBRARY
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Display Users that have the "CREATE LIBRARY" privilege');
dbms_output.put_line('======================================================');
for lv_sys_priv in c_sys_priv('CREATE%LIBRARY') loop
dbms_output.put_line(lv_sys_priv.privilege||' :'||lv_sys_priv.grantee);
end loop;
-- --------------------------------------------------------------------
-- check the location of utl_file_dir and ensure it is not the same as
-- the trace directories
-- --------------------------------------------------------------------
found:=0;
dbms_output.put_line('.');
dbms_output.put_line('Display utl_file_dir');
dbms_output.put_line('===================');
open c_utl_cur;
loop
fetch c_utl_cur into lv_utl_cur;
if c_utl_cur%notfound then
if found=0 then
dbms_output.put_line('utl_file_dir is not set');
end if;
exit;
else
found:=1;
dbms_output.put_line('utl_file_dir is '||rtrim(lv_utl_cur.value));
end if;
end loop;
close c_utl_cur;
dbms_output.put_line('.');
dbms_output.put_line('Display destinations');
dbms_output.put_line('===================');
found:=0;
open c_trace;
loop
fetch c_trace into lv_trace;
if c_trace%notfound then
if found=0 then
dbms_output.put_line('no trace directories set');
end if;
exit;
else
found:=1;
dbms_output.put_line(rtrim(lv_trace.name)
||' is '||rtrim(lv_trace.value));
end if;
end loop;
close c_trace;
-- --------------------------------------------------------------------
-- check if the utl_file_dir clashes with any of the dest directories
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Display any clash between utl_file_dir and destination directories');
dbms_output.put_line('=================================================================');
found:=0;
open c_utl_trace;
loop
fetch c_utl_trace into lv_utl_trace;
if c_utl_trace%notfound then
if found=0 then
dbms_output.put_line('No apparent match between utl_file_dir and dest directories');
end if;
exit;
else
dbms_output.put_line(lv_utl_trace.name||' matches utl_file_dir');
end if;
end loop;
close c_utl_trace;
-- --------------------------------------------------------------------
-- check for users with the DBA privilege
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Check for users with "DBA" privilege');
dbms_output.put_line('====================================');
for lv_dba in c_dba loop
dbms_output.put_line(lv_dba.grantee);
end loop;
-- --------------------------------------------------------------------
-- check out which users have ANY
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Check for users with "ANY" privilege');
dbms_output.put_line('====================================');
for lv_sys_priv in c_sys_priv('%ANY%') loop
dbms_output.put_line(lv_sys_priv.privilege||' :'||lv_sys_priv.grantee);
end loop;
-- --------------------------------------------------------------------
-- check out users or roles that have "with admin"
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Check for users or roles that have "with admin"');
dbms_output.put_line('==============================================');
for lv_admin in c_admin loop
dbms_output.put_line(lv_admin.priv||' :'||lv_admin.grantee);
end loop;
-- --------------------------------------------------------------------
-- check out which privileges have been granted with grant option
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Check for users and roles that have "grantable"');
dbms_output.put_line('===============================================');
for lv_grant in c_grant loop
dbms_output.put_line(lv_grant.privilege||' :'
||lv_grant.table_name||' :'||lv_grant.grantee);
end loop;
-- --------------------------------------------------------------------
-- check out external users
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Display External Users');
dbms_output.put_line('======================');
for lv_ext in c_ext loop
dbms_output.put_line(lv_ext.username);
end loop;
-- --------------------------------------------------------------------
-- check out database links where there is a password set.
-- --------------------------------------------------------------------
dbms_output.put_line('.');
dbms_output.put_line('Display Database links where there is a password set');
dbms_output.put_line('====================================================');
for lv_links in c_links loop
dbms_output.put_line(lv_links.name||' :'||lv_links.host||' :'
||lv_links.userid||' :'||lv_links.password
||' :'||lv_links.authusr||' :'||lv_links.authpwd);
end loop;
end;
/
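-- A sketch only, assuming Oracle 11g or later (not part of the original scanner):
-- from 11g onward DBA_USERS.PASSWORD no longer exposes the password hash, so the
-- comparison above may find nothing on newer releases. The built-in view
-- DBA_USERS_WITH_DEFPWD gives a quicker default-password check there:
select username from sys.dba_users_with_defpwd order by username;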
=======4=======================================================================================================
oracle@d2aseutsh018.ndc.local[openview]# vi rpt_user_audit.sql
rem Set up environment
set termout off
set pause off
set pages 5400 lines 80
set feedback off
set time off
rem ***************************************************************************
rem send output to a file
col dbname new_value n_dbname noprint
col rptnme new_value n_rptnme noprint
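rem new_value copies each query column below into a substitution variable
rem (&n_dbname, &n_rptnme) so the spool command can build a file name that
rem includes the database name and report date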
select name||'_'||to_char(sysdate,'YYMMDD') rptnme, name dbname
from v$database;
spool useraudit_&n_rptnme..txt
rem ***************************************************************************
rem Print overall heading for report
set heading off
prompt ########################################################################
prompt # Oracle Database Security Report #
prompt ########################################################################
prompt
prompt Instance Name:
select value from v$parameter where name='db_name'
/
prompt
prompt
prompt Date Of This Report:
Column today format a30
select to_char(sysdate,'dd Month YYYY HH24:MI') today from sys.dual;
set heading on
rem ***************************************************************************
rem System Privileges
prompt ########################################################################
prompt
prompt System Privileges
prompt
Column grantee format a25
Column privilege format a30
select * from sys.dba_sys_privs
order by grantee,privilege
/
prompt
prompt
rem ***************************************************************************
rem List of Users
column username format a15
column default_tablespace format a30
column temporary_tablespace format a30
prompt ########################################################################
prompt
prompt List of Users
prompt
select username,default_tablespace,temporary_tablespace
from sys.dba_users
order by username
/
prompt ########################################################################
prompt
prompt Recent database logon sessions
prompt
column osuser format a15
select distinct username, osuser, logon_time
from v$session
where username is not null
/
prompt
prompt
rem ***************************************************************************
rem Roles
prompt ########################################################################
prompt
prompt Roles
prompt
select * from sys.dba_roles
order by role
/
prompt
prompt
rem ***************************************************************************
rem Role Privileges
prompt ########################################################################
prompt
prompt Role Privileges
prompt
select * from sys.dba_role_privs
order by grantee,granted_role
/
prompt
prompt
prompt
prompt
rem Close out SQL*Plus script
spool off
rem exit
rem ***************************************************************************
rem ***************************************************************************
=====5====================================================================================================================
oracle@d2aseutsh018.ndc.local[openview]# vi rpt_user_privs.sql
rem Set up environment
set termout off
set pause off
set pages 5400 lines 80
set feedback off
set time off
rem ***************************************************************************
rem send output to a file
col dbname new_value n_dbname noprint
col rptnme new_value n_rptnme noprint
select name||'_'||to_char(sysdate,'YYMMDD') rptnme, name dbname
from v$database;
spool userprivs_&n_rptnme..txt
rem ***************************************************************************
rem Print overall heading for report
set heading off
prompt ########################################################################
prompt # Oracle Database Security Report #
prompt ########################################################################
prompt
prompt Instance Name:
select value from v$parameter where name='db_name'
/
prompt
prompt
prompt Date Of This Report:
Column today format a30
select to_char(sysdate,'dd Month YYYY HH24:MI') today from sys.dual;
set heading on
rem ***************************************************************************
rem System Privileges
prompt ########################################################################
prompt
prompt System Privileges
prompt
Column grantee format a25
Column privilege format a30
select * from sys.dba_sys_privs
order by grantee,privilege
/
prompt
prompt
rem ***************************************************************************
rem Users
column username format a15
column default_tablespace format a30
column temporary_tablespace format a30
prompt ########################################################################
prompt
prompt Users
prompt
select username,default_tablespace,temporary_tablespace
from sys.dba_users
order by username
/
prompt
prompt
rem ***************************************************************************
rem Roles
prompt ########################################################################
prompt
prompt Roles
prompt
select * from sys.dba_roles
order by role
/
prompt
prompt
rem ***************************************************************************
rem Role Privileges
prompt ########################################################################
prompt
prompt Role Privileges
prompt
select * from sys.dba_role_privs
order by grantee,granted_role
/
prompt
prompt
rem ***************************************************************************
rem Table Privileges
set lines 85
column grantee format a25
column table_name format a30
column owner format a12
column privilege format a15
prompt ########################################################################
prompt
prompt Table Privileges
prompt
select grantee,table_name,owner,privilege
from sys.dba_tab_privs
where grantee not in ('SYS','SYSTEM','EXP_FULL_DATABASE')
order by grantee,table_name
/
prompt
prompt
rem ***************************************************************************
rem Close out SQL*Plus script
spool off
rem exit
rem ***************************************************************************
rem ***************************************************************************
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Gonzalez,
Christopher D.
Sent: Friday, April 29, 2016 10:04 AM
To: Chando, Kenneth <kenneth.chando@hpe.com>
Cc: Kegley, Hank <hank.kegley@hpe.com>
Subject: RE: Final Clearance Notification
You have TOP SECRET eligibility and TOP SECRET access. The previous email listed SECRET.
From: Gonzalez, Christopher D.
Sent: Friday, April 29, 2016 9:54 AM
To: Chando, Kenneth <kenneth.chando@hpe.com>
Cc: Kegley, Hank <hank.kegley@hpe.com>
Subject: Final Clearance Notification
The HP Enterprise Services Industrial Security Office received notification from the Defense Security Service that you have been granted SECRET eligibility. Per the contract you are currently working on your access level is SECRET. All documentation in your record is up-to-date and no action is required at this time.
Please note that your SSBI investigation closed on 2016-04-22, and your eligibility was granted on 2016-04-28 by DOD CAF. It is important that you keep this information for future clearance applications.
T3 level investigations should be submitted for a periodic re-investigation (PR) at the 9-3/4 year mark. SSBI level investigations should be submitted for a PR at the 4-3/4 year mark. Use the date the investigation closed to determine the time period for beginning the re-investigation process. It is your responsibility to notify the industrial security office of any changes in your status. This includes contract changes or any job-related change, marital status, address/phone number, etc., so that we may report the most up-to-date information to the government.
Please contact the Industrial Security Office at industrialsecurity@hp.com if you have any questions.
Attached is a copy of an SF 312. Please give me a call at your earliest convenience so we can perform a Verbal Attestation. It should only take roughly 2 minutes to complete a Verbal Attestation.
*** Reminder if you are leaving the company, going on LOA (for more than 30 days), or no longer supporting a classified contract contact the industrialsecurity@hp.com mailbox for the required SECURITY DEBRIEFING DOCUMENTS. ***
V/R,
Christopher Gonzalez, AFSO
Industrial Security Office
HP Enterprise Services
13600 EDS Drive
A3S-C53
Herndon, VA 20171
Phone: 703-713-7212
Fax: 703-742-1757
__________________________________________________________________________
Visit the HPES Industrial Security Office (on the HP network) for help with your security needs.
New Email for HPES Industrial Security: industrialsecurity@hpe.com
The information transmitted in this message is intended only for the person(s)
or entity to which it is addressed and may contain sensitive and/or privileged
material. Any review, retransmission, dissemination or other use of, or taking
of any action in reliance upon, this information by persons or entities other
than the intended recipient(s) is prohibited. If you received this in error,
please contact the sender and destroy any copies of this document.
Consulting is an employee-heavy kind of business and the division nearly doubled HP's headcount, from 172,000 to more than 311,000 in 2008.
From the get go, the unit had layoffs but as Enterprise Services revenues tanked, by 2012, HP began cutting deeper.
The company got rid of more than 55,000 people, and had plans to cut another 25,000 - 30,000 people.
On top of that, Whitman also said she was offshoring up to 60% of the unit's remaining jobs in order to lower costs and bring profit margins up to 9%.
HP Enterprise did other things to reduce headcount. Last year, it told thousands that they were to go for a contract labor company, doing their same jobs, typically for less pay and benefits. If they refused and quit they might not be entitled to severance. (Some HP employees staged a revolt and, surprisingly, won.)
All told, HP had been spending about $1 billion a year for the last seven years cutting jobs.
Employees shifting to the new company probably won't be spared layoffs.
Whitman warned that there will be "cost synergies" of $1.5 billion to be had after the transaction closes in 2017, and that phrase almost always means cutting jobs.
But the savings won't just come from layoffs. For instance, between the two companies they have 95 data centers. "Okay, we definitely do not need 95 data centers," Whitman says.
Good Day,
Following our conversation just earlier, I am writing to formally declare my resignation from HPE, effective today, 2-Jun-2016.
I came to HPE with the hope of making a great difference by leveraging knowledge, creativity, and innovation.
Unfortunately, the constraints of this position put too severe a limit on the qualities that give me intrinsic job satisfaction and a sense of accomplishment.
I regret that I am letting down Ken Chando, Hassan Abdel, Bruce Franklin, and Abdalla Omer, in that I was never able to truly contribute.
I wish you all the very best success in your endeavors, both within HPE and in your personal lives.
I hope that you will be able to bring in someone with clearance to begin more immediately assisting the DBA Team with their work load.
Best,
Johnny R. Grimes
From: Grimes, Johnny Ralph
Sent: Tuesday, 17 May 2016 11:36 AM
To: Kegley, Hank <hank.kegley@hpe.com>
Cc: Hackney, Terry <terry.hackney@hpe.com>
Subject: HPE follow-up
Good Day,
I hope this message finds you in improving health. I’ve also included Terry herein while you are in recovery as I am not sure who will need to follow-up.
Please find attached a letter regarding concerns/questions I have about my current role and responsibilities. Of note, I ran this letter by Omer Abdalla on 4-May to gather his thoughts and incorporated slight revisions following his feedback.
I have most enjoyed our conversations and meeting you. I wish you the very best, if you are ever in the need of work or looking for another opportunity, please contact me.
Wherever I may be, I will try to bring you in. My company contact: grimesj@broadsidesoftware.com
Best to you and your family,
Johnny
From: Grimes, Johnny Ralph
Sent: Thursday, 2 June 2016 3:46 PM
To: Hackney, Terry <terry.hackney@hpe.com>
Cc: Kegley, Hank <hank.kegley@hpe.com>;
Abdel Hassan (abdel.e.hassan@hpe.com)
<abdel.e.hassan@hpe.com>;
'Chando, Kenneth' <kenneth.chando@hpe.com>;
Omer Abdalla (omer.abdalla@hpe.com)
<omer.abdalla@hpe.com>;
'Franklin, Bruce' <bruce.franklin@hpe.com>;
Omer Abdalla (omer.abdalla@hpe.com)
<omer.abdalla@hpe.com>; Green,
Jennifer (US Public Sector) <jenniferg@hpe.com>;
Brown, Heather Bryant (ISO) <heather.b.brown@hpe.com>
Subject: RE: HPE follow-up
Good Day,
Following our conversation just earlier, I am writing to formally declare my resignation from HPE, effective today, 2-Jun-2016.
I came to HPE with the hope of making a great difference by leveraging knowledge, creativity, and innovation.
Unfortunately, the constraints of this position put too severe a limit on the qualities that give me intrinsic job satisfaction and a sense of accomplishment.
I regret that I am letting down Ken Chando, Hassan Abdel, Bruce Franklin, and Abdalla Omer, in that I was never able to truly contribute.
I wish you all the very best success in your endeavors, both within HPE and in your personal lives.
I hope that you will be able to bring in someone with clearance to begin more immediately assisting the DBA Team with their work load.
Best,
Johnny R. Grimes
From: Grimes, Johnny Ralph
Sent: Tuesday, 17 May 2016 11:36 AM
To: Kegley, Hank <hank.kegley@hpe.com>
Cc: Hackney, Terry <terry.hackney@hpe.com>
Subject: HPE follow-up
Good Day,
I hope this message finds you in improving health. I’ve also included Terry herein while you are in recovery as I am not sure who will need to follow-up.
Please find attached a letter regarding concerns/questions I have about my current role and responsibilities. Of note, I ran this letter by Omer Abdalla on 4-May to gather his thoughts and incorporated slight revisions following his feedback.
MSSQL SERVER
Questions from training
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: VC-HRGS-AMS
Sent: Tuesday, April 12, 2016 11:52 AM
To: Haynes, Annmarie (TDLS) <annmarie.haynes@hpe.com>
Cc: Chando, Kenneth <kenneth.chando@hpe.com>
Subject: FW: Introduction to Microsoft Windows Containers connecting
issues via MYROOM and Saba Cloud
Hi Annmarie,
Please assist the below learner.
Regards
Saumia
Hewlett Packard Enterprise
Event Management Team
From: Chando, Kenneth
Sent: Tuesday, April 12, 2016 9:04 PM
To: VC-HRGS-AMS <vc-hrgs-ams@hpe.com>
Subject: Introduction to Microsoft Windows Containers connecting issues
via MYROOM and Saba Cloud
Hi VC team,
I’m trying to connect to this ongoing training “Introduction to Microsoft Windows Containers”
I’m currently logged in to both MyRoom and Saba. However, even following the guide that was sent, I’m having issues connecting to the training via MyRoom or Saba Cloud.
Please assist/direct me on how to connect.
This event will be held in MyRoom. Prior to the beginning of the event, ensure you install or update to the latest version of MyRoom 10.4.0.0174: https://www.myroom.hpe.com/. If you experience any difficulty with HP MyRoom, please call Technical Support at 1-888-351-4732 or 1-919-595-4243, http://www.myroom.hpe.com/Support.
If you need to cancel your enrollment, please see the Cancel Enrollment section below.
Declining the calendar appointment does not cancel your enrollment in Accelerating U.
Offering ID: 02008666
Course Title: Introduction to Microsoft Windows Containers
Delivery Time Zones: Pacific Time Zone
Sessions Details (Date, Start & End Time):
Time Zone Conversion: Use the following URL to make the conversion from (USA PST / Europe CET / APJ Singapore Time) to your time zone.
Expectations for successful class completion:
· Be on time and attend all class lectures
· Participate in face-to-face and/or phone-based virtual classes
· If you know that you are unable to fulfill class expectations, please consider cancelling ahead of time to allow attendance by other learners
Participant MyRoom Keys and Dial-In Numbers:
To install MyRoom, please follow the steps from the “Set up information” provided after this column. If you experience any difficulty installing MyRoom, please call Technical Support at 1-888-351-4732 or 1-919-595-4243, www.myroom.hpe.com/Support.
To attend your MyRoom event, use the following key:
Directions to use the MyRoom Key:
Either:
· Launch HP MyRoom client from your computer
o Sign in with your e-mail address and password
o Enter the HP MyRoom participant key provided above into the Key field at the bottom of your tray
o Press Enter on your keyboard
Or:
· Launch HP MyRoom client from your computer
o Click the Enter with key link
o Enter your first and last name in the User Name field
o Enter the participant key provided above into the Key field
o Click the Enter Room button
Note: It is recommended to use the MyRoom audio functionality. A headset is a must to attend the trainings in MyRoom.
Attempting to connect and experiencing issues:
Your collaboration is critical to the success of your event. For more information please see your confirmation email from Accelerating U or your Outlook Calendar Appointment.
Setup Information:
Delivery Language: English
VC Logistical Support (Same day of the offering):
For phone line problems or logistical issues such as invalid VC keys, insufficient number of seats booked, or missing scheduling information on the same day of the offering, please use the contact information below. All countries other than the US, Canada and Puerto Rico should call the U.S. using AT&T access codes. For problems on the day of the offering, contact our support lines provided below:
Hours of Operation: 23 hours a day, 5 days a week - Global support (No support during 5:30 AM – 6:30 AM IST). Use the following URL to make the conversion from (USA PST / Europe CET / APJ Singapore Time) to your time zone: http://www.timezoneconverter.com/cgi-bin/tzc.tzc
Cancel Enrollment:
Region Generic Mailbox: For queries on this training, please send email to VC-HRGS-AMS@hpe.com
Remarks: Please log in using your full name
Hello All,
Thank you all for your patience during these 4 days. I hope there was some beneficial information for each one in this workshop.
A few important articles for the topics discussed today are as follows:
Distributed Replay
· Installing and Configuring SQL Server 2012 Distributed Replay
· Performing a Distributed Replay with Multiple Clients using SQL Server 2012 Distributed Replay *****
· Introducing the SQL Server 2012 Distributed Replay Utility (en-US)
· Replay a Trace File (SQL Server Profiler)
· Configure Distributed Replay
· Ebook on Distributed Replay available for download from Microsoft website
PowerShell
· Hey, Scripting Guy! How Can I Use Profiles with Windows PowerShell?
· Convert URNs to SQL Server Provider Paths
Contained Databases
· Security Best Practices with Contained Databases
· Migrate to a Partially Contained Database
· SQL Server 2012: Sometimes Partial Is Preferable
The following is the link to the virtual labs for the SQL 2014 AlwaysOn Failover Cluster Instance as well as Availability Groups. All you would need is an MSDN subscription:
https://vlabs.holsystems.com/vlabs/technet?eng=VLabs&auth=none&src=vlabs&altadd=true&labid=12694
http://www.microsoft.com/en-us/server-cloud/support/learning-center/virtual-labs.aspx
Please shoot me an email if you have any questions or concerns related to the new features of SQL 2012. I will try to respond as soon as possible.
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Thursday, March 10, 2016 7:15 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March
8th-11th
Hello All,
Following are the important articles on topics that we discussed today:
AlwaysOn
· Prerequisites, Restrictions, and Recommendations for AlwaysOn Availability Groups (SQL Server) *****Important
· Interoperability and Coexistence with Other Database Engine Features
· Restrictions and limitations for using other features with AlwaysOn Availability Groups
· Active Secondaries: Backup on Secondary Replicas (AlwaysOn Availability Groups)
· Active Secondaries: Readable Secondary Replicas (AlwaysOn Availability Groups) *****Important
· Configure Read-Only Routing for an Availability Group (SQL Server)
· Configure Backup on Availability Replicas (SQL Server)
· Transaction_log_Backup_details*****Important
SQL Server Failover Clusters
· Failover Cluster Cmdlets in Windows PowerShell Listed by Task Focus
· Understanding MS DTC Resources in Windows Server 2008 Failover Clusters
· What does Cluster-Aware mean?
· SQL Server Multi-Subnet Clustering (SQL Server)
· View Cluster Quorum NodeWeight Settings
· Whitepaper on SQL Server 2012 AlwaysOn: Multisite Failover Cluster Instance *****
· View and Read Failover Cluster Instance Diagnostics Log
In addition to the above articles, here's a query to check the role of the Availability Replica on the current instance for a particular database. This can be used when scheduling jobs on a SQL Server environment so that they only run if the instance is, in fact, the replica owner.
select role, role_desc
from sys.dm_hadr_availability_replica_states ars
join sys.dm_hadr_database_replica_states drs
on ars.group_id = drs.group_id
where ars.is_local = 1
and drs.is_local = 1
and drs.database_id = db_id('AdventureWorks')
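A minimal sketch, not part of the workshop material (the database name and the PRINT placeholders are examples only), of how the same check could gate a job step so it does its work only on the primary replica:
IF EXISTS (
    SELECT 1
    FROM sys.dm_hadr_availability_replica_states ars
    JOIN sys.dm_hadr_database_replica_states drs
      ON ars.group_id = drs.group_id
    WHERE ars.is_local = 1
      AND drs.is_local = 1
      AND drs.database_id = DB_ID('AdventureWorks')  -- example database
      AND ars.role_desc = 'PRIMARY'
)
    PRINT 'Primary replica here - replace this PRINT with the real job-step work';
ELSE
    PRINT 'Not the primary replica - skipping this run';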
Tomorrow we will start at the same time 10 am Central time and the topics for tomorrow are:
011_SQL_Server_2012_Features_for_Admins_Module_3_Lesson 11_PowerShell_and_WMI_Scripting
012_SQL_Server_2012_Features_for_Admins_Module_3_Lesson_12_Utilizing_Distributed_Replay
015_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 15_Contained_Databases
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Wednesday, March 9, 2016 7:52 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March
8th-11th
Hello All,
Following are the important articles on topics that we discussed today (these links are not there in the PowerPoint slides/notes):
· SQL Server Policy-Based Management Team Blog
· Part 1: Anatomy of SQL Server 2008 Resource Governor CPU Demo
· Part 2: Resource Governor CPU Demo on multiple CPUs
· Get Started with Microsoft SQL Server Data Tools
· FAQ: Microsoft SQL Server Data Tools
· Learn More About Microsoft SQL Server Data Tools
and here is the 31-day blog series on Extended Events from Jonathan Kehayias in which he talks about some interesting scenarios to use extended events (some of them listed below) – An XEvent A Day: 31 days of Extended Events. If you read one blog post from this series every day, you will become a master in Extended Events in just 31 days. Some interesting posts in this series:
Tomorrow we will start at the same time 10 am Central time and the topics for tomorrow are:
013_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 13_Failover_Clustering
014_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 14_AlwaysOn_Availability_Groups
015_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 15_Contained_Databases
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Tuesday, March 8, 2016 7:13 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March
8th-11th
Hello All,
Tomorrow we will start at 10 am Central time.
Following are the links that might be beneficial for the topics discussed during the session today –
· Breakthrough performance with in-memory technologies
· How Fast is Project Codenamed “Hekaton” – It’s ‘Wicked Fast’!
· SQL Server Columnstore Index FAQ
· SQL Server Columnstore Performance Tuning
· How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
· Growing and Shrinking the Buffer Pool Under NUMA
· SQL Server 2012 Setup just got smarter…
· http://mssqlwiki.com/2013/04/22/max-server-memory-do-i-need-to-configure/******
SQL Server 2012 Licensing
· SQL Server 2012 Licensing Guide
· Processor To Core Renewal Guide
· Virtualization Licensing Guide
· SQL Server 2012 Core Factor Table
· Video: Licensing SQL Server 2012
· Features Supported by the Editions of SQL Server 2012
· SQL Server 2012 Licensing Value vs. Oracle Database ******
And some more that I usually share with my customers:
· Upgrade to a Different Edition of SQL Server 2012 (Setup)
· SQL Server 2012 Enterprise Editions Explained
· Using Upgrade Advisor to Prepare for Upgrades
· How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
· Lock Pages in Memory ... do you really need it?
· Do I have to assign the Lock Pages in Memory privilege for Local System?
· Find Non-Buffer Pool Memory (MemToLeave) in "Private Bytes"
· New SQLOS features in SQL Server 2012
· SQL 2012: Indirect Checkpoint Explained !!!
· Virtual Accounts and Managed Service Accounts in SQL Server 2012
· Managed Service Accounts Frequently Asked Questions (FAQ)
· Managed Service Accounts Step-by-Step Guide
· Transaction Log VLFs – too many or too few?
· Understanding Recovery Performance in SQL Server
· PerfMon Objects, Counters, Thresholds, & Utilities for SQL Server
· Disk Partition Alignment Best Practices for SQL Server
· Microsoft® SQL Server 2012 Best Practices Analyzer
· ALTER SERVER CONFIGURATION (Transact-SQL)
The plan for tomorrow is to cover the following lessons:
Lesson 05_Installation_Techniques_Using_the_Command_Prompt
Lesson 06_Upgrade_and_Migration_Overview
Lesson_09_SQL_Server_Management_Studio_and_Developer_Tools
& Maybe Lesson 10_Extended_Events_Enhancements
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Monday, March 7, 2016 5:36 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March
8th-11th
Hello All,
Just a reminder, we start tomorrow at 10 am Central time. Below is the information to connect to the workshop.
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Monday, February 29, 2016 7:40 PM
Subject: WorkshopPlus - SQL 2012 Features for Administrator - March
8th-11th
Hello Premier Customers,
You are registered for a Microsoft Remote Delivery starting Tuesday morning and this e-mail will explain the details. Please read this entire email as there are some action items for you to complete before Tuesday’s start to ensure a smooth delivery and positive educational experience.
This workshop is being delivered “remotely,” which means the instructors will be located at a different location than you; the content of the workshop will be hosted in the cloud and downloaded the first day of the workshop.
Americas Education Services Presents
Premier Remote Delivery – WorkshopPLUS - SQL Server 2012: Features for Administrators
March 8-11, 2016
10:00 AM Central Time
Presenter: Ankita Matai
How the Offering Works: The Premier Remote Delivery workshops are delivered via a web platform. The instructor provides a combination of PowerPoint, live demos, and Q&A to deliver a rich and effective learning experience.
Cloud Hosted Lab Environment
The lab environment for the workshop will be hosted on the internet at the following location:
· Lab Access: https://www.premier-education-services.com/EntryPoint
· Lab Code*: MDaXqpqW
*Please Test Connectivity as soon as possible, using the Connectivity Code highlighted above. To test your connection, please log in to the Premier Education Services website (www.Premier-Education-Services.com) and click the Test Your Connection link in the Hosted Labs area of your Premier Entry Point (you’ll need a Microsoft account, formerly known as Windows Live). You can use any Microsoft account. Sign up for one at: http://signup.live.com
Attendee Access: https://educationservices.eventbuilder.com/SQLSer2012FAMarch8
· This link contains the meeting access points for each day of the delivery.
· Attendees require the following in order to view the stream, which is covered in detail in the EventBuilder Streaming Preparation document attached. Attendees may also wish to provide their IT department with the EventBuilder Minimum Requirements document attached.
1. Supported browsers: Internet Explorer (10 or higher), Chrome, Firefox or Safari
2. Flash Player, which can be installed here
3. Working computer speakers
4. Broadband internet connection of 1 mbps per attendee
5. Access to video streaming via their network. Ports 80, 443 and 1935 must be open. This can be evaluated via the test page.
6. Webcast support: support@eventbuilder.com
1. Once you click the ‘Attendee Access’ URL you will be directed to a landing page that lists all of the days of your workshop. Please find the correct day and click the “Join” button.
2. Click on the “Join” button and you will be asked to fill in a few fields with basic information (first name, last name, email, company).
If you need assistance, we are available to help via email. We will also be available 30 minutes before class starts on Tuesday to help troubleshoot any connectivity issues.
Understand the Logistics – It can be challenging to keep the class on schedule in a remote setting. We will be adhering to start/end times pretty strictly for that reason. We plan to break for about 1 hour for lunch, which typically occurs at noon central time. We will also be including other 15-minute breaks as we do with our in-person workshops. It may be a good idea to bring lunch for the next three days. Please plan accordingly as there will not be a recording of this available after the workshop.
If you have difficulty testing your connectivity to the hosted lab, please contact me directly.
You will receive an email survey after the workshop – please take the time to fill that out so that we may know if we’ve satisfied your expectations. Enjoy the workshop!
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
Hello All,
Following are the important articles on topics that we discussed today:
AlwaysOn
· Prerequisites, Restrictions, and Recommendations for AlwaysOn Availability Groups (SQL Server) *****Important
· Interoperability and Coexistence with Other Database Engine Features
· Restrictions and limitations for using other features with AlwaysOn Availability Groups
· Active Secondaries: Backup on Secondary Replicas (AlwaysOn Availability Groups)
· Active Secondaries: Readable Secondary Replicas (AlwaysOn Availability Groups) *****Important
· Configure Read-Only Routing for an Availability Group (SQL Server)
· Configure Backup on Availability Replicas (SQL Server)
· Transaction_log_Backup_details*****Important
SQL Server Failover Clusters
·Failover Cluster Cmdlets in Windows PowerShell Listed by Task Focus
·Understanding MS DTC Resources in Windows Server 2008 Failover Clusters
·What does Cluster-Aware mean?
·SQL Server Multi-Subnet Clustering (SQL Server)
·View Cluster Quorum NodeWeight Settings
·Whitepaper on SQL Server 2012 AlwaysOn: Multisite Failover Cluster Instance *****
·View and Read Failover Cluster Instance Diagnostics Log
In addition to the above articles, here's a query to check the role of the Availability Replica on the current instance for a particular database, this could be used to schedule all jobs on a SQL Server environment and have them only run if they are in fact, the replica owner.
select role, role_desc
from sys.dm_hadr_availability_replica_states ars
join sys.dm_hadr_database_replica_states drs
on ars.group_id = drs.group_id
where ars.is_local = 1
and drs.is_local = 1
and drs.database_id = db_id('AdventureWorks')
Tomorrow we will start at the same time 10 am Central time and the topics for tomorrow are:
011_SQL_Server_2012_Features_for_Admins_Module_3_Lesson 11_PowerShell_and_WMI_Scripting
012_SQL_Server_2012_Features_for_Admins_Module_3_Lesson_12_Utilizing_Distributed_Replay
015_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 15_Contained_Databases
Regards,
Ankita Matai | Premier Field Engineer| SQL Server | Mobile (: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Wednesday, March 9, 2016 7:52 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March 8th-11th
Hello All,
Following are the important articles on topics that we discussed today (these links are not there in the PowerPoint slides/notes):
· SQL Server Policy-Based Management Team Blog
· Part 1: Anatomy of SQL Server 2008 Resource Governor CPU Demo
· Part 2: Resource Governor CPU Demo on multiple CPUs
· Get Started with Microsoft SQL Server Data Tools
· FAQ: Microsoft SQL Server Data Tools
· Learn More About Microsoft SQL Server Data Tools
Here is the 31-day blog series on Extended Events from Jonathan Kehayias, in which he talks about some interesting scenarios for using Extended Events (some of them are listed below) – An XEvent A Day: 31 days of Extended Events. If you read one blog post from this series every day, you will become a master of Extended Events in just 31 days. Some interesting posts in this series:
Tomorrow we will start at the same time, 10 am Central Time, and the topics for tomorrow are:
013_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 13_Failover_Clustering
014_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 14_AlwaysOn_Availability_Groups
015_SQL_Server_2012_Features_for_Admins_Module_4_Lesson 15_Contained_Databases
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Tuesday, March 8, 2016 7:13 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March 8th-11th
Hello All,
Tomorrow we will start at 10 am Central time.
Following are the links that might be beneficial for the topics discussed during the session today –
· Breakthrough performance with in-memory technologies
· How Fast is Project Codenamed “Hekaton” – It’s ‘Wicked Fast’!
· SQL Server Columnstore Index FAQ
· SQL Server Columnstore Performance Tuning
· How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
· Growing and Shrinking the Buffer Pool Under NUMA
· SQL Server 2012 Setup just got smarter…
· http://mssqlwiki.com/2013/04/22/max-server-memory-do-i-need-to-configure/******
SQL Server 2012 Licensing
· SQL Server 2012 Licensing Guide
· Processor To Core Renewal Guide
· Virtualization Licensing Guide
· SQL Server 2012 Core Factor Table
· Video: Licensing SQL Server 2012
· Features Supported by the Editions of SQL Server 2012
· SQL Server 2012 Licensing Value vs. Oracle Database ******
And some more that I usually share with my customers:
· Upgrade to a Different Edition of SQL Server 2012 (Setup)
· SQL Server 2012 Enterprise Editions Explained
· Using Upgrade Advisor to Prepare for Upgrades
· How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
· Lock Pages in Memory ... do you really need it?
· Do I have to assign the Lock Pages in Memory privilege for Local System?
· Find Non-Buffer Pool Memory (MemToLeave) in "Private Bytes"
· New SQLOS features in SQL Server 2012
· SQL 2012: Indirect Checkpoint Explained !!!
· Virtual Accounts and Managed Service Accounts in SQL Server 2012
· Managed Service Accounts Frequently Asked Questions (FAQ)
· Managed Service Accounts Step-by-Step Guide
· Transaction Log VLFs – too many or too few?
· Understanding Recovery Performance in SQL Server
· PerfMon Objects, Counters, Thresholds, & Utilities for SQL Server
· Disk Partition Alignment Best Practices for SQL Server
· Microsoft® SQL Server 2012 Best Practices Analyzer
· ALTER SERVER CONFIGURATION (Transact-SQL)
The plan for tomorrow is to cover the following lessons:
Lesson 05_Installation_Techniques_Using_the_Command_Prompt
Lesson 06_Upgrade_and_Migration_Overview
Lesson_09_SQL_Server_Management_Studio_and_Developer_Tools
& Maybe Lesson 10_Extended_Events_Enhancements
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Monday, March 7, 2016 5:36 PM
Subject: RE: WorkshopPlus - SQL 2012 Features for Administrator - March 8th-11th
Hello All,
Just a reminder, we start tomorrow at 10 am Central time. Below is the information to connect to the workshop.
Regards,
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
From: Ankita Matai
Sent: Monday, February 29, 2016 7:40 PM
Subject: WorkshopPlus - SQL 2012 Features for Administrator - March 8th-11th
Hello Premier Customers,
You are registered for a Microsoft Remote Delivery starting Tuesday morning and this e-mail will explain the details. Please read this entire email as there are some action items for you to complete before Tuesday’s start to ensure a smooth delivery and positive educational experience.
This workshop is being delivered “remotely,” which means the instructors will be located at a different location than you; the content of the workshop will be hosted in the cloud and downloaded the first day of the workshop.
Americas Education Services Presents
Premier Remote Delivery – WorkshopPLUS - SQL Server 2012: Features for Administrators
March 8-11, 2016
10:00 AM Central Time
Presenter: Ankita Matai
How the Offering Works: The Premier Remote Delivery workshops are delivered via a web platform. The instructor provides a combination of PowerPoint, live demos, and Q&A to deliver a rich and effective learning experience.
Cloud Hosted Lab Environment
The lab environment for the workshop will be hosted on the internet at the following location:
· Lab Access: https://www.premier-education-services.com/EntryPoint
· Lab Code*: MDaXqpqW
*Please test connectivity as soon as possible using the Lab Code highlighted above. To test your connection, log in to the Premier Education Services website (www.Premier-Education-Services.com) and click the Test Your Connection link in the Hosted Labs area of your Premier Entry Point (you’ll need a Microsoft account, formerly known as Windows Live). You can use any Microsoft account. Sign up for one at: http://signup.live.com
Meeting Information
Attendee Access: https://educationservices.eventbuilder.com/SQLSer2012FAMarch8
· This link contains the meeting access points for each day of the delivery.
· Attendees require the following in order to view the stream, which is covered in detail in the EventBuilder Streaming Preparation document attached. Attendees may also wish to provide their IT department with the EventBuilder Minimum Requirements document attached.
1. Supported browsers: Internet Explorer (10 or higher), Chrome, Firefox or Safari
2. Flash Player, which can be installed here
3. Working computer speakers
4. Broadband internet connection of 1 Mbps per attendee
5. Access to video streaming via their network. Ports 80, 443 and 1935 must be open. This can be evaluated via the test page.
6. Webcast support: support@eventbuilder.com
1. Once you click the ‘Attendee Access’ URL, you will be directed to a landing page that lists all of the days of your workshop. Please find the correct day and click the “Join” button.
2. After you click “Join,” you will be asked to fill in a few fields with basic information (first name, last name, email, company).
If you need assistance, we are available to help via email. We will also be available 30 minutes before class starts on Tuesday to help troubleshoot any connectivity issues.
Understand the Logistics – It can be challenging to keep the class on schedule in a remote setting, so we will be adhering to start/end times fairly strictly. We plan to break for about 1 hour for lunch, which typically occurs at noon Central Time. We will also include 15-minute breaks, as we do with our in-person workshops. It may be a good idea to bring lunch for the next three days. Please plan accordingly, as there will not be a recording of this workshop available afterward.
If you have difficulty testing your connectivity to the hosted lab, please contact me directly.
You will receive an email survey after the workshop – please take the time to fill that out so that we may know if we’ve satisfied your expectations. Enjoy the workshop!
Ankita Matai | Premier Field Engineer | SQL Server | Mobile: 302-766-3268 | Iselin, NJ, USA
Please see the below guidance from the MDC Hardware Services Team.
ALCON,
RE: Data floor equipment
Another friendly reminder: when you have finished using crash carts, trash cans, work benches or tables, and other portable work surfaces, they are to be moved to the front of the data hall (west wall). In B2 they should be on the south wall, but at the west end near the single door.
Crash carts are to be stowed neatly!
DO NOT LOCK THE WHEELS ON ANYTHING ON THE DATA FLOOR WHEN YOU ARE NOT USING IT YOURSELF. Even then, the necessity is questionable.
DO NOT remove the dongles from the monitor/USB cable bundle or remove any securing tie-wraps.
The crash carts are not configured exactly the same, though we are working on relative standardization.
Report any problems you encounter to the Hardware Services team, or request additional cables, etc. from them if needed.
Be careful that documentation or media isn’t left in the data halls. Data security needs to be taken very seriously, so please take all documents and media with you when you leave the data hall.
Your participation in helping keep these units neat, clean and maintained will not only help the next person, but will in turn help you out as well.
Regards,
@Michael W. Bradish - ITIL v3 Foundation Certified
Technology Consultant III/Hardware technician/U.S. Public Sector – ITO Delivery
Data Center Strategy and Services / Facilities Management / Hardware Services
DC2 Program, an ISO 20000:2011 Organization
Email michael.bradish@hpe.com hardwareguy@hpe.com
Cell 434 568-7164 | Office 434 374-3541
KJ4NSN
Thanks David!
Greatly appreciated.
The baby and mom are doing well. Will extend your regards.
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Willette, David
Sent: Monday, November 09, 2015 9:57 AM
To: Kegley, Hank; Chando, Kenneth
Cc: Green, Jennifer (US Public Sector); DC2 DATABASE SUPPORT
Subject: Re: Congratulations On your new addition
Please accept my congratulations as well Kenneth!
David Willette
Data Center Delivery Manager / Deputy Program Manager
DHS-DC2 Program, An ISO 20000:2011 Organization
Hewlett Packard Enterprise
david.a.willette@hp.com
T +1 434 374 3564
M +1 434 265 0918
Hewlett Packard Enterprise
Mid-Atlantic Datacenter
------ Original message------
From: Kegley, Hank
Date: Mon, Nov 9, 2015 9:52 AM
To: Chando, Kenneth;
Cc: Willette, David; Green, Jennifer (US Public Sector); DC2 DATABASE SUPPORT
Subject: Congratulations on your new addition
Kenneth, congratulations on the arrival of your daughter!
HANK KEGLEY
SERVICE DELIVERY MANAGER
System Support (L2 UNIX/WINTEL/DATABASE)
US Public Sector (HOMELAND SECURITY)
DC2 Program, An ISO 20000:2011 Organization
Telephone + 1 919.424.5644
Lync +1 919.745.4151
Mobile +1 704 506 3281
FAX: 919.424.9858
Email Hank.kegley@hpe.com
2610 Wycliff Road, Raleigh, North Carolina 27607
Thank you for your feedback |Recognition@hp
Thank you, Ken, for taking care of the lab databases.
Omer
From: Chando, Kenneth
Sent: Wednesday, January 20, 2016 8:57 AM
To: DC2 DATABASE SUPPORT
Subject: Standalone 12c database can now be accessed successfully
Hi Team,
The standalone 12c database can now be accessed successfully. I cleaned up used space on the /u01 mount point, bringing it from 100% down to 70%. See the current status below:
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
ALEX DAVIS
I did the SQL commands below myself. Hopefully that’s it.
From: Davis, Alexander
Sent: Thursday, June 09, 2016 9:11 AM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: requesting help cleaning up the db on d2lseutsh049
Omer/Ken:
Support recommends running the following from the standalone db setup guide to make sure all the users and permissions are in place:
5. Create the Database User opsware_admin
Create the database user 'opsware_admin' with the following privileges.
SQL> create user opsware_admin identified by opsware_admin
default tablespace truth_data temporary tablespace temp
quota unlimited on truth_data;
SQL> grant alter session to opsware_admin with admin option;
SQL> grant create procedure to opsware_admin with admin option;
SQL> grant create public synonym to opsware_admin with admin option;
SQL> grant create sequence to opsware_admin with admin option;
SQL> grant create session to opsware_admin with admin option;
SQL> grant create table to opsware_admin with admin option;
SQL> grant create trigger to opsware_admin with admin option;
SQL> grant create type to opsware_admin with admin option;
SQL> grant create view to opsware_admin with admin option;
SQL> grant delete any table to opsware_admin with admin option;
SQL> grant drop public synonym to opsware_admin with admin option;
SQL> grant select any table to opsware_admin with admin option;
SQL> grant select_catalog_role to opsware_admin with admin option;
SQL> grant query rewrite to opsware_admin with admin option;
SQL> grant restricted session to opsware_admin with admin option;
SQL> grant execute on dbms_utility to opsware_admin with grant option;
SQL> grant analyze any to opsware_admin;
SQL> grant insert, update, delete, select on sys.aux_stats$ to opsware_admin;
SQL> grant gather_system_statistics to opsware_admin;
SQL> grant create job to opsware_admin with admin option;
SQL> grant create any directory to opsware_admin;
SQL> grant drop any directory to opsware_admin;
SQL> grant alter system to opsware_admin;
SQL> grant create role to opsware_admin;
SQL> grant create user to opsware_admin;
SQL> grant alter user to opsware_admin;
SQL> grant drop user to opsware_admin;
SQL> grant create profile to opsware_admin;
SQL> grant alter profile to opsware_admin;
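As a quick sanity check (illustrative only, not part of the setup guide excerpt), the resulting privileges can be verified afterward through the standard dictionary views:
SQL> select privilege, admin_option from dba_sys_privs where grantee = 'OPSWARE_ADMIN' order by privilege;
SQL> select granted_role, admin_option from dba_role_privs where grantee = 'OPSWARE_ADMIN';
SQL> select owner, table_name, privilege, grantable from dba_tab_privs where grantee = 'OPSWARE_ADMIN';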
Once we do this, please re-run rerun.sql. Then I will try the model repository setup script again for the secondary core.
If we find that we just need to start from scratch, the db setup scripts are still in /u01/app/oracle/admin/truth/scripts/
From: Abdalla, Omer
Sent: Wednesday, June 08, 2016 4:37 PM
To: Davis, Alexander <alexander.davis@hpe.com>
Subject: RE: requesting help cleaning up the db on d2lseutsh049
From: Davis, Alexander
Sent: Wednesday, June 08, 2016 2:44 PM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: requesting help cleaning up the db on d2lseutsh049
Importance: High
Could someone from the oracle side (preferably in Raleigh today) execute the following as oracle on d2lseutsh049:
(This is from a support case excerpt where they discussed how to fix the sort of situation I’m in on d2lseutsh049 – the software install is OK but the data in the tables is invalid)
Now we are assuming that there is some residue of the old installation and you just want to clean up the objects from step 3 and reinstall a different version of SA. This step just requires that you clean up all of the objects that were created in step 3.
-- Drop All the users
DROP USER AAA CASCADE;
DROP USER TRUTH CASCADE;
DROP USER LCREP CASCADE;
DROP USER GCADMIN CASCADE;
DROP USER AAA_USER CASCADE;
DROP USER SPIN CASCADE;
DROP USER TWIST CASCADE;
DROP USER OPSWARE_PUBLIC_VIEWS CASCADE;
DROP USER VAULT CASCADE;
-- Drop all the Roles
DROP ROLE DATA_OWNER;
DROP ROLE DATA_USER;
DROP ROLE TRUTH_MOD;
DROP ROLE TRUTH_RO;
DROP ROLE TRUTH_API;
DROP ROLE LCREP_RO;
DROP ROLE LCREP_MOD;
DROP ROLE AAA_ADMIN;
DROP ROLE AAA_READER;
DROP ROLE AAA_WRITER;
DROP ROLE AAA_API;
DROP ROLE GCADMIN_ROLE;
-- Drop the Profile
DROP PROFILE OPSWARE_PUBLIC_VIEWS_PRF;
-- Drop all the public Synonyms
Run the following query to generate the list of synonyms to drop. It will produce a set of DROP statements (if there are any synonyms left undeleted).
Run all of the individual DROP statements that the SELECT generates.
SELECT 'DROP PUBLIC SYNONYM "' || synonym_name || '";'
FROM SYS.dba_synonyms
WHERE owner = 'PUBLIC'
AND table_owner IN ('AAA', 'TRUTH', 'LCREP', 'GCADMIN', 'AAA_USER', 'SPIN', 'TWIST', 'OPSWARE_PUBLIC_VIEWS', 'VAULT');
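One convenient way to run the generated statements is to spool them to a file from SQL*Plus and then execute that file. This is only a sketch (the /tmp path is an arbitrary example, not from the support case); if you run it interactively, trim anything that is not a DROP statement out of the spool file before executing it.
set heading off feedback off pagesize 0 trimspool on
spool /tmp/drop_public_synonyms.sql
SELECT 'DROP PUBLIC SYNONYM "' || synonym_name || '";'
FROM SYS.dba_synonyms
WHERE owner = 'PUBLIC'
AND table_owner IN ('AAA', 'TRUTH', 'LCREP', 'GCADMIN', 'AAA_USER', 'SPIN', 'TWIST', 'OPSWARE_PUBLIC_VIEWS', 'VAULT');
spool off
@/tmp/drop_public_synonyms.sql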
Hi Omer,
The 049 server's rmanbackup script was pointing to the IWMSD ORACLE_SID instead of truth, and its ORACLE_HOME was still 11.2.0.
I had to modify the script (rmanbackup_truth.sh) so that ORACLE_SID=truth and $ORACLE_HOME points to 12.1.0.
Also, the database was in NOARCHIVELOG mode. I had to put it in ARCHIVELOG mode; I then tested the backup and it was successful on 049.
Here is the dbora file for the 049 server. It is pointing to the 12.1.0 ORACLE_HOME: vi /etc/init.d/dbora
Old rmanbackup script
032 server => no oradata, backup, or truth directory exists. The backup script has the oradata and truth directories as part of its backup path.
For rman_disk_backup.sh
For rmanbackup.sh => has IWMSD as its SID
On the 032 server, the oradata/backup/truth directory appears to have been deleted, and the backup script is pointing to this location.
I tried to recreate these directories but got prompted to log in with my oracle password. I tried our regular oracle password and it didn't take it. I also tried Password1, to no avail. See:
Also, I couldn't view the dbora script in /etc/init.d since I couldn't sudo to root for password reasons. See the newly modified rmanbackup script:
032 is in ARCHIVELOG mode. Once the oradata, backup, and truth directories are recreated, rmanbackup_truth.sh should run successfully.
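For reference, the ARCHIVELOG switch described above boils down to the following SQL*Plus steps (a minimal sketch; it requires a short outage, and the final "archive log list" simply confirms the new mode):
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list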
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Hi Alex,
From the database perspective, there seems to be no issue of concern. I can log in to it and it's open.
Also checking on the size of the tablespaces and data files, there isn’t any discrepancy. See below:
SQL> @sh_tsdf.sql
June 6, 2016 Datafiles used by TRUTH database
===================================
Size Used Aut
File Name Tablespace (Mb) (in Mb) Used % Xtn Status
------------------------------------------------------- --------------- ---------- ---------- ------- --- ----------
/u02/oradata/truth/aaa_data02.dbf AAA_DATA 32.00 9.50 29.69 YES ONLINE
/u03/oradata/truth/aaa_data01.dbf AAA_DATA 32.00 4.00 12.50 YES ONLINE
/u04/oradata/truth/aaa_indx01.dbf AAA_INDX 32.00 6.13 19.14 YES ONLINE
/u03/oradata/truth/aaa_indx02.dbf AAA_INDX 32.00 16.00 50.00 YES ONLINE
/u03/oradata/truth/audit_data01.dbf AUDIT_DATA 32.00 2.94 9.18 YES ONLINE
/u02/oradata/truth/audit_indx01.dbf AUDIT_INDX 32.00 3.50 10.94 YES ONLINE
/u04/oradata/truth/lcrep_data01.dbf LCREP_DATA 261.00 178.31 68.32 YES ONLINE
/u03/oradata/truth/lcrep_data02.dbf LCREP_DATA 5.00 5.00 100.00 YES ONLINE
/u02/oradata/truth/lcrep_indx01.dbf LCREP_INDX 133.00 49.00 36.84 YES ONLINE
/u04/oradata/truth/lcrep_indx02.dbf LCREP_INDX 133.00 132.44 99.58 YES ONLINE
/u04/oradata/truth/strg_data01.dbf STRG_DATA 32.00 7.13 22.27 YES ONLINE
/u02/oradata/truth/strg_data02.dbf STRG_DATA 5.00 5.00 100.00 YES ONLINE
/u03/oradata/truth/strg_indx02.dbf STRG_INDX 5.00 5.00 100.00 YES ONLINE
/u02/oradata/truth/strg_indx01.dbf STRG_INDX 32.00 14.13 44.14 YES ONLINE
/u04/oradata/truth/sysaux01.dbf SYSAUX 1,060.00 935.81 88.28 YES ONLINE
/u04/oradata/truth/system01.dbf SYSTEM 726.00 663.94 91.45 YES SYSTEM
/u04/oradata/truth/temp01.dbf TEMP 128.00 128.00 100.00 YES ONLINE
/u03/oradata/truth/temp02.dbf TEMP 160.00 74.00 46.25 YES ONLINE
/u04/oradata/truth/truth_data02.dbf TRUTH_DATA 4,000.00 3,999.94 100.00 YES ONLINE
/u02/oradata/truth/truth_data01.dbf TRUTH_DATA 4,000.00 4,000.00 100.00 YES ONLINE
/u02/oradata/truth/truth_data03.dbf TRUTH_DATA 20,480.00 1,220.00 5.96 YES ONLINE
/u02/oradata/truth/truth_indx02.dbf TRUTH_INDX 645.00 636.00 98.60 YES ONLINE
/u03/oradata/truth/truth_indx01.dbf TRUTH_INDX 517.00 442.31 85.55 YES ONLINE
/u02/oradata/truth/undo02.dbf UNDO 389.00 23.69 6.09 YES ONLINE
/u04/oradata/truth/undo01.dbf UNDO 153.00 26.63 17.40 YES ONLINE
June 6, 2016 Tablespace used by db_name database
===================================
Initial Next
Extent Extent Total Size Used Free Extent
Name in (KB) in (KB) (in Mb) (in Mb) (in Mb) Used % Type Management Status
--------------- ------- ------- ---------- ---------- ---------- ------- --------- ---------- --------
AAA_INDX 64 64.00 22.13 41.88 34.57 PERMANENT LOCAL ONLINE
AUDIT_DATA 64 32.00 2.94 29.06 9.18 PERMANENT LOCAL ONLINE
STRG_DATA 64 37.00 12.13 24.88 32.77 PERMANENT LOCAL ONLINE
SYSAUX 64 1,060.00 935.81 124.19 88.28 PERMANENT LOCAL ONLINE
LCREP_DATA 64 266.00 183.31 82.69 68.91 PERMANENT LOCAL ONLINE
AUDIT_INDX 64 32.00 3.50 28.50 10.94 PERMANENT LOCAL ONLINE
STRG_INDX 64 37.00 19.13 17.88 51.69 PERMANENT LOCAL ONLINE
SYSTEM 64 726.00 663.94 62.06 91.45 PERMANENT LOCAL ONLINE
AAA_DATA 64 64.00 13.50 50.50 21.09 PERMANENT LOCAL ONLINE
TRUTH_DATA 64 28,480.00 9,219.94 19,260.06 32.37 PERMANENT LOCAL ONLINE
TRUTH_INDX 64 1,162.00 1,078.31 83.69 92.80 PERMANENT LOCAL ONLINE
UNDO 64 542.00 50.31 491.69 9.28 UNDO LOCAL ONLINE
LCREP_INDX 64 266.00 181.44 84.56 68.21 PERMANENT LOCAL ONLINE
TEMP 1,024 1,024 288.00 202.00 86.00 70.14 TEMPORARY LOCAL ONLINE
Redo Log Files
GROUP# Status MEMBER Megabytes
------- ---------- ------------------------------------------------------- ---------
1 CURRENT /u02/oradata/truth/redo1a.log 100
2 INACTIVE /u02/oradata/truth/redo2a.log 100
3 INACTIVE /u02/oradata/truth/redo3a.log 100
Control Files
Status NAME IS_ BLOCK_SIZE FILE_SIZE_BLKS CON_ID
---------- ------------------------------------------------------------ --- ---------- -------------- ----------
/u04/oradata/truth/control01.ctl NO 16384 1236 0
/u02/oradata/truth/control02.ctl NO 16384 1236 0
/u03/oradata/truth/control03.ctl NO 16384 1236 0
SQL>
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
From: Davis, Alexander
Sent: Friday, June 03, 2016 4:27 PM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: check database on d2lseutsh049 in lab please
Database team:
Please check out the health of the database on d2lseutsh049 in the lab. It seems to be running, but the application is unable to start. It would appear that something catastrophic has happened to the database, as the ‘u’ volumes are using way too little space:
032:
/dev/mapper/OpswareVG00-u01Vol
576G 241G 307G 44% /u01
/dev/mapper/OpswareVG00-u02Vol
74G 49G 23G 69% /u02
/dev/mapper/OpswareVG00-u03Vol
74G 23G 48G 32% /u03
/dev/mapper/OpswareVG00-u04Vol
74G 66G 4.7G 94% /u04
049:
/dev/mapper/opswareVG-u01vol
692G 17G 641G 3% /u01
/dev/mapper/opswareVG-u02vol
200G 26G 164G 14% /u02
/dev/mapper/opswareVG-u03vol
200G 776M 189G 1% /u03
/dev/mapper/opswareVG-u04vol
200G 6.5G 184G 4% /u04
Thanks,
Alex
Omer,
I don’t understand what you mean by “query the schema objects”. Can you give me the exact command and how to run it?
Thanks,
Alex
From: Abdalla, Omer
Sent: Monday, June 06, 2016 10:35 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
Chando, Kenneth <kenneth.chando@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: check database on d2lseutsh049 in lab please
Alex,
As system DBAs, we don't know much about the content of the tablespaces. If the tablespaces exist and their files are in place, that is all that is needed to bring the database up and access it. If any file were deleted, the instance would not start. If, however, an application process deleted all of the schema objects and emptied these tablespaces, we would not know about it.
So, my suggestion is to log in to each instance and query the schema objects (do a simple select count(*) on both instances and see whether the counts match).
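For example, a comparison along these lines could be run on both 032 and 049 (just a sketch; the owner list is borrowed from the SA cleanup statements earlier in this document):
SQL> select owner, count(*) as object_count
     from dba_objects
     where owner in ('TRUTH', 'AAA', 'AAA_USER', 'LCREP', 'GCADMIN', 'SPIN', 'TWIST', 'OPSWARE_PUBLIC_VIEWS', 'VAULT')
     group by owner
     order by owner;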
Thanks,
Omer
From: Davis, Alexander
Sent: Monday, June 06, 2016 10:25 AM
To: Chando, Kenneth <kenneth.chando@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: check database on d2lseutsh049 in lab please
No concern about the difference in actual disk utilization?
049:
/dev/mapper/opswareVG-u01vol
692G 17G 641G 3% /u01
/dev/mapper/opswareVG-u02vol
200G 26G 164G 14% /u02
/dev/mapper/opswareVG-u03vol
200G 776M 189G 1% /u03
/dev/mapper/opswareVG-u04vol
200G 6.5G 184G 4% /u04
032:
/dev/mapper/OpswareVG00-u01Vol
576G 253G 295G 47% /u01
/dev/mapper/OpswareVG00-u02Vol
74G 49G 23G 69% /u02
/dev/mapper/OpswareVG00-u03Vol
74G 23G 48G 32% /u03
/dev/mapper/OpswareVG00-u04Vol
74G 66G 4.7G 94% /u04
DBA team:
Do we have oracle backups to the local filesystem on 049?
Thanks,
Alex
From: Griffin, Brad
Sent: Monday, June 06, 2016 10:39 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
DC2-STAR TEAM <dc2starteam@hpe.com>
Cc: Ignatz, Bryan <bryan.ignatz@hpe.com>
Subject: RE: hpsa down in lab
Alex,
I have a full system backup from Friday 5/27. I can’t see where the Oracle backups were ever configured on 049. I recall Oracle backups being configured on 032, but not 049. Is it possible that the Oracle backups could have been taken to a local filesystem on the server?
Thanks,
Brad
From: Davis, Alexander
Sent: Monday, June 06, 2016 10:04 AM
To: DC2-STAR TEAM <dc2starteam@hpe.com>
Cc: Griffin, Brad <brad.griffin@hpe.com>;
Ignatz, Bryan <bryan.ignatz@hpe.com>
Subject: hpsa down in lab
HPSA is currently down in the lab. Last Friday, I found the app down and numerous segfault errors in the system log. The app could not be restarted. The server appears to be missing most of the data in the /u0x oracle volumes.
It will probably need to be either restored from backup or completely rebuilt. As all the managed hosts in the lab are pointed to the down dc2_lab facility core, you will be unable to use HPSA to interact with them until this issue is resolved.
Brad, can you see when the last successful backup was for d2lseutsh049? I would need both system and oracle backups.
Thanks,
Alex
I have not seen any recent (i.e., for 2016-06-03) error messages from the database side. The last error we got was on 2016-05-30.
Checking with the O/S support team might give more insight.
Also, if you have a snapshot of disk utilization from before the 06/03/2016 incident, that could also be of help. See all of the error messages I have to date for the truth database on the 049 server below:
oracle@d2lseutsh049.localdomain[truth]#adrci
ADRCI: Release 12.1.0.2.0 - Production on Mon Jun 6 14:20:43 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/oracle"
adrci>show homes
ADR Homes:
diag/tnslsnr/d2lseutsh049/listener
diag/rdbms/truth/truth
adrci>set homepath diag/rdbms/truth/truth
adrci>SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-%'"
ADR Home = /u01/app/oracle/diag/rdbms/truth/truth:
*************************************************************************
Output the results to file: /tmp/alert_17095_1397_truth_1.ado
2016-04-04 18:00:00.521000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_11944.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-04-11 18:00:02.671000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_2287.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-04-18 18:00:00.247000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_9471.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-04-25 18:00:03.850000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_30540.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-05-02 18:00:01.573000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_15223.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-05-09 18:00:01.273000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_21438.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-05-16 18:00:03.446000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_5026.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-05-23 18:00:03.632000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_3149.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
2016-05-30 18:00:01.840000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/truth/truth/trace/truth_j000_22298.trc:
ORA-12012: error on auto execute of job "OPSWARE_ADMIN"."OPSWARE_ADMIN_SYSTEM_STATS"
ORA-20001: An error was encountered - -20000 -ERROR- ORA-20000: Unable to gather system statistics : insufficient privileges while running gather_opsware_admin_sys_stats
ORA-06512: at "OPSWARE_ADMIN.GATHER_OPSWARE_ADMIN_SYS_STATS", line 12
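Side note: these recurring ORA-20000 "insufficient privileges" errors from the OPSWARE_ADMIN_SYSTEM_STATS job look like the kind of symptom the GATHER_SYSTEM_STATISTICS grant in the setup-guide excerpt above is meant to cover. A quick, purely illustrative check (and re-grant, if it turns out to be missing) would be:
SQL> select granted_role from dba_role_privs where grantee = 'OPSWARE_ADMIN';
SQL> grant gather_system_statistics to opsware_admin;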
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
Let me see what support says – whether just the schemas or also the tablespaces. I don’t want to have to rebuild this from scratch and I’m sure you don’t either.
From: Abdalla, Omer
Sent: Monday, June 06, 2016 11:56 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
Griffin, Brad <brad.griffin@hpe.com>;
Chando, Kenneth <kenneth.chando@hpe.com>
Subject: RE: hpsa down in lab
If by “empty out the truth instance on 049” you mean dropping the truth and any other app owned schemas then yes we can do that. If you want the tablespaces dropped as well please let us know.
BTW I am having the same old issue of not being able to connect to 32 from jump server with Access Key denied error message
Thanks,
Omer
From: Davis, Alexander
Sent: Monday, June 06, 2016 11:33 AM
To: Griffin, Brad <brad.griffin@hpe.com>;
Abdalla, Omer <omer.abdalla@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: hpsa down in lab
Brad:
OK that’s fine. We should try to get it working again. Does 049 have any OS backup issues?
I have to assess whether we have any disk corruption on 049. If not, the base OS is probably good and we can focus on the DB restore.
DBA team:
Is there a way to empty out the truth instance on 049 so we’re back to how it was when you set it up for the HPSA software install? If we can do that, I can go through the procedure we used to load the export of data from 032 to 049 again, and in theory, it will replicate everything back from 032.
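A minimal sketch of the 032-to-049 load path described above, assuming a schema-mode Data Pump export on 032 and import on 049; the directory object, dump file name, and schema list here are illustrative assumptions only, not the exact procedure that was used:
-- On 032 (source), as oracle:
expdp system/<password> schemas=TRUTH,AAA,LCREP directory=DATA_PUMP_DIR dumpfile=truth_032.dmp logfile=truth_032_exp.log
-- Copy the dump file to 049, then on 049 (target):
impdp system/<password> schemas=TRUTH,AAA,LCREP directory=DATA_PUMP_DIR dumpfile=truth_032.dmp logfile=truth_049_imp.log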
From: Griffin, Brad
Sent: Monday, June 06, 2016 11:21 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
Abdalla, Omer <omer.abdalla@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: hpsa down in lab
Oracle on 032 was being backed up successfully at one point, but it looks like the script started failing over a month ago. Unfortunately, since it is the lab, we only retain backups for 1 month, so anything over a month ago would no longer be available.
Brad
From: Davis, Alexander
Sent: Monday, June 06, 2016 11:17 AM
To: Abdalla, Omer <omer.abdalla@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Cc: Griffin, Brad <brad.griffin@hpe.com>
Subject: RE: hpsa down in lab
I have an email from Tue 1/26/2016 1:57 PM to Brad to initiate netbackup backups on 049. I was under the impression this would include oracle, but if it doesn’t I guess there is no backup.
Do we know if 032 oracle is being backed up?
From: Abdalla, Omer
Sent: Monday, June 06, 2016 11:08 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
DC2 DATABASE SUPPORT <dc2db@hpe.com>
Cc: Griffin, Brad <brad.griffin@hpe.com>
Subject: RE: hpsa down in lab
Alex,
I checked earlier with Ken and we did not see any configured backups for this environment, either locally or to Netbackup. We typically do not back up our own lab servers locally, especially Oracle, due to the limited space given to us in the lab. But since this is not a DBA server, I would think any backup requirement needs to be relayed by the app owner (whether local or using Netbackup) so proper scripts can be generated and Netbackup policies created and tested. In production that takes place through the use of Work Orders.
Thanks,
Omer
From: Davis, Alexander
Sent: Monday, June 06, 2016 10:44 AM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Cc: Griffin, Brad <brad.griffin@hpe.com>
Subject: FW: hpsa down in lab
DBA team:
Do we have oracle backups to the local filesystem on 049?
Thanks,
Alex
From: Griffin, Brad
Sent: Monday, June 06, 2016 10:39 AM
To: Davis, Alexander <alexander.davis@hpe.com>;
DC2-STAR TEAM <dc2starteam@hpe.com>
Cc: Ignatz, Bryan <bryan.ignatz@hpe.com>
Subject: RE: hpsa down in lab
Alex,
I have a full system backup from Friday 5/27. I can’t see where the Oracle backups were ever configured on 049. I recall Oracle backups being configured on 032, but not 049. Is it possible that the Oracle backups could have been taken to a local filesystem on the server?
Thanks,
Brad
From: Davis, Alexander
Sent: Monday, June 06, 2016 10:04 AM
To: DC2-STAR TEAM <dc2starteam@hpe.com>
Cc: Griffin, Brad <brad.griffin@hpe.com>;
Ignatz, Bryan <bryan.ignatz@hpe.com>
Subject: hpsa down in lab
HPSA is currently down in the lab. Last Friday, I found the app down and numerous segfault errors in the system log. The app could not be restarted. The server appears to be missing most of the data in the /u0x oracle volumes.
It will probably need to be either restored from backup or completely rebuilt. As all the managed hosts in the lab are pointed to the down dc2_lab facility core, you will be unable to use HPSA to interact with them until this issue is resolved.
Brad, can you see when the last successful backup was for d2lseutsh049? I would need both system and oracle backups.
Thanks,
Alex
Hi Omer,
The 049 server rmanbackup script was pointing to the IWMSD ORACLE_SID instead of truth, and its OH was still 11.2.0.
I had to modify the script (rmanbackup_truth.sh) so that ORACLE_SID=truth and $ORACLE_HOME=12.1.0.
Also, the database was in NOARCHIVELOG mode. I had to put it in ARCHIVELOG mode (a minimal sketch of that switch follows at the end of these notes), and the test backup then ran successfully on 049.
Here is the dbora file for the 049 server. It's pointing to the 12.1.0 OH: vi /etc/init.d/dbora
Old rmanbackup script
032 server => no oradata, backup, or truth directory exists. The backup script has the oradata and truth directories as part of its backup path.
For rman_disk_backup.sh
For rmanbackup.sh => has IWMSD as its SID
On the 032 server, the oradata/backup/truth directory appears to have been deleted, yet the backup script is still pointing to this location.
I did try to recreate these directories but got prompted to log in with my oracle password. I tried our regular oracle password and it didn't take it. I also tried Password1 to no avail. See:
Also, I couldn't view the dbora script in /etc/init.d since I couldn't sudo to root for password reasons. See the new modified rmanbackup script:
032 is in ARCHIVELOG mode. Once the oradata, backup, and truth directories are recreated, rmanbackup_truth.sh should run successfully.
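A minimal sketch of the NOARCHIVELOG-to-ARCHIVELOG switch mentioned above, assuming a SYSDBA connection on the truth instance (the commands are standard; this particular session is illustrative):
SQL> archive log list                  -- confirm the current mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list                  -- should now show Archive Mode enabled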
Best,
Ken Chando
Office Phone: (919) 424-5394
Cell Phone: (434) 265-4134
Email : Kenneth.Chando@hpe.com
Thank you for your feedback |Recognition@hp
I did the SQL commands below myself. Hopefully that’s it.
From: Davis, Alexander
Sent: Thursday, June 09, 2016 9:11 AM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: RE: requesting help cleaning up the db on d2lseutsh049
Omer/Ken:
Support recommends running the following from the standalone db setup guide to make sure all the users and permissions are in place:
5. Create the Database User opsware_admin
Create the database user 'opsware_admin' with the following privileges.
SQL> create user opsware_admin identified by opsware_admin
default tablespace truth_data temporary tablespace temp
quota unlimited on truth_data;
SQL> grant alter session to opsware_admin with admin option;
SQL> grant create procedure to opsware_admin with admin option;
SQL> grant create public synonym to opsware_admin with admin option;
SQL> grant create sequence to opsware_admin with admin option;
SQL> grant create session to opsware_admin with admin option;
SQL> grant create table to opsware_admin with admin option;
SQL> grant create trigger to opsware_admin with admin option;
SQL> grant create type to opsware_admin with admin option;
SQL> grant create view to opsware_admin with admin option;
SQL> grant delete any table to opsware_admin with admin option;
SQL> grant drop public synonym to opsware_admin with admin option;
SQL> grant select any table to opsware_admin with admin option;
SQL> grant select_catalog_role to opsware_admin with admin option;
SQL> grant query rewrite to opsware_admin with admin option;
SQL> grant restricted session to opsware_admin with admin option;
SQL> grant execute on dbms_utility to opsware_admin with grant option;
SQL> grant analyze any to opsware_admin;
SQL> grant insert, update, delete, select on sys.aux_stats$ to opsware_admin;
SQL> grant gather_system_statistics to opsware_admin;
SQL> grant create job to opsware_admin with admin option;
SQL> grant create any directory to opsware_admin;
SQL> grant drop any directory to opsware_admin;
SQL> grant alter system to opsware_admin;
SQL> grant create role to opsware_admin;
SQL> grant create user to opsware_admin;
SQL> grant alter user to opsware_admin;
SQL> grant drop user to opsware_admin;
SQL> grant create profile to opsware_admin;
SQL> grant alter profile to opsware_admin;
Once we do this please re-run rerun.sql. Then I will try the model repository setup script again for the secondary core.
I think if we find we just need to start from scratch the db setup scripts are still in /u01/app/oracle/admin/truth/scripts/
From: Abdalla, Omer
Sent: Wednesday, June 08, 2016 4:37 PM
To: Davis, Alexander <alexander.davis@hpe.com>
Subject: RE: requesting help cleaning up the db on d2lseutsh049
From: Davis, Alexander
Sent: Wednesday, June 08, 2016 2:44 PM
To: DC2 DATABASE SUPPORT <dc2db@hpe.com>
Subject: requesting help cleaning up the db on d2lseutsh049
Importance: High
Could someone from the oracle side (preferably in Raleigh today) execute the following as oracle on d2lseutsh049:
(This is from a support case excerpt where they discussed how to fix the sort of situation I’m in on d2lseutsh049 – the software install is OK but the data in the tables is invalid)
Now we are assuming that there is some residue of the old installation and you just want to clean up the objects created in step 3 and reinstall a different version of SA. This step just requires that you clean up all the objects that were created in step 3.
-- Drop All the users
DROP USER AAA CASCADE;
DROP USER TRUTH CASCADE;
DROP USER LCREP CASCADE;
DROP USER GCADMIN CASCADE;
DROP USER AAA_USER CASCADE;
DROP USER SPIN CASCADE;
DROP USER TWIST CASCADE;
DROP USER OPSWARE_PUBLIC_VIEWS CASCADE;
DROP USER VAULT CASCADE;
-- Drop all the Roles
DROP ROLE DATA_OWNER;
DROP ROLE DATA_USER;
DROP ROLE TRUTH_MOD;
DROP ROLE TRUTH_RO;
DROP ROLE TRUTH_API;
DROP ROLE LCREP_RO;
DROP ROLE LCREP_MOD;
DROP ROLE AAA_ADMIN;
DROP ROLE AAA_READER;
DROP ROLE AAA_WRITER;
DROP ROLE AAA_API;
DROP ROLE GCADMIN_ROLE;
-- Drop the Profile
DROP PROFILE OPSWARE_PUBLIC_VIEWS_PRF;
-- Drop all the public Synonyms
Run the following query to generate the list of synonyms to drop. It will produce a set of DROP PUBLIC SYNONYM statements (if any such synonyms are left). Run each of the generated statements; a sketch of one way to do this with spool follows the query.
SELECT 'DROP PUBLIC SYNONYM "' || synonym_name || '";'
FROM SYS.dba_synonyms
WHERE owner = 'PUBLIC'
AND table_owner IN ('AAA', 'TRUTH', 'LCREP', 'GCADMIN', 'AAA_USER', 'SPIN', 'TWIST', 'OPSWARE_PUBLIC_VIEWS', 'VAULT');
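One way to run the generated statements is to spool them to a script and then execute that script; a sketch assuming SQL*Plus (the spool file name is illustrative):
SQL> set heading off feedback off pagesize 0 linesize 200
SQL> spool /tmp/drop_public_synonyms.sql
SQL> SELECT 'DROP PUBLIC SYNONYM "' || synonym_name || '";'
  2  FROM SYS.dba_synonyms
  3  WHERE owner = 'PUBLIC'
  4  AND table_owner IN ('AAA', 'TRUTH', 'LCREP', 'GCADMIN', 'AAA_USER', 'SPIN', 'TWIST', 'OPSWARE_PUBLIC_VIEWS', 'VAULT');
SQL> spool off
SQL> @/tmp/drop_public_synonyms.sql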
Best,
Ken Chando
AMAG QUIZZES
1. How would you approach database performance: By identifying bottlenecks and fixing them
"2. How do you force the optimizer to use a new plan: By first enabling baseline capture using : alter session set optimizer_capture_sql_plan_baselines = true;
3. Difference between local and global index: A global index is a one-to-many relationship, allowing one index partition to map to many table partitions while A local index is a one-to-one mapping between a index partition and a table partition.
4. What is the difference between DB file sequential read and DB File Scattered Read?: db file sequential read wait event has three parameters: file#, first block#, and block count while db file scattered Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return
"5. Difference between nested loop joins and hash joins: Hash joins can not look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source, nested loops can
"6. What factors do you consider when creating indexes on tables? How do you select the column for an index?:• Non-key columns are defined in the INCLUDE clause of the CREATE INDEX statement.
• Non-key columns can only be defined on non-clustered indexes on tables or indexed views.
"7. If you were involved at the early stages of database development and coding, what are some of the measures you would suggest for optimal performance?
1. Get candid feedback from users. Determine the performance project's scope and subsequent performance goals, as well as performance goals for the future. This process is key in future capacity planning.
2. Get a full set of operating system, database, and application statistics from the system when the performance is both good and bad. If these are not available, then get whatever is available. Missing statistics are analogous to missing evidence at a crime scene: They make detectives work harder and it is more time-consuming.
3. Sanity-check the operating systems of all systems involved with user performance. By sanity-checking the operating system, you look for hardware or operating system resources that are fully utilized. List any over-used resources as symptoms for analysis later. In addition, check that all hardware shows no errors or diagnostics.
4. Check for the top ten most common mistakes with Oracle, and determine if any of these are likely to be the problem. List these as symptoms for later analysis. These are included because they represent the most likely problems. ADDM automatically detects and reports nine of these top ten issues. See Chapter 6, "Automatic Performance Diagnostics" and "Top Ten Mistakes Found in Oracle Systems".
5. Build a conceptual model of what is happening on the system using the symptoms as clues to understand what caused the performance problems. See "A Sample Decision Process for Performance Conceptual Modeling".
6. Propose a series of remedy actions and the anticipated behavior to the system, then apply them in the order that can benefit the application the most. ADDM produces recommendations each with an expected benefit. A golden rule in performance work is that you only change one thing at a time and then measure the differences. Unfortunately, system downtime requirements might prohibit such a rigorous investigation method. If multiple changes are applied at the same time, then try to ensure that they are isolated so that the effects of each change can be independently validated.
8. Is creating an index online possible?: YES
"9. What is the difference between Redo, Rollback and Undo?:Redo log files record changes to the database as a result of transactions and internal Oracle server actions,undo and rollback segment terms are used interchangeably in db world. It is due to the compatibility issue of oracle.Undo
What is Row Chaining and Row Migration?
"10. How to find out background processes?: select sid, process, program from v$session s join v$bgprocess using (paddr)
where s.status = 'ACTIVE' and rownum < 5;"
11. How to find background processes from OS:$ ps -ef|grep ora_|grep SID
"12. How do you troubleshoot connectivity issues?: Verify path to TNS_ADMIN is set correctly and that all the connection identifier(SIDs) exists in the tnsnames.ora file
13. Why are bind variables important?:Bind variables have a huge impact on the stress in the shared pool Can you force literals to be converted into bind variables?: YES
14. What is adaptive cursor sharing? It allows the optimizer to generate a set of plans that are optimal for different sets of bind values
15. In Data Pump, if you restart a job, how will it know where to resume?: By attaching to the job by name. That is: expdp system/manager attach="Job_Name"
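A minimal sketch for quiz question 2 above: capture a SQL plan baseline for the statement of interest, then review (and, if needed, evolve) the captured plans. The parameter, view, and package named here are standard; the session itself is illustrative only:
SQL> alter session set optimizer_capture_sql_plan_baselines = true;
SQL> -- run the SQL statement of interest at least twice so a baseline is captured
SQL> alter session set optimizer_capture_sql_plan_baselines = false;
SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;
SQL> -- to evolve/accept a newer plan for a given handle:
SQL> -- select dbms_spm.evolve_sql_plan_baseline(sql_handle => '&sql_handle') from dual;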
Terms of Use.........................................................................................................................................
Copyright Notice..................................................................................................................................
Disclaimer Notice...............................................................................................................................5
LAB Machine usage guidelines............................................................................................................. 6
How to identify Operating System (os) commands and Database commands in this Guide................. 6
Installation of Oracle Database Server Software..................................................................................... 7
Download software............................................................................................................................. 7
Obtain platform specific guide for installation................................................................................... 10
Oracle Database Server Software - Installation.................................................................................. 12
Introduction................................................................................................................................... 12
Installation..................................................................................................................................... 14
Oracle Examples software..................................................................................................................... 28
Introduction..................................................................................................................................... 28
Installation....................................................................................................................................... 28
Central Inventory Verification........................................................................................................... 35
Database Creation................................................................................................................................. 36
Introduction..................................................................................................................................... 36
Manual Method................................................................................................................................. 37
How to check database event logfile (called alert logfile)................................................................... 40
How to check the “background processes” started as part of the instance startup............................. 41
Environment variables for administering Oracle databases............................................................ 41
Database Creation using “dbca” tool (Database Configuration Assistant)............................................. 43
Tablespaces and Datafiles...................................................................................................................... 82
Online REDO LOG Files and Archived Redo log Files.............................................................................. 87
Controlfiles........................................................................................................................................... 91
Oracle Instance (Memory Architecture and Background Processes information)................................. 94
Database Users/Schemas..................................................................................................................... 95
SQL*Plus Program............................................................................................................................. 100
Oracle user-managed full database offline backup (COLD BACKUP)................................................... 106
Database Restore from the COLD Backup........................................................................................... 109
Create new database using the COLD Backup (Cloning).................................................................. 109
Oracle Networking............................................................................................................................. 117
Rebuild source database from COLD Backup (Source database media failure scenario)....................... 139
Oracle user-managed full database online backup (HOT BACKUP)...................................................... 145
Database Restore and Recovery from the HOT Backup....................................................................... 148
Create new database from HOT Backup using CANCEL based recovery (Cloning)............................ 148
Extra LAB - Create new database from HOT Backup using TIME based recovery (Cloning)................... 157
RMAN Backups.................................................................................................................................. 165
Flash Recovery Area (renamed as Fast Recovery Area since 11g R2).............................................. 165
RMAN Full Database Backup........................................................................................................... 167
RMAN Full Database Backup – Compression option........................................................................ 168
RMAN Metadata (source database’s controlfile)......................................................................... 169
Server Parameter File (spfile) and pfile............................................................................................... 170
RMAN Configuration settings and Controlfile Auto Backups............................................................. 171
FlashBack Database feature (Available in Enterprise Edition only)...................................................... 172
Extra Lab - RMAN Full Database Backup in custom locations................................................................ 172
RMAN Archive log backups alone........................................................................................................ 173
RMAN backup (CATALOG mode).......................................................................................................... 175
Create RMAN CATALOG schema..................................................................................................... 175
FULL DATABASE BACKUP using RMAN catalog schema............................................................... 176
RMAN Metadata (from CATALOG schema)................................................................................. 177
Database CLONING from RMAN backup – DUPLICATE Command......................................................... 178
Extra LAB............................................................................................................................................ 183
Extra LAB - Rebuild source database from HOT Backup using CANCEL based recovery (Source database media failure scenario)................................................................................................................... 183
Data Pump Export.............................................................................................................................. 191
Table Mode.................................................................................................................................... 191
Schema Mode................................................................................................................................ 192
Full database mode........................................................................................................................ 193
DATA PUMP IMPORT.......................................................................................................................... 195
Table mode import......................................................................................................................... 195
Schema mode import..................................................................................................................... 196
Database Links................................................................................................................................... 198
Oracle Database Releases and Upgrades............................................................................................ 206
History of Oracle databases versions.............................................................................................. 206
FAQ................................................................................................................................................ 207
Back port patch.............................................................................................................................. 208
Patchsets are cumulative................................................................................................................ 208
How to apply interim patches (one-off patches)............................................................................. 209
About opatch utility.................................................................................................................... 209
Interim patch apply mechanism using opatch utility................................................................... 211
How to apply CPU patches (quarterly security patches).................................................................. 213
Upgrade Overview from one major version to another major version............................................. 213
Real World Jargon and first things to do when you join a company..................................................... 215
MISCELLANY....................................................................................................................................... 225
Database wide ERROR Tracing............................................................................................................ 229
Database Security............................................................................................................................... 237
Performance Tuning........................................................................................................................... 243
V$ performance views.................................................................................................................... 243
TRACING database sessions........................................................................................................... 251
SQL statement Execution Plan (EXPLAIN PLAN) and Optimizer Statistics......................................... 255
DBMS_JOBS (JOB_QUEUE_PROCESSES init parameter)...................................................................... 259
References......................................................................................................................................... 260
Automatic Storage Management (ASM)............................................................................................. 261
Additional Information....................................................................................................................... 288
How to drop a database................................................................................................................. 288
INDEX Monitoring (To identify UNWANTED indexes in the database)............................................... 290
Oracle Internal Exceptions – ORA-00600 and ORA-07445 error codes............................................. 294
APPENDIX........................................................................................................................................... 295
Home Assignment 1......................................................................................................................... 295
Home Assignment 2......................................................................................................................... 295
Home Assignment 3......................................................................................................................... 296
Home Assignment 4......................................................................................................................... 296
Home Assignment 5......................................................................................................................... 296
Reference Material............................................................................................................................. 297
Linux Commands and Shell Scripting............................................................................................... 297
Oracle DBA self-study Reading Material.......................................................................................... 298
Oracle Data Dictionary objects (Partial list only)................................................................................. 300
Oracle database initialization parameters (partial list only)............................................................. 301
Oracle data dictionary packages (partial list only)........................................................................... 302
Install WinSCP………………………………………………………………………………………………………………………………..177
PROJECT ASSIGNMENTS……………………………………………………… 296
REAL APPLICATION CLUSTER (RAC)
ORACLE 12c (CDB and PDB)
MISCELLANEOUS SCRIPTS
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ scripts
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ sql
SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 22 01:10:10 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select name from v$database;
NAME
---------
TAMSP1
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> connect system/Toast2u_22
Connected.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:10:57
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d2asenpnp001.dc2.dhs.gov)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 07-SEP-2017 02:26:59
Uptime 14 days 22 hr. 43 min. 57 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2asenpnp001/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d2asenpnp001.dc2.dhs.gov)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "TAMSP1" has 1 instance(s).
Instance "TAMSP1", status READY, has 1 handler(s) for this service...
Service "TAMSP1XDB" has 1 instance(s).
Instance "TAMSP1", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ clear
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ tnsping TAMSP1
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:11:34
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.1.0/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = d2asenpnp001.dc2.dhs.gov)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = d2asenpnp001-dr)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TAMSP1)))
OK (0 msec)
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ clear
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ last -x reboot
reboot system boot 2.6.32-696.3.2.e Wed Jul 26 01:35 - 01:11 (57+23:36)
reboot system boot 2.6.32-696.3.1.e Wed Jun 28 01:22 - 01:30 (28+00:08)
reboot system boot 2.6.32-696.el6.x Wed May 3 01:57 - 01:17 (55+23:19)
reboot system boot 2.6.32-642.15.1. Wed Mar 29 02:23 - 01:52 (34+23:29)
reboot system boot 2.6.32-642.15.1. Wed Mar 29 01:47 - 02:18 (00:30)
reboot system boot 2.6.32-642.13.1. Wed Feb 1 02:56 - 01:42 (55+22:46)
reboot system boot 2.6.32-642.13.1. Wed Feb 1 02:27 - 02:51 (00:24)
reboot system boot 2.6.32-642.11.1. Wed Dec 21 02:42 - 02:22 (41+23:40)
reboot system boot 2.6.32-642.6.2.e Fri Nov 18 02:26 - 02:37 (33+00:10)
reboot system boot 2.6.32-642.4.2.e Fri Oct 21 01:06 - 02:21 (28+01:15)
reboot system boot 2.6.32-642.1.1.e Thu Sep 29 02:00 - 01:01 (21+23:00)
wtmp begins Wed May 11 19:30:38 2016
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:12:43
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d2asenpnp001.dc2.dhs.gov)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 07-SEP-2017 02:26:59
Uptime 14 days 22 hr. 45 min. 44 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2asenpnp001/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d2asenpnp001.dc2.dhs.gov)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "TAMSP1" has 1 instance(s).
Instance "TAMSP1", status READY, has 1 handler(s) for this service...
Service "TAMSP1XDB" has 1 instance(s).
Instance "TAMSP1", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TAMST %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[kenneth.chando@d2aseprnp012 ~]$ sudo su - oracle
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ scripts
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ clear
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:14:58
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d2aseprnp012.dc2.dhs.gov)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 31-AUG-2017 01:42:58
Uptime 21 days 23 hr. 32 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2aseprnp012/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d2aseprnp012.dc2.dhs.gov)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "TAMST" has 1 instance(s).
Instance "TAMST", status READY, has 1 handler(s) for this service...
Service "TAMSTXDB" has 1 instance(s).
Instance "TAMST", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ sql
SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 22 01:15:11 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select name from v$database;
NAME
---------
TAMST
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> connect system/Toast2u_22
Connected.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ tnsping TAMST
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:16:11
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.1.0/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = d2aseprnp012.dc2.dhs.gov)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TAMST)))
OK (0 msec)
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$ last -x reboot
reboot system boot 2.6.32-696.3.2.e Fri Jul 21 01:33 - 01:16 (62+23:43)
reboot system boot 2.6.32-696.3.1.e Fri Jun 23 01:14 - 01:28 (28+00:14)
reboot system boot 2.6.32-696.el6.x Fri Apr 28 01:52 - 01:09 (55+23:16)
reboot system boot 2.6.32-642.15.1. Fri Mar 24 01:24 - 01:47 (35+00:23)
reboot system boot 2.6.32-642.13.1. Fri Feb 24 02:31 - 01:19 (27+22:48)
reboot system boot 2.6.32-642.11.1. Fri Dec 16 02:27 - 02:26 (69+23:59)
reboot system boot 2.6.32-642.6.2.e Fri Nov 11 02:04 - 02:22 (35+00:18)
reboot system boot 2.6.32-642.4.2.e Fri Oct 14 01:06 - 01:59 (28+00:53)
reboot system boot 2.6.32-642.1.1.e Tue Sep 27 02:01 - 01:01 (16+22:59)
&&&& After TAMSP1 patching by UNIX &&&
SQL> select name,version,status,log_mode,open_mode,flashback_on from v$database,v$instance;
NAME VERSION STATUS LOG_MODE OPEN_MODE
--------- ----------------- ------------ ------------ --------------------
FLASHBACK_ON
------------------
TAMSP1 12.1.0.2.0 OPEN ARCHIVELOG READ WRITE
YES
SQL> set linesize 250 pagesize 2000
SQL> /
NAME VERSION STATUS LOG_MODE OPEN_MODE FLASHBACK_ON
--------- ----------------- ------------ ------------ -------------------- ------------------
TAMSP1 12.1.0.2.0 OPEN ARCHIVELOG READ WRITE YES
wtmp begins Mon May 16 13:51:33 2016
oracle@d2aseprnp012.dc2.dhs.gov[TAMST]$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TAMS DR TAMSP2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ ssh D1ASEDRNP001
WARNING: THIS IS A U.S. DEPARTMENT OF HOMELAND SECURITY COMPUTER SYSTEM. THIS
COMPUTER SYSTEM, INCLUDING ALL RELATED EQUIPMENT, NETWORKS AND NETWORK DEVICES
(SPECIFICALLY INCLUDING INTERNET ACCESS), ARE PROVIDED ONLY FOR AUTHORIZED U.S.
GOVERNMENT USE. DHS COMPUTER SYSTEMS MAY BE MONITORED FOR ALL LAWFUL PURPOSES,
INCLUDING TO ENSURE THAT THEIR USE IS AUTHORIZED, FOR MANAGEMENT OF THE SYSTEM,
TO FACILITATE PROTECTION AGAINST UNAUTHORIZED ACCESS, AND TO VERIFY SECURITY
PROCEDURES, SURVIVABILITY AND OPERATIONAL SECURITY. MONITORING INCLUDES ACTIVE
ATTACKS BY AUTHORIZED DHS ENTITIES TO TEST OR VERIFY THE SECURITY OF THIS
SYSTEM. DURING MONITORING, INFORMATION MAY BE EXAMINED, RECORDED, COPIED AND
USED FOR AUTHORIZED PURPOSES. ALL INFORMATION, INCLUDING PERSONAL INFORMATION,
PLACED ON OR SENT OVER THIS SYSTEM MAY BE MONITORED. USE OF THIS DHS COMPUTER
SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS
SYSTEM. UNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION. EVIDENCE OF
UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE,
CRIMINAL OR OTHER ADVERSE ACTION. USE OF THIS SYSTEM CONSTITUTES CONSENT TO
MONITORING FOR THESE PURPOSES.
oracle@d1asedrnp001's password:
Last login: Mon Aug 21 17:21:35 2017 from 10.237.129.150
WARNING: THIS IS A U.S. DEPARTMENT OF HOMELAND SECURITY COMPUTER SYSTEM. THIS
COMPUTER SYSTEM, INCLUDING ALL RELATED EQUIPMENT, NETWORKS AND NETWORK DEVICES
(SPECIFICALLY INCLUDING INTERNET ACCESS), ARE PROVIDED ONLY FOR AUTHORIZED U.S.
GOVERNMENT USE. DHS COMPUTER SYSTEMS MAY BE MONITORED FOR ALL LAWFUL PURPOSES,
INCLUDING TO ENSURE THAT THEIR USE IS AUTHORIZED, FOR MANAGEMENT OF THE SYSTEM,
TO FACILITATE PROTECTION AGAINST UNAUTHORIZED ACCESS, AND TO VERIFY SECURITY
PROCEDURES, SURVIVABILITY AND OPERATIONAL SECURITY. MONITORING INCLUDES ACTIVE
ATTACKS BY AUTHORIZED DHS ENTITIES TO TEST OR VERIFY THE SECURITY OF THIS
SYSTEM. DURING MONITORING, INFORMATION MAY BE EXAMINED, RECORDED, COPIED AND
USED FOR AUTHORIZED PURPOSES. ALL INFORMATION, INCLUDING PERSONAL INFORMATION,
PLACED ON OR SENT OVER THIS SYSTEM MAY BE MONITORED. USE OF THIS DHS COMPUTER
SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS
SYSTEM. UNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION. EVIDENCE OF
UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE,
CRIMINAL OR OTHER ADVERSE ACTION. USE OF THIS SYSTEM CONSTITUTES CONSENT TO
MONITORING FOR THESE PURPOSES.
oracle@d1asedrnp001[TAMSP2]#
oracle@d1asedrnp001[TAMSP2]# pwd
/u01/app/oracle/home
oracle@d1asedrnp001[TAMSP2]# scripts
oracle@d1asedrnp001[TAMSP2]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:22:17
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d1asedrnp001-dr)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 07-SEP-2017 02:00:43
Uptime 14 days 23 hr. 21 min. 34 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d1asedrnp001/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.79.203.214)(PORT=1521)))
Services Summary...
Service "TAMSP2" has 1 instance(s).
Instance "TAMSP2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
oracle@d1asedrnp001[TAMSP2]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 22 01:22:35 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select name from v$database;
NAME
---------
TAMSP1
SQL> select status from v$instance;
STATUS
------------
MOUNTED
SQL> set linesize 250 pagesize 2000
SQL> select name,status,version,log_mode,open_mode,flashback_on from v$database,v$instance;
NAME STATUS VERSION LOG_MODE OPEN_MODE FLASHBACK_ON
--------- ------------ ----------------- ------------ -------------------- ------------------
TAMSP1 MOUNTED 12.1.0.2.0 ARCHIVELOG MOUNTED NO
SQL>
SQL> connect system/Toast2u_22
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
Warning: You are no longer connected to ORACLE.
SQL>
oracle@d1asedrnp001[TAMSP2]# last -x reboot
reboot system boot 2.6.32-696.3.2.e Wed Jul 26 01:31 - 01:27 (57+23:56)
reboot system boot 2.6.32-696.3.1.e Wed Jun 28 01:21 - 01:26 (28+00:04)
reboot system boot 2.6.32-696.el6.x Wed May 3 02:08 - 01:16 (55+23:08)
reboot system boot 2.6.32-642.15.1. Wed Mar 29 01:47 - 02:03 (35+00:16)
reboot system boot 2.6.32-642.13.1. Wed Feb 1 02:26 - 01:42 (55+23:15)
reboot system boot 2.6.32-642.11.1. Wed Dec 21 02:51 - 02:22 (41+23:30)
reboot system boot 2.6.32-642.6.2.e Fri Nov 18 02:29 - 02:46 (33+00:16)
reboot system boot 2.6.32-642.4.2.e Fri Oct 21 01:06 - 02:24 (28+01:18)
reboot system boot 2.6.32-642.1.1.e Thu Sep 29 01:47 - 01:01 (21+23:13)
reboot system boot 2.6.32-573.22.1. Tue May 10 13:20 - 01:42 (141+12:22)
reboot system boot 2.6.32-573.22.1. Tue May 10 11:57 - 13:13 (01:16)
reboot system boot 2.6.32-573.22.1. Mon May 9 18:30 - 11:51 (17:20)
reboot system boot 2.6.32-573.22.1. Mon May 9 16:09 - 18:24 (02:15)
reboot system boot 2.6.32-573.18.1. Mon May 9 15:01 - 16:04 (01:03)
reboot system boot 2.6.32-573.18.1. Mon May 9 12:51 - 14:56 (02:05)
reboot system boot 2.6.32-573.18.1. Fri May 6 13:15 - 14:56 (3+01:41)
reboot system boot 2.6.32-573.18.1. Mon Apr 4 16:25 - 16:58 (00:32)
reboot system boot 2.6.32-573.18.1. Mon Apr 4 15:59 - 16:03 (00:04)
reboot system boot 2.6.32-573.18.1. Fri Apr 1 00:29 - 15:54 (3+15:25)
reboot system boot 2.6.32-573.18.1. Thu Mar 31 19:20 - 00:23 (05:02)
reboot system boot 2.6.32-573.18.1. Wed Mar 30 18:16 - 19:15 (1+00:59)
reboot system boot 2.6.32-573.18.1. Tue Mar 29 20:13 - 18:11 (21:58)
reboot system boot 2.6.32-573.18.1. Tue Mar 29 18:54 - 20:08 (01:13)
reboot system boot 2.6.32-573.18.1. Mon Mar 28 18:36 - 18:49 (1+00:13)
reboot system boot 2.6.32-573.18.1. Mon Mar 28 15:38 - 18:31 (02:52)
reboot system boot 2.6.32-573.18.1. Mon Mar 28 14:58 - 15:33 (00:35)
reboot system boot 2.6.32-573.18.1. Fri Mar 18 18:27 - 14:53 (9+20:26)
wtmp begins Fri Mar 18 18:27:29 2016
oracle@d1asedrnp001[TAMSP2]#
SQL> connect system/Toast2u@TAMSP2
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
Warning: You are no longer connected to ORACLE.
SQL> connect system/Quake.Q2Y2010?
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
SQL> exit
oracle@d1asedrnp001[TAMSP2]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 22-SEP-2017 01:36:59
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d1asedrnp001-dr)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 07-SEP-2017 02:00:43
Uptime 14 days 23 hr. 36 min. 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d1asedrnp001/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.79.203.214)(PORT=1521)))
Services Summary...
Service "TAMSP2" has 1 instance(s).
Instance "TAMSP2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
oracle@d1asedrnp001[TAMSP2]#
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CBP CRQ000000042615 %%%%%%%%%%%%%%%%%%%%%%%%%%%%% 26/27/24/31 %%%%%%%%%%%%%%%%%%%%%%%
D1ASEDRCB031; D1ASEDRCB032; D1ASEDRCB021; D1ASEDRCB023; D1ASEDRCB024; D2ASEPRCB023; D2ACLPRCB026; D2ACLPRCB027; D2ASEPRCB031; D2ASEPRCB032; D2ASEPRCB021; D2ASEPRCB022; D2ASEPRCB027; D2ASEPRCB028; D2ASEPRCB029; Server Services
========================
[kenneth.chando@d2aseprcb029 ~]$ sudo su - oracle
Last login: Thu Sep 21 22:11:45 EDT 2017
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# clear
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# last -x reboot
reboot system boot 3.10.0-514.el7.x Thu Sep 21 22:10 - 22:15 (00:04)
wtmp begins Fri Sep 1 05:41:12 2017
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:15:20
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d2aseprcb029)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:10:34
Uptime 0 days 0 hr. 4 min. 46 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2aseprcb029/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d2aseprcb029)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "IWMSTR" has 1 instance(s).
Instance "IWMSTR", status READY, has 1 handler(s) for this service...
Service "IWMSTRXDB" has 1 instance(s).
Instance "IWMSTR", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 21 22:15:23 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> set linesize 250 pagesize 2000
SQL> select name,status,version,log_mode,open_mode,flashback_on from v$database,v$instance;
NAME STATUS VERSION LOG_MODE OPEN_MODE FLASHBACK_ON
--------- ------------ ----------------- ------------ -------------------- ------------------
IWMSTR OPEN 12.1.0.2.0 ARCHIVELOG READ WRITE RESTORE POINT ONLY
SQL> connect system/Toast2u_22
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect system/Quake.Q2Y2010?
ERROR:
ORA-01017: invalid username/password; logon denied
SQL> exit
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# tnsping IWMSTR
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:17:30
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.1.0/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = d2aseprcb029)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = IWMSTR)))
OK (0 msec)
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# cat /etc/oratab
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
IWMSTR:/u01/app/oracle/product/12.1.0:N
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:17:48
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d2aseprcb029)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:10:34
Uptime 0 days 0 hr. 7 min. 14 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2aseprcb029/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d2aseprcb029)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "IWMSTR" has 1 instance(s).
Instance "IWMSTR", status READY, has 1 handler(s) for this service...
Service "IWMSTRXDB" has 1 instance(s).
Instance "IWMSTR", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2aseprcb029.cbp.dc2.dhs.gov[IWMSTR]#
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CBP 26 is a cluster => cat /etc/oratab =+ASM =====
[kenneth.chando@d2aclprcb026 ~]$ sudo su - oracle
Last login: Thu Sep 21 22:17:37 EDT 2017
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# cat /etc/oratab
#Backup file is /u01/app/12.1.0/grid/srvm/admin/oratab.bak.d2aclprcb026 line added by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/u01/app/12.1.0/grid:N # line added by Agent
IWMSP:/u01/app/oracle/product/12.1.0:N # line added by Agent
-MGMTDB:/u01/app/12.1.0/grid:N # line added by Agent
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ora.DATA_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ora.FRA_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE d2aclprcb026 STABLE
ora.asm
ONLINE ONLINE d2aclprcb026 Started,STABLE
ora.net1.network
ONLINE ONLINE d2aclprcb026 STABLE
ora.ons
ONLINE ONLINE d2aclprcb026 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE d2aclprcb026 169.254.22.193 192.1
68.15.74,STABLE
ora.cvu
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.d2aclprcb026.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.d2aclprcb027.vip
1 ONLINE INTERMEDIATE d2aclprcb026 FAILED OVER,STABLE
ora.iwmsp.db
1 ONLINE ONLINE d2aclprcb026 Open,STABLE
2 ONLINE OFFLINE STABLE
ora.mgmtdb
1 ONLINE ONLINE d2aclprcb026 Open,STABLE
ora.oc4j
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.scan1.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.scan2.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.scan3.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
--------------------------------------------------------------------------------
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# scripts
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:24:45
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:19:10
Uptime 0 days 0 hr. 5 min. 35 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2aclprcb026/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.239.65.216)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.239.65.218)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=d2aclprcb026.cbp.dc2.dhs.gov)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0/admin/IWMSP/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "IWMSP" has 1 instance(s).
Instance "IWMSP1", status READY, has 1 handler(s) for this service...
Service "IWMSPXDB" has 1 instance(s).
Instance "IWMSP1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "d2aclprcb026a1" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# goasm
oracle@d2aclprcb026.cbp.dc2.dhs.gov[+ASM1]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:25:01
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:19:10
Uptime 0 days 0 hr. 5 min. 51 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2aclprcb026/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.239.65.216)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.239.65.218)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=d2aclprcb026.cbp.dc2.dhs.gov)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0/admin/IWMSP/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "IWMSP" has 1 instance(s).
Instance "IWMSP1", status READY, has 1 handler(s) for this service...
Service "IWMSPXDB" has 1 instance(s).
Instance "IWMSP1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "d2aclprcb026a1" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2aclprcb026.cbp.dc2.dhs.gov[+ASM1]# exit
[kenneth.chando@d2aclprcb026 ~]$ sudo su - oracle
Last login: Thu Sep 21 22:23:11 EDT 2017
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# scripts
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 21 22:25:17 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> set linesize 250 pagesize 2000
SQL> select status,name,version,log_mode,open_mode,flashback_on from gv$database,gv$instance;
STATUS NAME VERSION LOG_MODE OPEN_MODE FLASHBACK_ON
------------ --------- ----------------- ------------ -------------------- ------------------
OPEN IWMSP 12.1.0.2.0 ARCHIVELOG READ WRITE NO
SQL>
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# srvctl status database -d IWMSP
Instance IWMSP1 is running on node d2aclprcb026
Instance IWMSP2 is running on node d2aclprcb027
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]#
%%%%%%%%%%%%%%%%% CBP 27 has changed status from INTERMEDIATE(Failed Over) to ONLINE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# srvctl status database -d IWMSP
Instance IWMSP1 is running on node d2aclprcb026
Instance IWMSP2 is running on node d2aclprcb027
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
ora.DATA_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
ora.FRA_DG.dg
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
ora.asm
ONLINE ONLINE d2aclprcb026 Started,STABLE
ONLINE ONLINE d2aclprcb027 Started,STABLE
ora.net1.network
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
ora.ons
ONLINE ONLINE d2aclprcb026 STABLE
ONLINE ONLINE d2aclprcb027 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE d2aclprcb027 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE d2aclprcb026 169.254.22.193 192.1
68.15.74,STABLE
ora.cvu
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.d2aclprcb026.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.d2aclprcb027.vip
1 ONLINE ONLINE d2aclprcb027 STABLE
ora.iwmsp.db
1 ONLINE ONLINE d2aclprcb026 Open,STABLE
2 ONLINE ONLINE d2aclprcb027 Open,STABLE
ora.mgmtdb
1 ONLINE ONLINE d2aclprcb026 Open,STABLE
ora.oc4j
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.scan1.vip
1 ONLINE ONLINE d2aclprcb027 STABLE
ora.scan2.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
ora.scan3.vip
1 ONLINE ONLINE d2aclprcb026 STABLE
--------------------------------------------------------------------------------
oracle@d2aclprcb026.cbp.dc2.dhs.gov[IWMSP1]#
[9/21/2017 3:08 PM] Andrews III, Anthony (CTR):
Ken
[9/21/2017 3:08 PM] Chando, Kenneth (CTR):
Hi Tony
[9/21/2017 3:08 PM] Andrews III, Anthony (CTR):
you verifying DBs tonight?
[9/21/2017 3:09 PM] Chando, Kenneth (CTR):
yep, at 9pm and 10pm respectively for NPPD and CBP
[9/21/2017 3:09 PM] Andrews III, Anthony (CTR):
ok
the nppd server is listed, do you know the CBP databases to check?
I will be done with nppd reboot at 9:15PM
[9/21/2017 3:10 PM] Chando, Kenneth (CTR):
The RFC is 42615. I would assume it should be in the CI relationship tab
but will verify ...
[9/21/2017 3:10 PM] Andrews III, Anthony (CTR):
CBP should be ready for you at 10:15 - 10:30 or so
[9/21/2017 3:11 PM] Chando, Kenneth (CTR):
ok. Thanks for the heads up Tony
[9/21/2017 3:17 PM] Chando, Kenneth (CTR):
Tony, from the relationship tab, there are 10 DC2 servers for CBP and 5 DC1 servers. Does that number sounds right to you?
[9/21/2017 3:18 PM] Andrews III, Anthony (CTR):
that is correct
[9/21/2017 3:18 PM] Chando, Kenneth (CTR):
D1ASEDRCB031; D1ASEDRCB032; D1ASEDRCB021; D1ASEDRCB023; D1ASEDRCB024; D2ASEPRCB023; D2ACLPRCB026; D2ACLPRCB027; D2ASEPRCB031; D2ASEPRCB032; D2ASEPRCB021; D2ASEPRCB022; D2ASEPRCB027; D2ASEPRCB028; D2ASEPRCB029
ok...Will be checking them tonight
[9/21/2017 3:18 PM] Andrews III, Anthony (CTR):
no
you just have to check four
you don't know the CBP database servers?
[9/21/2017 3:19 PM] Chando, Kenneth (CTR):
yep..
I do, I definitely will go through out database lists and do the ones that we support
[9/21/2017 3:20 PM] Andrews III, Anthony (CTR):
ok
[9/21/2017 9:29 PM] Chando, Kenneth (CTR):
Hi Tony, are you done yet with NPPD patching? Was there suppose to be any reboot?
[9/21/2017 9:30 PM] Andrews III, Anthony (CTR):
it's rebooting
[9/21/2017 9:30 PM] Chando, Kenneth (CTR):
just saw DR going down...wanted to make sure
[9/21/2017 9:33 PM] Andrews III, Anthony (CTR):
done
[9/21/2017 9:34 PM] Chando, Kenneth (CTR):
ok, let me take a look
[9/21/2017 9:44 PM] Chando, Kenneth (CTR):
Hi Tony, which command would be best to verify reboot status for the servers. Would this be ok last -x reboot?
[9/21/2017 9:44 PM] Andrews III, Anthony (CTR):
what are you trying to verify?
[9/21/2017 9:46 PM] Chando, Kenneth (CTR):
Just checked with that command for TAMSP1 and TAMSP2 and last date shows 2016. Was just trying to see if the reboot today could show. I verified all database resources and it looks well. But when I tried to test remote connectivity to D1ASEDRNP001 from within the database, I get this error: ORA-01033: ORACLE initialization or shutdown in progress
[9/21/2017 9:47 PM] Andrews III, Anthony (CTR):
system was rebooted and has only been up 17 min
[root@d2aseprnp012 ~]# uptime
01:47:09 up 17 min, 1 user, load average: 0.10, 0.07, 0.09
[9/21/2017 9:48 PM] Andrews III, Anthony (CTR):
the system is completely booted into multiuser mode
[9/21/2017 9:49 PM] Chando, Kenneth (CTR):
wtmp begins Fri Mar 18 18:27:29 2016
oracle@d1asedrnp001[TAMSP2]# uptime
01:47:11 up 58 days, 16 min, 1 user, load average: 0.15, 0.04, 0.01
oracle@d1asedrnp001[TAMSP2]#
[9/21/2017 9:50 PM] Andrews III, Anthony (CTR):
why are you on that system
????
[9/21/2017 9:50 PM] Chando, Kenneth (CTR):
It's part of the CI listed in the RFC
[9/21/2017 9:51 PM] Andrews III, Anthony (CTR):
I asked you earlier if you knew which system we were doing.
2aseprnp012
d2aseprnp012
rfc43079
[9/21/2017 9:52 PM] Chando, Kenneth (CTR):
EOC had 3 systems listed for TAMS (PROD,TEST and DR). So you guys did just PROD
yep...check admin2 tab on the RFC
[9/21/2017 9:52 PM] Andrews III, Anthony (CTR):
pre-prod
I don't know what you are looking at but my task is:
NPPD FPS TAMS Pre-Prod:
D2ASEPRNP012
[9/21/2017 9:56 PM] Andrews III, Anthony (CTR):
click on the tasks tab, not admin2
and look at your task. you new here or what?
[9/21/2017 9:57 PM] Chando, Kenneth (CTR):
All set sir!!! Thanks...!!! Might be ...:)
done with CBP yet?
[9/21/2017 9:58 PM] Andrews III, Anthony (CTR):
same thing for CBP
[9/21/2017 9:58 PM] Chando, Kenneth (CTR):
ok sir, hang-on...
[9/21/2017 9:58 PM] Andrews III, Anthony (CTR):
haven't started CBP yet
CBP has not sent the start email yet
[9/21/2017 9:58 PM] Chando, Kenneth (CTR):
got you. Will be waiting for your confirmation
ok
[9/21/2017 10:13 PM] Andrews III, Anthony (CTR):
d2aseprcb029 is ready for you to check
[9/21/2017 10:13 PM] Chando, Kenneth (CTR):
ok
[9/21/2017 10:18 PM] Chando, Kenneth (CTR):
d2aseprcb029 is good to go
[9/21/2017 10:18 PM] Andrews III, Anthony (CTR):
ok
two minutes and you can check two more
[9/21/2017 10:19 PM] Chando, Kenneth (CTR):
ok
[9/21/2017 10:21 PM] Andrews III, Anthony (CTR):
I am pretty sure the DR DB is not started but don't know if you check it or not. I did reboot it. d1asedrcb024
d2aclprcb026 is ready to be checked as well
[9/21/2017 10:21 PM] Chando, Kenneth (CTR):
Not yet...
ok, let me check these now
[9/21/2017 10:27 PM] Andrews III, Anthony (CTR):
d2aclprcb027 is up now, may take several minutes before the RAC service starts
[9/21/2017 10:30 PM] Chando, Kenneth (CTR):
yep, DB resources in the cluster nodes 26/27 are all good to go
[9/21/2017 10:30 PM] Andrews III, Anthony (CTR):
you are finished then
[9/21/2017 10:31 PM] Chando, Kenneth (CTR):
Awesome!!!
%%%%%%%%%%%%%%%%%%% CBP DR (24) does not have its ASM disk groups mounted automatically, so mount them; otherwise you will not be able to connect to the ORACLE database. See crsctl stat res -t below %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
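A minimal sketch of mounting those disk groups by hand (disk group names taken from the crsctl output below; run from the ASM environment, e.g. via the goasm alias used further down):
$ sqlplus / as sysasm
SQL> select name, state from v$asm_diskgroup;       -- dismounted groups show up here
SQL> alter diskgroup DATA_DG mount;
SQL> alter diskgroup FRA_DG mount;
(srvctl start diskgroup -diskgroup DATA_DG should do the same thing through the clusterware resource.)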
[kenneth.chando@d1asedrcb024 ~]$ sudo su - oracle
Last login: Thu Sep 21 22:14:38 EDT 2017
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# cat /etc/oratab
#Backup file is /u01/app/12.1.0/grid/srvm/admin/oratab.bak.d1asedrcb024 line added by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
IWMSDR:/u01/app/oracle/product/12.1.0:N # line added by Agent
+ASM:/u01/app/12.1.0/grid:N # line added by Agent
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_DG.dg
ONLINE OFFLINE d1asedrcb024 STABLE
ora.FRA_DG.dg
ONLINE OFFLINE d1asedrcb024 STABLE
ora.LISTENER.lsnr
ONLINE INTERMEDIATE d1asedrcb024 Not All Endpoints Re
gistered,STABLE
ora.asm
ONLINE ONLINE d1asedrcb024 Started,STABLE
ora.ons
OFFLINE OFFLINE d1asedrcb024 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.iwmsdr.db
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:35:07
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:14:49
Uptime 0 days 0 hr. 20 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d1asedrcb024/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d1asedrcb024.cbp.dc1.dhs.gov)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "IWMSDR" has 1 instance(s).
Instance "IWMSDR", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# goasm
oracle@d1asedrcb024.cbp.dc1.dhs.gov[+ASM]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:35:49
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=d1asedrcb024)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:14:49
Uptime 0 days 0 hr. 21 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d1asedrcb024/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d1asedrcb024.cbp.dc1.dhs.gov)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "IWMSDR" has 1 instance(s).
Instance "IWMSDR", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
oracle@d1asedrcb024.cbp.dc1.dhs.gov[+ASM]# exit
[kenneth.chando@d1asedrcb024 ~]$ sudo su - oracle
Last login: Thu Sep 21 22:34:07 EDT 2017 on pts/0
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 21 22:36:01 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL> set linesize 250 pagesize 2000
SQL> select name,version,status,log_mode,open_mode,flashback_on from gv$instance,gv$database;
select name,version,status,log_mode,open_mode,flashback_on from gv$instance,gv$database
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
SQL> ed
Wrote file afiedt.buf
1* select name,version,status,log_mode,open_mode,flashback_on from v$instance,v$database
SQL> /
select name,version,status,log_mode,open_mode,flashback_on from v$instance,v$database
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
SQL> select status from v$database;
select status from v$database
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
SQL> exit
Disconnected
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_DG.dg
ONLINE OFFLINE d1asedrcb024 STABLE
ora.FRA_DG.dg
ONLINE OFFLINE d1asedrcb024 STABLE
ora.LISTENER.lsnr
ONLINE INTERMEDIATE d1asedrcb024 Not All Endpoints Re
gistered,STABLE
ora.asm
ONLINE ONLINE d1asedrcb024 Started,STABLE
ora.ons
OFFLINE OFFLINE d1asedrcb024 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.iwmsdr.db
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]#
See ora.iwmsdr.db (database resource is OFFLINE) and ora.diskmon (OFFLINE) => fix this...
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.iwmsdr.db
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------
=========================================================================================== IWMSDR (DR) database is idle
ora.evmd
1 ONLINE ONLINE d1asedrcb024 STABLE
ora.iwmsdr.db
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2017 22:48:57
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 21-SEP-2017 22:14:49
Uptime 0 days 0 hr. 34 min. 8 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d1asedrcb024/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=d1asedrcb024.cbp.dc1.dhs.gov)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "IWMSDR" has 1 instance(s).
Instance "IWMSDR", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# alog
Archived Log entry 9758 added for thread 1 sequence 24137 ID 0x3a25ee59 dest 1:
Thu Sep 21 13:06:10 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8076.1472.955285569
Thu Sep 21 13:06:10 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24135.1469.955285553
Thu Sep 21 13:06:11 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24136.1470.955285559
Thu Sep 21 13:06:12 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24137.1471.955285569
Media Recovery Waiting for thread 1 sequence 24138
Thu Sep 21 13:45:03 2017
RFS[756]: Selected log 9 for thread 1 sequence 24138 dbid 975565660 branch 934818782
Thu Sep 21 13:45:05 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24138 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Media Recovery Waiting for thread 2 sequence 8077
Thu Sep 21 13:45:06 2017
Archived Log entry 9759 added for thread 1 sequence 24138 ID 0x3a25ee59 dest 1:
Thu Sep 21 13:47:02 2017
RFS[754]: Possible network disconnect with primary database
Thu Sep 21 13:47:08 2017
RFS[751]: Possible network disconnect with primary database
Thu Sep 21 14:18:00 2017
RFS[756]: Selected log 9 for thread 1 sequence 24139 dbid 975565660 branch 934818782
Thu Sep 21 14:18:03 2017
Archived Log entry 9760 added for thread 1 sequence 24139 ID 0x3a25ee59 dest 1:
Thu Sep 21 14:29:13 2017
RFS[755]: Possible network disconnect with primary database
Thu Sep 21 14:37:00 2017
RFS[756]: Selected log 9 for thread 1 sequence 24140 dbid 975565660 branch 934818782
Thu Sep 21 14:37:03 2017
RFS[760]: Assigned to RFS process (PID:32315)
RFS[760]: Selected log 12 for thread 2 sequence 8077 dbid 975565660 branch 934818782
Thu Sep 21 14:37:03 2017
Archived Log entry 9761 added for thread 1 sequence 24140 ID 0x3a25ee59 dest 1:
Thu Sep 21 14:37:03 2017
Archived Log entry 9762 added for thread 2 sequence 8077 ID 0x3a25ee59 dest 1:
Thu Sep 21 14:37:04 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8077.1476.955291023
Thu Sep 21 14:37:04 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24138.1473.955287905
Thu Sep 21 14:37:05 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24139.1474.955289883
Thu Sep 21 14:37:07 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24140.1475.955291023
Media Recovery Waiting for thread 1 sequence 24141
Thu Sep 21 15:05:47 2017
RFS[758]: Possible network disconnect with primary database
Thu Sep 21 15:05:47 2017
RFS[757]: Possible network disconnect with primary database
Thu Sep 21 15:06:20 2017
RFS[759]: Possible network disconnect with primary database
Thu Sep 21 15:08:15 2017
RFS[756]: Selected log 9 for thread 1 sequence 24141 dbid 975565660 branch 934818782
Thu Sep 21 15:08:18 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24141 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Thu Sep 21 15:08:18 2017
Archived Log entry 9763 added for thread 1 sequence 24141 ID 0x3a25ee59 dest 1:
Thu Sep 21 15:08:19 2017
Media Recovery Waiting for thread 2 sequence 8078
Thu Sep 21 15:40:36 2017
RFS[756]: Selected log 9 for thread 1 sequence 24142 dbid 975565660 branch 934818782
Thu Sep 21 15:40:41 2017
Archived Log entry 9764 added for thread 1 sequence 24142 ID 0x3a25ee59 dest 1:
RFS[756]: Selected log 9 for thread 1 sequence 24143 dbid 975565660 branch 934818782
Thu Sep 21 15:47:15 2017
RFS[761]: Assigned to RFS process (PID:8246)
RFS[761]: Selected log 12 for thread 2 sequence 8078 dbid 975565660 branch 934818782
Thu Sep 21 15:47:15 2017
Error ORA-235 occurred during an un-locked control file transaction. This
error can be ignored. The control file transaction will be retried.
Thu Sep 21 15:47:15 2017
Archived Log entry 9765 added for thread 1 sequence 24143 ID 0x3a25ee59 dest 1:
Thu Sep 21 15:47:15 2017
Archived Log entry 9766 added for thread 2 sequence 8078 ID 0x3a25ee59 dest 1:
Thu Sep 21 15:47:16 2017
Error ORA-235 occurred during an un-locked control file transaction. This
error can be ignored. The control file transaction will be retried.
Thu Sep 21 15:47:16 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8078.1480.955295235
Thu Sep 21 15:47:16 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24141.1477.955292899
Thu Sep 21 15:47:17 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24142.1478.955294841
Thu Sep 21 15:47:20 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24143.1479.955295235
Media Recovery Waiting for thread 1 sequence 24144
Thu Sep 21 16:32:25 2017
RFS[762]: Assigned to RFS process (PID:13362)
RFS[762]: Selected log 9 for thread 1 sequence 24144 dbid 975565660 branch 934818782
Thu Sep 21 16:32:33 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24144 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Thu Sep 21 16:32:33 2017
Archived Log entry 9767 added for thread 1 sequence 24144 ID 0x3a25ee59 dest 1:
Thu Sep 21 16:32:33 2017
Media Recovery Waiting for thread 2 sequence 8079
Thu Sep 21 16:37:16 2017
RFS[760]: Possible network disconnect with primary database
Thu Sep 21 17:05:01 2017
RFS[763]: Assigned to RFS process (PID:17066)
RFS[763]: Selected log 9 for thread 1 sequence 24145 dbid 975565660 branch 934818782
Thu Sep 21 17:05:07 2017
Archived Log entry 9768 added for thread 1 sequence 24145 ID 0x3a25ee59 dest 1:
Thu Sep 21 17:47:26 2017
RFS[761]: Possible network disconnect with primary database
Thu Sep 21 17:47:26 2017
RFS[756]: Possible network disconnect with primary database
Thu Sep 21 17:59:30 2017
RFS[763]: Selected log 9 for thread 1 sequence 24146 dbid 975565660 branch 934818782
Thu Sep 21 17:59:32 2017
RFS[764]: Assigned to RFS process (PID:23023)
RFS[764]: Selected log 12 for thread 2 sequence 8079 dbid 975565660 branch 934818782
Thu Sep 21 17:59:34 2017
Archived Log entry 9769 added for thread 2 sequence 8079 ID 0x3a25ee59 dest 1:
Thu Sep 21 17:59:34 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8079.1483.955303175
Thu Sep 21 17:59:34 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24144.1481.955297953
Thu Sep 21 17:59:36 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24145.1482.955299907
Thu Sep 21 17:59:39 2017
Archived Log entry 9770 added for thread 1 sequence 24146 ID 0x3a25ee59 dest 1:
Thu Sep 21 17:59:39 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24146.1484.955303177
Media Recovery Waiting for thread 1 sequence 24147
Thu Sep 21 18:30:51 2017
RFS[763]: Selected log 9 for thread 1 sequence 24147 dbid 975565660 branch 934818782
Thu Sep 21 18:30:56 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24147 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Media Recovery Waiting for thread 2 sequence 8080
Thu Sep 21 18:31:00 2017
Archived Log entry 9771 added for thread 1 sequence 24147 ID 0x3a25ee59 dest 1:
Thu Sep 21 18:32:46 2017
RFS[762]: Possible network disconnect with primary database
Thu Sep 21 19:47:20 2017
RFS[765]: Assigned to RFS process (PID:2823)
RFS[765]: Selected log 9 for thread 1 sequence 24148 dbid 975565660 branch 934818782
Thu Sep 21 19:47:21 2017
RFS[766]: Assigned to RFS process (PID:2825)
RFS[766]: Selected log 12 for thread 2 sequence 8080 dbid 975565660 branch 934818782
Thu Sep 21 19:47:22 2017
Archived Log entry 9772 added for thread 2 sequence 8080 ID 0x3a25ee59 dest 1:
Thu Sep 21 19:47:22 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8080.1486.955309643
Media Recovery Waiting for thread 1 sequence 24148 (in transit)
Thu Sep 21 19:47:23 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24148 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Thu Sep 21 19:47:35 2017
Archived Log entry 9773 added for thread 1 sequence 24148 ID 0x3a25ee59 dest 1:
Thu Sep 21 19:47:36 2017
Media Recovery Waiting for thread 1 sequence 24149
Thu Sep 21 19:59:36 2017
RFS[764]: Possible network disconnect with primary database
Thu Sep 21 20:27:03 2017
RFS[766]: Selected log 12 for thread 2 sequence 8081 dbid 975565660 branch 934818782
Thu Sep 21 20:27:09 2017
Archived Log entry 9774 added for thread 2 sequence 8081 ID 0x3a25ee59 dest 1:
Thu Sep 21 20:31:17 2017
RFS[763]: Possible network disconnect with primary database
Thu Sep 21 21:47:44 2017
RFS[765]: Possible network disconnect with primary database
Thu Sep 21 22:01:43 2017
RFS[767]: Assigned to RFS process (PID:10533)
RFS[767]: Selected log 9 for thread 1 sequence 24149 dbid 975565660 branch 934818782
Thu Sep 21 22:01:45 2017
Recovery of Online Redo Log: Thread 1 Group 9 Seq 24149 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_9.532.949601067
Thu Sep 21 22:01:45 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_2_seq_8081.1488.955312029
Thu Sep 21 22:01:46 2017
Archived Log entry 9775 added for thread 1 sequence 24149 ID 0x3a25ee59 dest 1:
Thu Sep 21 22:01:46 2017
Media Recovery Waiting for thread 2 sequence 8082
Thu Sep 21 22:04:25 2017
RFS[768]: Assigned to RFS process (PID:10865)
RFS[768]: Selected log 9 for thread 1 sequence 24150 dbid 975565660 branch 934818782
Thu Sep 21 22:04:27 2017
Archived Log entry 9776 added for thread 1 sequence 24150 ID 0x3a25ee59 dest 1:
RFS[768]: Selected log 9 for thread 1 sequence 24151 dbid 975565660 branch 934818782
Thu Sep 21 22:06:56 2017
Archived Log entry 9777 added for thread 1 sequence 24151 ID 0x3a25ee59 dest 1:
Thu Sep 21 22:06:56 2017
RFS[769]: Assigned to RFS process (PID:11128)
RFS[769]: Selected log 12 for thread 2 sequence 8082 dbid 975565660 branch 934818782
Thu Sep 21 22:07:01 2017
Recovery of Online Redo Log: Thread 2 Group 12 Seq 8082 Reading mem 0
Mem# 0: +FRA_DG/IWMSDR/ONLINELOG/group_12.821.949601071
Thu Sep 21 22:07:01 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24149.1489.955317705
Thu Sep 21 22:07:01 2017
Archived Log entry 9778 added for thread 2 sequence 8082 ID 0x3a25ee59 dest 1:
Thu Sep 21 22:07:02 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24150.1490.955317867
Thu Sep 21 22:07:04 2017
Media Recovery Log +FRA_DG/IWMSDR/ARCHIVELOG/2017_09_21/thread_1_seq_24151.1491.955318015
Media Recovery Waiting for thread 1 sequence 24152
oracle@d1asedrcb024.cbp.dc1.dhs.gov[IWMSDR]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 21 22:49:12 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
Since this is the DR side of the CBP cluster, the database is not mounted (the instance is IDLE) but the resource is STABLE; ora.FRA_DG.dg is OFFLINE, so the listener does not have all endpoints registered (INTERMEDIATE), but that is OK => STABLE
[9/21/2017 10:41 PM] Chando, Kenneth (CTR):
Hi Tony, the Diskmon and DB is offline for d1asedrcb024 (IWMSDR)....
[9/21/2017 10:41 PM] Andrews III, Anthony (CTR):
that's fine
[9/21/2017 10:02 PM] Bowers, Bryan (CTR):
Hey Kenneth, could you email the EOC or I.M when you are finished with your task for RFC 43079? thanks!
[9/21/2017 10:02 PM] Chando, Kenneth (CTR):
I'm done with RFC 43079 DB verification for: D2ASEPRNP012. All looks good
[9/21/2017 10:04 PM] Bowers, Bryan (CTR):
Sounds great!!
[9/21/2017 10:25 PM] Bowers, Bryan (CTR):
Also could you I.M me when you finish your task for RFC 42615
[9/21/2017 10:55 PM] Chando, Kenneth (CTR):
For RFC 42615, the DB Check-out for d2aseprcb029,d2aclprcb026/d2aclprcb027 and d1asedrcb024 are completed. All looks good.
%%%%%%%%%%%%%% KNOW how to FAIL OVER a RAC database using the Oracle command line %%%%%%%%%%%%%%%% e.g. from node 106 to node 107 and vice versa (107 back to 106) -- see the srvctl sketch after the crsctl output below %%%%%%
[kenneth.chando@d2iclprhq106 ~]$ sudo su - oracle
oracle@d2iclprhq106[IDMP1]# cat /etc/oratab
#Backup file is /u01/app/oracle/product/12.1.0/srvm/admin/oratab.bak.d2iclprhq106 line added by Agent
#
%% BRUCE CBP DR (CBP IWMSDR) %%%%%%%%%%%%%%% Bruce said ASM was restarted and the disks got mounted for the DR issue above; that fixed it %%%%%%%%%%%%
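A hedged sketch of what that fix amounts to on the DR host (Oracle Restart; resource names taken from the d1asedrcb024 output above, grid environment assumed to be set):
$ srvctl stop asm -f                               # bounce ASM if it is wedged
$ srvctl start asm
$ srvctl start diskgroup -diskgroup DATA_DG
$ srvctl start diskgroup -diskgroup FRA_DG
$ crsctl stat res -t                               # ora.DATA_DG.dg / ora.FRA_DG.dg should now show ONLINE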
[9/18/2017 10:56 AM] Franklin, Bruce (CTR):
Ken, you have 2 DB verification tasks for Thursday evening, both for RHEL ISVM patching
[9/18/2017 10:56 AM] Chando, Kenneth (CTR):
ok Bruce. Thanks!
43019
[9/20/2017 2:52 PM] Franklin, Bruce (CTR):
Mr. Chando, good afternoon
[9/20/2017 2:52 PM] Chando, Kenneth (CTR):
good afternoon Bruce
[9/20/2017 2:53 PM] Franklin, Bruce (CTR):
just want to remind you that the on-call has mandatory attendance for the Problem Mgmt Call
[9/20/2017 2:53 PM] Chando, Kenneth (CTR):
yep, I'm about to dial-in
[9/20/2017 2:53 PM] Franklin, Bruce (CTR):
even though we don't have any open problems
ok
thanks Ken
[9/20/2017 2:53 PM] Chando, Kenneth (CTR):
YW!!!
[9/20/2017 2:53 PM] Franklin, Bruce (CTR):
and try not to snore while napping on the call
[9/20/2017 2:54 PM] Chando, Kenneth (CTR):
hahaha...:)
[9/20/2017 2:54 PM] Franklin, Bruce (CTR):
it is pretty boring
[9/20/2017 2:54 PM] Chando, Kenneth (CTR):
I will...:)
[9/20/2017 2:54 PM] Franklin, Bruce (CTR):
:)
[9/22/2017 8:08 AM] Chando, Kenneth (CTR):
GM Bruce
[9/22/2017 8:08 AM] Franklin, Bruce (CTR):
yes Ken
[9/22/2017 8:09 AM] Chando, Kenneth (CTR):
Just wanted to say the DB checkout went out good yesterday after the Unix patching
[9/22/2017 8:09 AM] Franklin, Bruce (CTR):
except DR is down
disks are not online
on the phone with Chris Bishop
[9/22/2017 8:09 AM] Chando, Kenneth (CTR):
I realized that the IWMSDR database is idle and DBMon status is offline on DR server: d1asedrcb024
Since it was DR, I wanted to reach out to you first. It should be started?
[9/22/2017 8:11 AM] Franklin, Bruce (CTR):
don't touch it
[9/22/2017 8:11 AM] Chando, Kenneth (CTR):
last night, I did point it out to Tony and he said it was ok
ok
[9/22/2017 8:12 AM] Franklin, Bruce (CTR):
as i said i am working with Chris Bishop to resolve
[9/22/2017 9:00 AM] Franklin, Bruce (CTR):
issue resolved; DR is back online
[9/22/2017 9:01 AM] Chando, Kenneth (CTR):
thanks for this update. How was it resolved if you don't mind...?
[9/22/2017 9:03 AM] Franklin, Bruce (CTR):
had to restart asm and mount the disks
[9/22/2017 9:05 AM] Franklin, Bruce (CTR):
if this happens in the future, please call me
don't take the word of the UNIX Admin that everything is ok; they are not DBAs
[9/22/2017 9:06 AM] Chando, Kenneth (CTR):
ok. Just to build my knowledge, did Unix team try to restart the cluster stack or it was something we had to fix from our end? ASM was online but the disks were not mounted.
this was after they did their patching and rebooted the server...
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/u01/app/12.1.0/grid:N # line added by Agent
IDMP:/u01/app/oracle/product/12.1.0:N # line added by Agent
oracle@d2iclprhq106[IDMP1]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
ONLINE ONLINE d2iclprhq106 STABLE
ora.FRADG.dg
ONLINE ONLINE d2iclprhq106 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE d2iclprhq106 STABLE
ora.LISTENER2.lsnr
ONLINE ONLINE d2iclprhq106 STABLE
ora.OCRDG.dg
ONLINE ONLINE d2iclprhq106 STABLE
ora.asm
ONLINE ONLINE d2iclprhq106 Started,STABLE
ora.net1.network
ONLINE ONLINE d2iclprhq106 STABLE
ora.net2.network
ONLINE ONLINE d2iclprhq106 STABLE
ora.ons
ONLINE ONLINE d2iclprhq106 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.d2iclprhq106.vip
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.d2iclprhq106_2.vip
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.d2iclprhq107.vip
1 ONLINE INTERMEDIATE d2iclprhq106 FAILED OVER,STABLE
ora.d2iclprhq107_2.vip
1 ONLINE INTERMEDIATE d2iclprhq106 FAILED OVER,STABLE
ora.idmp.db
1 ONLINE ONLINE d2iclprhq106 Open,STABLE
2 ONLINE OFFLINE STABLE
ora.oc4j
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.scan1.vip
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.scan2.vip
1 ONLINE ONLINE d2iclprhq106 STABLE
ora.scan3.vip
1 ONLINE ONLINE d2iclprhq106 STABLE
--------------------------------------------------------------------------------
oracle@d2iclprhq106[IDMP1]#
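A minimal sketch of the fail-over / fail-back itself with srvctl (database and instance names from the output above; APP_SVC is a placeholder service name, not taken from these notes):
$ srvctl status database -d IDMP                           # IDMP1 runs on 106, IDMP2 on 107
$ srvctl relocate service -d IDMP -s APP_SVC -i IDMP1 -t IDMP2   # move a service from 106 to 107
$ srvctl stop instance -d IDMP -i IDMP1                    # or stop the 106 instance so sessions fail over to 107
$ srvctl start instance -d IDMP -i IDMP1                   # fail back: bring 106 up again (swap -i/-t to move the service back)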
%%%%%%%%%%%%% DB Link script%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oracle@d2iclprhq106[IDMP1]# vi sh_db_links.sql
col owner format a15
col link format a35
col userid format a10
col password format a20
col connection format a15
set pages 9999 lines 120
SELECT u.name owner, l.name link, l.userid, l.password, l.host connection
FROM sys.link$ l, sys.user$ u
WHERE l.owner# = u.user#
ORDER BY u.name;
"sh_db_links.sql" [dos] 13L, 300C
-------------------------------------Cron job -------------------------------------------
oracle@D2CSEVPHQ004[EAIRP]# crontab -l
# auditing EAIRP
0 10 * * 1-6 /u01/app/oracle/scripts/audit/archive_audit.sh EAIRP > /u01/app/oracle/scripts/audit/logs/archive_audit_EAIRP.log 2>&1
0 * * * 1-6 /u01/app/oracle/scripts/audit/hourly_archive_audit.sh EAIRP > /u01/app/oracle/scripts/audit/logs/hourly_archive_audit_EAIRP.log 2>&1
0 21 * * 6 /u01/app/oracle/scripts/do_cleanup.sh > /u01/app/oracle/scripts/do_cleanup.log 2>&1
0 22 * * 6 /u01/app/oracle/scripts/audit/purge_audit.sh EAIRP > /u01/app/oracle/scripts/audit/logs/purge_audit_EAIRP.log 2>&1
0 23 * * 1-6 /u01/app/oracle/scripts/rmanbackup_EAIRP_disk.sh > /u01/app/oracle/scripts/rmanbackup_EAIRP_disk.log 2>&1
0 00 * * 7 /u01/app/oracle/scripts/delete_archlogs.sh EAIRP > /u01/app/oracle/scripts/delete_archlogs.log 2>&1
#30 20 04 04 2 /u01/app/oracle/scripts/cr_restpnt_wo3689914.sh > /u01/app/oracle/scripts/cr_restpnt_wo3689914.out 2>&1
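For reference, the five schedule fields are minute, hour, day-of-month, month, day-of-week; reading two of the entries above:
# 0 23 * * 1-6   => 23:00 Monday through Saturday (the nightly RMAN backup)
# 0 00 * * 7     => midnight on Sunday (archive log cleanup; 0 or 7 both mean Sunday on Linux cron)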
vi /u01/app/oracle/scripts/cr_restpnt_wo3689914.sh
#!/bin/ksh
#cr_restpnt_wo368914.sh
# create restore point for work order 368914
# Created by: Kenneth Chando
# Last update: 03/30/2017
. $HOME/.profile
umask 022
export ORAENV_ASK=NO
export ORACLE_SID=EAIRP
cd /u01/app/oracle/scripts
sqlplus -s '/ as sysdba' <<EOF
set echo off
set feedback off
set pagesize 0
set head off
set veri off
set lines 120
spool cr_restpnt_wo368914.log
CREATE RESTORE POINT wo368914 GUARANTEE FLASHBACK DATABASE;
spool off
exit
EOF
"/u01/app/oracle/scripts/cr_restpnt_wo3689914.sh" 31L, 468C
--------------------------------------------------------------Scp'ing--------------------------
CPU/OJVM
scp /u01/app/oracle/patches
OEM
EAIR(T)
[OMER] scp p22342358_121050_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/oem_agent/patches
OJVM=>oracle@d2aclprsh154[D2GSSP1]# scp p24917972_121020_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/patches/.
scp p22342358_121050_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/oem_agent/patches
EAIR(P)
[OMER] scp /u01/app/oracle/patches
[BRUCE] scp p22291127_121020_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/product/12.1.0/patches/spuapr2016
OPatch
(BASST)=>scp p6880880_111000_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/oem_agent/patches/.
oracle@d2aclprsh154[D2GSSP1]# scp p24917972_121020_Linux-x86-64.zip oracle@10.238.125.136:/u01/app/oracle/patches/.
--------------------------------------------------GOOD ONES below ----------------------------------------------------------------------------------
CPU
===
scp p24732082_121020_Linux-x86-64.zip oracle@10.238.125.136:/u01/app/oracle/patches/.
CPU(EAIRT)=>oracle@d2aclprsh154[D2GSSP1]# scp p24732082_121020_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/patches/.
CPU(EAIRP)=>scp p24732082_121020_Linux-x86-64.zip oracle@10.238.125.73:/u01/app/oracle/patches/.
CPU(BASSD)=>oracle@d2aclprsh154[D2GSSP1]# scp p24732082_121020_Linux-x86-64.zip oracle@10.232.11.38:/u01/app/oracle/patches/.
CPU(BASSP)=>oracle@d2aclprsh154[D2GSSP1]# scp p24732082_121020_Linux-x86-64.zip oracle@10.232.10.102:/u01/app/oracle/patches/.
CPU(BASST)=>oracle@d2aclprsh154[D2GSSP1]# scp p24732082_121020_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/patches/.
OJVM
=====
scp p24917972_121020_Linux-x86-64.zip oracle@10.232.11.38:/u01/app/oracle/patches/.
OJVM(BASSD)=>oracle@d2aclprsh154[D2GSSP1]# scp p24917972_121020_Linux-x86-64.zip oracle@10.232.11.38:/u01/app/oracle/patches/.
OJVM(BASSP)=>oracle@d2aclprsh154[D2GSSP1]# scp p24917972_121020_Linux-x86-64.zip oracle@10.232.10.102:/u01/app/oracle/patches/.
OJVM(BASST)=>oracle@d2aclprsh154[D2GSSP1]# scp p24917972_121020_Linux-x86-64.zip oracle@10.232.139.38:/u01/app/oracle/patches/.
OEM
=====
OPatch(All)=> 1st: Rename OPatch in /u01/app/oracle/product/12.1.0 to "OPatch_old1", then do:
oracle@d2aclprsh154[D2GSSP1]#scp p6880880_111000_Linux-x86-64.zip oracle@servername:/u01/app/oracle/oem_agent/patches
[OEM GENERIC]=>oracle@d2aclprsh154[D2GSSP1]#scp p25104978_121050_Generic.zip oracle@servername:/u01/app/oracle/oem_agent/patches
[OEM EAIRT]=>scp p25104978_121050_Generic.zip oracle@10.238.125.136:/u01/app/oracle/oem_agent/patches
[OEM EAIRP]=>oracle@d2aclprsh154[D2GSSP1]# scp p25104978_121050_Generic.zip oracle@10.238.125.73:/u01/app/oracle/oem_agent/patches
[OEM BASSD]=>oracle@d2aclprsh154[D2GSSP1]# scp p25104978_121050_Generic.zip oracle@10.232.11.38:/u01/app/oracle/oem_agent/patches
[OEM BASSP]=>oracle@d2aclprsh154[D2GSSP1]# scp p25104978_121050_Generic.zip oracle@10.232.10.102:/u01/app/oracle/oem_agent/patches
[OEM BASST]=>oracle@d2aclprsh154[D2GSSP1]# scp p25104978_121050_Generic.zip oracle@10.232.139.38:/u01/app/oracle/oem_agent/patches
-----------------------------------------------------------FLASHBACK database--------------------
=============OMER ============================== restore EAIRP database to 2:00 PM (14:00 EST – 19:00 UTC) on 2/1/2017 ====
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS');
ALTER DATABASE OPEN RESETLOGS;
============ APP DBA mistakenly loaded data into PROD instead of the TEST database ================================== SUCCESSFUL IMPLEMENTATION BELOW => remote connection, OPEN mode, and listener are all up on EAIRP after the FLASHBACK task completed =======
SQL> FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL> select name from v$database;
NAME
---------
EAIRP
========================================================================================================================================================
To roll back individual tables with FLASHBACK TABLE, row movement must be enabled on those tables. Check it using:
sql> select owner, table_name, row_movement from dba_tables; (If ENABLED => flashback of the table can work. If DISABLED, it can't => the next option is FLASHBACK DATABASE to a point in time.)
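A short sketch of the table-level option, using one of the EA tables checked further down as an example (the timestamp here is just a placeholder):
SQL> alter table EA.EA_TRM_INSERTIONS enable row movement;
SQL> flashback table EA.EA_TRM_INSERTIONS to timestamp to_timestamp('2016-12-14 17:00:00','YYYY-MM-DD HH24:MI:SS');
SQL> select count(*) from EA.EA_TRM_INSERTIONS;    -- compare against the baseline counts taken earlier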
=========== CHECK your nls time format ========= => Server Time = UTC =>UTC-5 = EST ===========================
SQL> select sysdate from dual;
SYSDATE
---------
01-FEB-17
=============================== FLASHBACK database to Time ====================================================
shutdown immediate;
startup mount;
flashback database to timestamp TO_TIMESTAMP('2017-01-17 19:40:23', 'YYYY-MM-DD HH24:MI:SS');   -- timestamp is server time (UTC); UTC-5 = EST
============================================== 4m OMER =========================================================================
FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2015-08-11 01:00:00', 'YYYY-MM-DD HH24:MI:SS'); => whatever EST time he gives, add 5 hours to it (server time is UTC)
============ OMER's GUIDE to FLASHBACK ==========================================================================
*** login to database
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
*** In another session, Login to d2iclprhq117
SQL> SHUTDOWN IMMEDIATE
*** Go back to d2iclprhq116 session and execute the following commands
SQL> FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2015-08-11 01:00:00', 'YYYY-MM-DD HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;
============================================================================================================================== SUCCESSFULLY RAN ======================================================
FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS');
===================================== EXACTLY RAN:
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> SHUTDOWN IMMEDIATE;
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3651
Additional information: -1604062721
Process ID: 0
Session ID: 0 Serial number: 0
SQL> startup mount;
ORACLE instance started.
Total System Global Area 645922816 bytes
Fixed Size 2927720 bytes
Variable Size 444597144 bytes
Database Buffers 192937984 bytes
Redo Buffers 5459968 bytes
Database mounted.
SQL> FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"; FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
*
ERROR at line 1:
ORA-38724: Invalid option to the FLASHBACK DATABASE command.
SQL> FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"; FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
*
ERROR at line 1:
ORA-38724: Invalid option to the FLASHBACK DATABASE command.
SQL> ed
Wrote file afiedt.buf
1* FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
SQL> /
FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
*
ERROR at line 1:
ORA-38724: Invalid option to the FLASHBACK DATABASE command.
SQL> ed
Wrote file afiedt.buf
1* FLASHBACK DATABASE TO TIME TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')
SQL> /
FLASHBACK DATABASE TO TIME TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')
*
ERROR at line 1:
ORA-38724: Invalid option to the FLASHBACK DATABASE command.
SQL> ed
Wrote file afiedt.buf
1* FLASHBACK DATABASE to time "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
SQL> /
FLASHBACK DATABASE to time "TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS')"
*
ERROR at line 1:
ORA-38724: Invalid option to the FLASHBACK DATABASE command.
SQL> FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2017-02-01 19:00:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
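Lesson from the attempts above: this release rejects FLASHBACK DATABASE TO TIME (ORA-38724); the clauses that do work are TO TIMESTAMP, TO SCN and TO RESTORE POINT, e.g.:
SQL> flashback database to timestamp to_timestamp('2017-02-01 19:00:00','YYYY-MM-DD HH24:MI:SS');
SQL> flashback database to scn 828549582;                 -- example value only; use the real target SCN
SQL> flashback database to restore point wo368914;        -- restore point name as an example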
---------------FLASHBACK TABLE----------------------------------------------------
[kenneth.chando@D2CSEVPHQ004 ~]$ sudo su - oracle
oracle@D2CSEVPHQ004[EAIRP]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Fri Dec 16 17:11:57 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 77250
Next log sequence to archive 77252
Current log sequence 77252
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES
SQL>
&&&&&&&&&& Checking the TABLES and counting their ROWS &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
oracle@D2CSEVPHQ004[EAIRP]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Fri Dec 16 17:11:57 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 77250
Next log sequence to archive 77252
Current log sequence 77252
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES
SQL> select count(*) from EA.EA_TRM_PRODUCT_IMPLEMENTATION;
COUNT(*)
----------
3293
SQL> select count(*) from EA.PRODUCT_VERSION;
COUNT(*)
----------
4447
SQL> select count(*) from EA.EA_TRM_HARDWARE_MODEL;
COUNT(*)
----------
321
SQL> select count(*) from EA.EA_TRM_MANUFACTURER;
COUNT(*)
----------
982
SQL> select count(*) from EA.EA_TRM_PRODUCT;
COUNT(*)
----------
3806
SQL> select count(*) from EA.EA_TRM_PRODUCT_INSERTION_MAP;
COUNT(*)
----------
3292
SQL> select count(*) from EA.EA_TRM_INSERTIONS;
COUNT(*)
----------
1284
SQL>
&&&&&&&&&&&&&&&&&& Getting Time and Oldest SCN, flashback time &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
SQL> select oldest_flashback_scn as old_scn,to_char(oldest_flashback_time, 'dd-mm-yyyy:hh24:mi:ss') as old_time,retention_target,estimated_flashback_size,flashback_size from v$flashback_database_log;
=================================================================
SQL> select oldest_flashback_scn as old_scn,to_char(oldest_flashback_time, 'dd-mm-yyyy:hh24:mi:ss') as old_time,retention_target,estimated_flashback_size,flashback_size from v$flashback_database_log;
OLD_SCN OLD_TIME RETENTION_TARGET ESTIMATED_FLASHBACK_SIZE
---------- ------------------- ---------------- ------------------------
FLASHBACK_SIZE
--------------
827351491 14-12-2016:16:56:11 1440 4837933056
6815744000
==============================================================================
SQL> define _editor=vi
SQL> set linesize 250 pagesize 2000
SQL> /
OLD_SCN OLD_TIME RETENTION_TARGET ESTIMATED_FLASHBACK_SIZE FLASHBACK_SIZE
---------- ------------------- ---------------- ------------------------ --------------
827351491 14-12-2016:16:56:11 1440 4837687296 6815744000
&&&&&&&&&&&&& DESCRIBING last TABLE above to see the COLUMNS/structure &&&&&&&&&&&&&&&&&&&
SQL> desc EA.EA_TRM_INSERTIONS
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------- -------- ------------------------------------------------------------------------------------------------
INSERTION_ID NOT NULL NUMBER
INSERTION_NAME VARCHAR2(4000)
COMMENTS VARCHAR2(4000)
CREATED_BY VARCHAR2(4000)
CREATED_DATETIME DATE
UPDATED_BY VARCHAR2(4000)
UPDATED_DATETIME DATE
DR_ID NUMBER
URL VARCHAR2(4000)
AUDIT_TYPE_ID NUMBER
==========================
SQL> select to_char(sysdate,'dd-mm-yyyy:hh24:mi:ss') from dual;
TO_CHAR(SYSDATE,'DD
-------------------
16-12-2016:17:35:31
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
828549582
&&&&&&& VERY IMPORTANT NOTES BELOW &&&&&&&&&&&&& (after the last step above) &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
SQL> startup mount => flashback database to scn <target SCN> => alter database open resetlogs; then verify: select count(*) from the last table above (the row count should differ from the current count, reflecting the table's state at that SCN)
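Spelled out, the flashback-database sequence in the note above looks like the sketch below; the SCN is a placeholder (substitute the target SCN captured earlier from v$database), and re-counting one of the tables afterwards confirms the rewind:
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to scn &target_scn;
SQL> alter database open resetlogs;
SQL> select count(*) from EA.EA_TRM_INSERTIONS;    -- compare against the pre-flashback count (1284 above)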
SQL> select value from v$parameter where name='recyclebin';
VALUE
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
on
&&&&& OTHER OPTION to flashback below &&&&&&&&&&&&&&&&&&&&&&&&&&&&
SQL> drop table t1_old; then: flashback table t1_old to before drop;
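With the recyclebin on (verified just above), a dropped table can be brought back without a database-wide flashback; a short sketch using the hypothetical table t1_old:
SQL> drop table t1_old;
SQL> show recyclebin
SQL> flashback table t1_old to before drop;
SQL> select count(*) from t1_old;    -- confirm the rows are back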
-------------------------FLASHBACK Log views ----------------------------------------------
Check Flashback logs view(script)
================================
select log# as "Log No", thread# as "Thread No",sequence# as "Sequence No", name, bytes/1024/1024 as "Size GB",first_change# as "First Chg No",first_time from
v$flashback_database_logfile order by first_time;
SQL>
----------------------------RMAN backup script-----------------------------------------------------
#!/bin/ksh
# . /home/oracle/.profile   (alternative profile path, commented out)
. /u01/app/oracle/home/.profile
ORACLE_SID=IWMSD
export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/12.1.0
export ORACLE_HOME
/u01/app/oracle/product/12.1.0/bin/rman target / log=/u01/app/oracle/scripts/rmanbackup_IWMSD_disk.log << EOF
DELETE NOPROMPT BACKUPSET;
backup as compressed backupset device type disk format '/u01/oradata/backup/IWMSD/db_%d_%I_%s_%p.bkup' tag daily_backup database;
backup device type disk format '/u01/oradata/backup/IWMSD/cf_%d_%u.bkup' tag weekly_backup current controlfile;
allocate channel for maintenance type disk;
release channel;
EXIT;
EOF
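One way to schedule this backup script is via cron; a possible crontab entry (script path, name, and run time are assumptions, not taken from this environment):
00 23 * * * /u01/app/oracle/scripts/rman_backup_IWMSD.ksh > /u01/app/oracle/scripts/rman_backup_IWMSD.out 2>&1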
=================== KEN's modified script to include compression on EAIRT =============================
backup as compressed backupset device type disk format '/u01/oradata/backup/EAIRT/db_%d_%I_%s_%p.bkup' tag daily_backup database;
--------------------------------------Database Growth Script-----------------------------------------------------------
create or replace procedure SYS.db_space_hist_proc as
begin
-- Delete old records...
delete from db_space_hist where timestamp < SYSDATE - 364;
-- Insert current utilization values...
insert into db_space_hist
select sysdate, total_space,
total_space-nvl(free_space,0) used_space,
nvl(free_space,0) free_space,
((total_space - nvl(free_space,0)) / total_space)*100 pct_num_db_files
from (select sum(bytes)/1024/1024 free_space
from sys.DBA_FREE_SPACE ) FREE,
(select sum(bytes)/1024/1024 total_space,
count(*) num_db_files
from sys.DBA_DATA_FILES) FULL;
commit;
end;
/
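The procedure assumes a db_space_hist history table already exists and that something runs it daily; a minimal sketch of both (column names inferred from the INSERT above; the job name and schedule are assumptions):
create table db_space_hist (
  timestamp         date,
  total_space       number,
  used_space        number,
  free_space        number,
  pct_num_db_files  number);

begin
  dbms_scheduler.create_job(
    job_name        => 'DB_SPACE_HIST_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYS.DB_SPACE_HIST_PROC',
    repeat_interval => 'FREQ=DAILY;BYHOUR=0',
    enabled         => TRUE);
end;
/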
RESULTS
------
SQL> select sysdate, total_space,
            total_space-nvl(free_space,0) used_space,
            nvl(free_space,0) free_space,
            ((total_space - nvl(free_space,0)) / total_space)*100 pct_num_db_files
     from (select sum(bytes)/1024/1024 free_space
             from sys.DBA_FREE_SPACE ) FREE,
          (select sum(bytes)/1024/1024 total_space,
                  count(*) num_db_files
             from sys.DBA_DATA_FILES) FULL;
SYSDATE TOTAL_SPACE USED_SPACE FREE_SPACE PCT_NUM_DB_FILES
--------- ----------- ---------- ---------- ----------------
11-SEP-16 60008.0625 31555.1875 28452.875 52.5849131
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
OR TRY
=====
COLUMN month FORMAT a75
COLUMN growth FORMAT 999,999,999,999,999
SELECT
TO_CHAR(creation_time,'RRRR-MM') "Month",
SUM(bytes/1024/1024) "growth in MB"
FROM sys.v_$datafile
GROUP BY TO_CHAR(creation_time,'RRRR-MM')
ORDER BY TO_CHAR(creation_time,'RRRR-MM');
RESULTS
-------
SQL> SELECT
       TO_CHAR(creation_time,'RRRR-MM') "Month",
       SUM(bytes/1024/1024) "growth in MB"
     FROM sys.v_$datafile
     GROUP BY TO_CHAR(creation_time,'RRRR-MM')
     ORDER BY TO_CHAR(creation_time,'RRRR-MM');
Month growth in MB
--------------------------------------------------------------------------- ------------
2011-09 18618.75
2013-08 41389.3125
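If a running total is wanted rather than per-month growth, the same grouping can feed an analytic SUM; a sketch (not run in this environment):
SELECT TO_CHAR(creation_time,'RRRR-MM') "Month",
       SUM(SUM(bytes/1024/1024)) OVER (ORDER BY TO_CHAR(creation_time,'RRRR-MM')) "cumulative MB"
FROM   sys.v_$datafile
GROUP  BY TO_CHAR(creation_time,'RRRR-MM')
ORDER  BY TO_CHAR(creation_time,'RRRR-MM');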
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXTRACTING data from 2 years ago (use TO_CHAR on the date)
-----------------------------------------------------
SQL> select (to_char(sysdate,'YYYY'))-2 from dual;
RESULTS
-------
(TO_CHAR(SYSDATE,'YYYY'))-2
---------------------------
2014
===================================================
select To_Char(stage_end_date,'yy') Years,
SUBPRODUCT product,
sum(OFFER_COUNT) SumCount,
sum(offer_amount)SumAmount
from stage_amt
where offer_amount !=0
and to_number(To_char(stage_end_date,'YYYY')) between
to_number(To_char(sysdate,'YYYY'))-2 and to_number(to_char(sysdate,'YYYY'))
group by to_char(stage_end_date,'yy'),
SUBPRODUCT
order by years asc;
RESULTS
-------
SQL> select To_Char(stage_end_date,'yy') Years,
            SUBPRODUCT product,
            sum(OFFER_COUNT) SumCount,
            sum(offer_amount) SumAmount
     from stage_amt
     where offer_amount !=0
     and to_number(To_char(stage_end_date,'YYYY')) between
         to_number(To_char(sysdate,'YYYY'))-2 and to_number(to_char(sysdate,'YYYY'))
     group by to_char(stage_end_date,'yy'),
              SUBPRODUCT
     order by years asc;
from stage_amt
*
ERROR at line 5:
ORA-00942: table or view does not exist
======================================================================================================================================================================
COLUMN month FORMAT a75
COLUMN growth FORMAT 999,999,999,999,999
SELECT
TO_CHAR(creation_time,'RRRR-MM') "Month",
SUM(bytes/1024/1024) "growth in MB"
FROM sys.v_$datafile
where TO_NUMBER(TO_CHAR(creation_time,'YYYY'))
      between TO_NUMBER(TO_CHAR(sysdate,'YYYY'))-2 and TO_NUMBER(TO_CHAR(sysdate,'YYYY'))
GROUP BY TO_CHAR(creation_time,'RRRR-MM')
ORDER BY TO_CHAR(creation_time,'RRRR-MM');
---------------------------------------------------------------Calculate Database uptime script-----------------------------------------------
1. Calculate Database Uptime
------------------------
SQL> set linesize 1000 pagesize 2000
SQL> define _editor=vi
select host_name,instance_name,TO_CHAR(startup_time,'DD-MM-YYYY HH24:mi:ss') startup_time,FLOOR(sysdate-startup_time) days from sys.v_$instance;
------------------------------
SQL> ed
Wrote file afiedt.buf
1 select host_name,instance_name,TO_CHAR(startup_time,'DD-MM-YYYY HH24:mi:ss') startup_time,FLOOR(sysdate-startup_time) days
2* from sys.v_$instance
SQL> /
HOST_NAME INSTANCE_NAME STARTUP_TIME DAYS
---------------------------------------------------------------- ---------------- ------------------- ----------
D2CSEVNHQ004 EAIRT 03-01-2017 19:37:16 9
=> The EAIRT database has been up for 9 days
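For uptime finer than whole days, the same v$instance query can be extended; a sketch (not run here):
select host_name, instance_name,
       to_char(startup_time,'DD-MM-YYYY HH24:mi:ss') startup_time,
       floor(sysdate-startup_time) days,
       floor(mod((sysdate-startup_time)*24,24)) hours,
       floor(mod((sysdate-startup_time)*24*60,60)) minutes
from   sys.v_$instance;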
------------------------------------------ RAC tablespace ----------------------------------------
SQL> @sh_tsdf.sql
January 14, 2017 Datafiles used by IDMP database
===================================
Size Used Aut
File Name Tablespace (Mb) (in Mb) Used % Xtn Status
------------------------------------------------------- --------------- ---------- ---------- ------- --- ----------
+DATADG/idmp/datafile/ais_dat.308.783451743 AIS_DAT 641.13 582.63 90.88 YES ONLINE
+DATADG/idmp/datafile/ais_idx.271.783452221 AIS_IDX 500.00 1.00 0.20 YES ONLINE
+DATADG/idmp/tempfile/ais_tmp.309.783451835 AIS_TMP 100.00 7.00 7.00 YES ONLINE
+DATADG/idmp/datafile/bi_biplatform.327.784093483 BI_BIPLATFORM 64.00 2.00 3.13 YES ONLINE
+DATADG/idmp/tempfile/bi_ias_temp.285.784093485 BI_IAS_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/bi_mds.272.784093481 BI_MDS 100.00 5.06 5.06 YES ONLINE
+DATADG/idmp/tempfile/dba_temp.345.900620195 DBA_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/dba_test.346.900626363 DBA_TEST 10.00 1.00 10.00 YES ONLINE
+DATADG/idmp/datafile/dev_apm.324.783300145 DEV_APM 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/tempfile/dev_apm_temp.315.783300155 DEV_APM_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/dev_ias_oif.298.783300147 DEV_IAS_OIF 60.00 1.00 1.67 YES ONLINE
+DATADG/idmp/datafile/dev_oim.283.783300161 DEV_OIM 150.00 110.38 73.58 YES ONLINE
+DATADG/idmp/datafile/dev_oim_lob.319.783300153 DEV_OIM_LOB 500.00 26.31 5.26 YES ONLINE
+DATADG/idmp/tempfile/dev_oim_temp.294.783300163 DEV_OIM_TEMP 100.00 5.00 5.00 YES ONLINE
+DATADG/idmp/datafile/hsindb_data.307.795904523 HSINDB_DATA 1,371.13 1,254.00 91.46 YES ONLINE
+DATADG/idmp/datafile/iamp_oim.322.783371743 IAMP_OIM 150.00 1.00 0.67 YES ONLINE
+DATADG/idmp/datafile/iamp_oim_lob.289.783371733 IAMP_OIM_LOB 500.00 1.00 0.20 YES ONLINE
+DATADG/idmp/datafile/iamr2_ess.295.890669851 IAMR2_ESS 100.00 1.13 1.13 YES ONLINE
+DATADG/idmp/datafile/iamr2_hsin_report.348.902173881 IAMR2_HSIN_REPO 4,309.81 4,104.56 95.24 YES ONLINE
RT
+DATADG/idmp/tempfile/iamr2_hsin_report_temp.347.902173 IAMR2_HSIN_REPO 150.00 104.00 69.33 YES ONLINE
867 RT_TEMP
+DATADG/IDMP/DATAFILE/iamr2_ias_iau.359.922713039 IAMR2_IAS_IAU 30,720.00 30,668.06 99.83 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_ias_iau.355.917876009 IAMR2_IAS_IAU 30,720.00 30,671.13 99.84 YES ONLINE
+DATADG/idmp/datafile/iamr2_ias_iau.320.890669859 IAMR2_IAS_IAU 32,760.00 32,700.13 99.82 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_ias_iau.363.930076075 IAMR2_IAS_IAU 30,720.00 30,671.44 99.84 YES ONLINE
+DATADG/idmp/datafile/iamr2_ias_iau.350.906775485 IAMR2_IAS_IAU 30,720.00 30,662.25 99.81 YES ONLINE
+DATADG/idmp/datafile/iamr2_ias_iauoes.338.890679821 IAMR2_IAS_IAUOE 60.00 1.19 1.98 YES ONLINE
S
+DATADG/idmp/datafile/iamr2_ias_oif.334.890669863 IAMR2_IAS_OIF 60.00 1.00 1.67 YES ONLINE
+DATADG/idmp/datafile/iamr2_ias_opss.275.890669853 IAMR2_IAS_OPSS 170.00 136.94 80.55 YES ONLINE
+DATADG/idmp/datafile/iamr2_ias_orasdpm.314.890669619 IAMR2_IAS_ORASD 300.00 106.63 35.54 YES ONLINE
PM
+DATADG/idmp/tempfile/iamr2_ias_temp.274.890669849 IAMR2_IAS_TEMP 100.00 13.00 13.00 YES ONLINE
+DATADG/idmp/datafile/iamr2_mds.337.890679819 IAMR2_MDS 750.00 708.69 94.49 YES ONLINE
+DATADG/idmp/datafile/iamr2_oam.281.890669845 IAMR2_OAM 350.00 299.50 85.57 YES ONLINE
+DATADG/idmp/tempfile/iamr2_oam_temp.318.890669849 IAMR2_OAM_TEMP 100.00 15.00 15.00 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim.282.890669843 IAMR2_OIM 32,350.00 32,350.00 100.00 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim.349.906775027 IAMR2_OIM 9,120.00 7,442.81 81.61 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_arch_data.336.890669867 IAMR2_OIM_ARCH_ 1,024.00 1.56 0.15 YES ONLINE
DATA
+DATADG/idmp/datafile/iamr2_oim_lob.351.906776249 IAMR2_OIM_LOB 30,720.00 30,720.00 100.00 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.361.928263481 IAMR2_OIM_LOB 12,424.00 8,581.00 69.07 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.356.915028203 IAMR2_OIM_LOB 30,720.00 30,668.00 99.83 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_lob.354.913555861 IAMR2_OIM_LOB 30,720.00 30,141.00 98.12 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_lob.332.890669861 IAMR2_OIM_LOB 32,767.98 32,766.98 100.00 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.360.923345365 IAMR2_OIM_LOB 30,024.00 26,179.00 87.19 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_lob.352.906776397 IAMR2_OIM_LOB 30,720.00 30,720.00 100.00 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_lob.343.897601605 IAMR2_OIM_LOB 30,720.00 30,720.00 100.00 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.362.929127087 IAMR2_OIM_LOB 12,424.00 10,885.00 87.61 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.358.917566185 IAMR2_OIM_LOB 30,720.00 30,662.00 99.81 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.267.920562331 IAMR2_OIM_LOB 30,720.00 30,659.00 99.80 YES ONLINE
+DATADG/idmp/datafile/iamr2_oim_lob.344.900604705 IAMR2_OIM_LOB 30,720.00 30,720.00 100.00 YES ONLINE
+DATADG/IDMP/DATAFILE/iamr2_oim_lob.364.931877453 IAMR2_OIM_LOB 4,824.00 4,801.00 99.52 YES ONLINE
+DATADG/idmp/tempfile/iamr2_oim_temp.335.890669865 IAMR2_OIM_TEMP 300.00 296.00 98.67 YES ONLINE
+DATADG/idmp/datafile/iamr2_soainfra.288.890669855 IAMR2_SOAINFRA 6,198.00 5,892.69 95.07 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.280.785365405 IAM_IAS_IAU 32,760.00 32,705.00 99.83 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.273.843835757 IAM_IAS_IAU 30,720.00 24,201.94 78.78 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.301.847551441 IAM_IAS_IAU 17,924.00 11,779.13 65.72 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.266.830630011 IAM_IAS_IAU 30,720.00 25,326.00 82.44 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.311.854478931 IAM_IAS_IAU 4,124.00 1,019.00 24.71 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau.326.823436583 IAM_IAS_IAU 30,720.00 30,657.00 99.79 YES ONLINE
+DATADG/idmp/datafile/iam_ias_iau_ndx.333.842813777 IAM_IAS_IAU_NDX 500.00 1.00 0.20 YES ONLINE
+DATADG/idmp/datafile/iam_ias_oif.297.783352379 IAM_IAS_OIF 60.00 1.00 1.67 YES ONLINE
+DATADG/idmp/datafile/iam_ias_orasdpm.293.785365403 IAM_IAS_ORASDPM 300.00 1.56 0.52 YES ONLINE
+DATADG/idmp/tempfile/iam_ias_temp.321.785365415 IAM_IAS_TEMP 5,120.00 5,060.00 98.83 YES ONLINE
+DATADG/idmp/datafile/iam_mds.304.785365419 IAM_MDS 300.00 245.75 81.92 YES ONLINE
+DATADG/idmp/datafile/iam_oam.287.785365413 IAM_OAM 500.00 449.06 89.81 YES ONLINE
+DATADG/idmp/tempfile/iam_oam_temp.279.785365413 IAM_OAM_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/iam_oim.353.910203851 IAM_OIM 500.00 1.00 0.20 YES ONLINE
+DATADG/idmp/datafile/iam_oim.328.785365411 IAM_OIM 32,750.00 32,750.00 100.00 YES ONLINE
+DATADG/idmp/datafile/iam_oim.331.861018931 IAM_OIM 32,750.00 11,463.63 35.00 YES ONLINE
+DATADG/idmp/datafile/iam_oim_lob.269.785365407 IAM_OIM_LOB 1,500.00 1,241.88 82.79 YES ONLINE
+DATADG/idmp/datafile/iam_oim_ndx.270.842813773 IAM_OIM_NDX 500.00 1.00 0.20 YES ONLINE
+DATADG/idmp/tempfile/iam_oim_temp.317.785365401 IAM_OIM_TEMP 100.00 53.00 53.00 YES ONLINE
+DATADG/idmp/datafile/iam_soainfra.341.893285907 IAM_SOAINFRA 20,480.00 16,963.06 82.83 YES ONLINE
+DATADG/idmp/datafile/iam_soainfra.342.893437851 IAM_SOAINFRA 32,767.00 1,938.00 5.91 YES ONLINE
+DATADG/idmp/datafile/iam_soainfra.340.893250277 IAM_SOAINFRA 20,480.00 20,480.00 100.00 YES ONLINE
+DATADG/idmp/datafile/iam_soainfra.291.785365417 IAM_SOAINFRA 20,480.00 20,480.00 100.00 YES ONLINE
+DATADG/idmp/tempfile/ias_temp.284.783355473 IAS_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/idm_ias_oif.276.783443899 IDM_IAS_OIF 149.50 142.38 95.23 YES ONLINE
+DATADG/idmp/tempfile/idm_ias_temp.312.783443897 IDM_IAS_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/leasing.323.786913789 LEASING 32.00 1.25 3.91 YES ONLINE
+DATADG/idmp/datafile/oes_apm.278.783626103 OES_APM 100.00 9.00 9.00 YES ONLINE
+DATADG/idmp/tempfile/oes_apm_temp.300.783626099 OES_APM_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/tempfile/oes_ias_temp.292.783626101 OES_IAS_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/oes_mds.313.783626101 OES_MDS 100.00 3.13 3.13 YES ONLINE
+DATADG/idmp/datafile/olts_attrstore.316.785373701 OLTS_ATTRSTORE 346.00 297.00 85.84 YES ONLINE
+DATADG/idmp/datafile/olts_battrstore.268.785373699 OLTS_BATTRSTORE 0.98 0.48 48.80 YES ONLINE
+DATADG/idmp/datafile/olts_ct_store.310.785373703 OLTS_CT_STORE 582.00 549.31 94.38 YES ONLINE
+DATADG/idmp/datafile/olts_default.290.785373697 OLTS_DEFAULT 681.00 639.06 93.84 YES ONLINE
+DATADG/idmp/datafile/olts_svrmgstore.286.785373703 OLTS_SVRMGSTORE 11.00 4.81 43.75 YES ONLINE
+DATADG/idmp/datafile/prod_ias_iau.339.890880655 PROD_IAS_IAU 4,324.00 4,321.00 99.93 YES ONLINE
+DATADG/idmp/datafile/prod_ias_iau.261.834089553 PROD_IAS_IAU 32,767.98 32,320.73 98.64 YES ONLINE
+DATADG/idmp/datafile/prod_ias_iau.330.875215251 PROD_IAS_IAU 32,767.00 29,536.50 90.14 YES ONLINE
+DATADG/idmp/tempfile/prod_ias_temp.277.834089557 PROD_IAS_TEMP 100.00 91.00 91.00 YES ONLINE
+DATADG/idmp/datafile/prod_oam.265.834089555 PROD_OAM 200.00 161.81 80.91 YES ONLINE
+DATADG/idmp/tempfile/prod_oam_temp.262.834089551 PROD_OAM_TEMP 100.00 1.00 1.00 YES ONLINE
+DATADG/idmp/datafile/prod_tfa7.306.865875943 PROD_TFA7 1,774.38 1,689.88 95.24 YES ONLINE
+DATADG/idmp/datafile/sysaux.257.780519175 SYSAUX 3,330.00 3,065.75 92.06 YES ONLINE
+DATADG/idmp/datafile/system.256.780519173 SYSTEM 5,250.00 1,100.81 20.97 YES SYSTEM
+DATADG/idmp/tempfile/temp.263.780519333 TEMP 1,238.00 1,238.00 100.00 YES ONLINE
+DATADG/idmp/datafile/undotbs1.258.780519175 UNDOTBS1 24,615.00 1,739.00 7.06 YES ONLINE
+DATADG/idmp/datafile/undotbs2.264.780519437 UNDOTBS2 24,650.00 297.38 1.21 YES ONLINE
+DATADG/idmp/datafile/users.259.780519175 USERS 5.00 1.88 37.50 YES ONLINE
January 14, 2017 Tablespace used by db_name database
===================================
Initial Next
Extent Extent Total Size Used Free Extent
Name in (KB) in (KB) (in Mb) (in Mb) (in Mb) Used % Type Management Status
--------------- ------- ------- ---------- ---------- ---------- ------- --------- ---------- --------
IAM_IAS_IAU 64 146,968.00 125,688.06 21,279.94 85.52 PERMANENT LOCAL ONLINE
IAM_MDS 64 300.00 245.75 54.25 81.92 PERMANENT LOCAL ONLINE
OES_APM 64 100.00 9.00 91.00 9.00 PERMANENT LOCAL ONLINE
PROD_IAS_IAU 64 69,858.98 66,178.23 3,680.75 94.73 PERMANENT LOCAL ONLINE
SYSAUX 64 3,330.00 3,065.75 264.25 92.06 PERMANENT LOCAL ONLINE
UNDOTBS1 64 24,615.00 1,739.00 22,876.00 7.06 UNDO LOCAL ONLINE
DEV_OIM_LOB 64 500.00 26.31 473.69 5.26 PERMANENT LOCAL ONLINE
DEV_OIM 64 150.00 110.38 39.63 73.58 PERMANENT LOCAL ONLINE
IAMP_OIM 64 150.00 1.00 149.00 0.67 PERMANENT LOCAL ONLINE
HSINDB_DATA 64 1,371.13 1,254.00 117.13 91.46 PERMANENT LOCAL ONLINE
IAMR2_IAS_IAUOE 64 60.00 1.19 58.81 1.98 PERMANENT LOCAL ONLINE
S
IDM_IAS_OIF 64 149.50 142.38 7.13 95.23 PERMANENT LOCAL ONLINE
AIS_IDX 64 500.00 1.00 499.00 0.20 PERMANENT LOCAL ONLINE
LEASING 64 32.00 1.25 30.75 3.91 PERMANENT LOCAL ONLINE
IAM_OIM_NDX 64 500.00 1.00 499.00 0.20 PERMANENT LOCAL ONLINE
PROD_TFA7 64 1,774.38 1,689.88 84.50 95.24 PERMANENT LOCAL ONLINE
IAMR2_ESS 64 100.00 1.13 98.88 1.13 PERMANENT LOCAL ONLINE
IAMR2_IAS_OPSS 64 170.00 136.94 33.06 80.55 PERMANENT LOCAL ONLINE
IAMR2_OIM_LOB 64 338,223.98 328,222.98 10,001.00 97.04 PERMANENT LOCAL ONLINE
IAMR2_HSIN_REPO 64 4,309.81 4,104.56 205.25 95.24 PERMANENT LOCAL ONLINE
RT
USERS 64 5.00 1.88 3.13 37.50 PERMANENT LOCAL ONLINE
IAM_OIM_LOB 64 1,500.00 1,241.88 258.13 82.79 PERMANENT LOCAL ONLINE
IAM_OIM 64 66,000.00 44,214.63 21,785.38 66.99 PERMANENT LOCAL ONLINE
OLTS_SVRMGSTORE 64 11.00 4.81 6.19 43.75 PERMANENT LOCAL ONLINE
AIS_DAT 64 641.13 582.63 58.50 90.88 PERMANENT LOCAL ONLINE
OES_MDS 64 100.00 3.13 96.88 3.13 PERMANENT LOCAL ONLINE
PROD_OAM 64 200.00 161.81 38.19 80.91 PERMANENT LOCAL ONLINE
IAM_IAS_IAU_NDX 64 500.00 1.00 499.00 0.20 PERMANENT LOCAL ONLINE
IAMR2_OIM 64 41,470.00 39,792.81 1,677.19 95.96 PERMANENT LOCAL ONLINE
IAMR2_IAS_OIF 64 60.00 1.00 59.00 1.67 PERMANENT LOCAL ONLINE
SYSTEM 64 5,250.00 1,100.81 4,149.19 20.97 PERMANENT LOCAL ONLINE
OLTS_BATTRSTORE 64 0.98 0.48 0.50 48.80 PERMANENT LOCAL ONLINE
IAM_IAS_ORASDPM 64 300.00 1.56 298.44 0.52 PERMANENT LOCAL ONLINE
IAM_IAS_OIF 64 60.00 1.00 59.00 1.67 PERMANENT LOCAL ONLINE
OLTS_CT_STORE 64 582.00 549.31 32.69 94.38 PERMANENT LOCAL ONLINE
DEV_APM 64 100.00 1.00 99.00 1.00 PERMANENT LOCAL ONLINE
IAMR2_MDS 64 750.00 708.69 41.31 94.49 PERMANENT LOCAL ONLINE
OLTS_DEFAULT 64 681.00 639.06 41.94 93.84 PERMANENT LOCAL ONLINE
OLTS_ATTRSTORE 64 346.00 297.00 49.00 85.84 PERMANENT LOCAL ONLINE
IAM_OAM 64 500.00 449.06 50.94 89.81 PERMANENT LOCAL ONLINE
IAM_SOAINFRA 64 94,207.00 59,861.06 34,345.94 63.54 PERMANENT LOCAL ONLINE
BI_BIPLATFORM 64 64.00 2.00 62.00 3.13 PERMANENT LOCAL ONLINE
IAMR2_SOAINFRA 64 6,198.00 5,892.69 305.31 95.07 PERMANENT LOCAL ONLINE
IAMR2_OIM_ARCH_ 64 1,024.00 1.56 1,022.44 0.15 PERMANENT LOCAL ONLINE
DATA
UNDOTBS2 64 24,650.00 298.38 24,351.63 1.21 UNDO LOCAL ONLINE
BI_MDS 64 100.00 5.06 94.94 5.06 PERMANENT LOCAL ONLINE
DEV_IAS_OIF 64 60.00 1.00 59.00 1.67 PERMANENT LOCAL ONLINE
IAMP_OIM_LOB 64 500.00 1.00 499.00 0.20 PERMANENT LOCAL ONLINE
IAMR2_IAS_ORASD 64 300.00 106.63 193.38 35.54 PERMANENT LOCAL ONLINE
PM
IAMR2_OAM 64 350.00 299.50 50.50 85.57 PERMANENT LOCAL ONLINE
IAMR2_IAS_IAU 64 155,640.00 155,373.00 267.00 99.83 PERMANENT LOCAL ONLINE
DBA_TEST 64 10.00 1.00 9.00 10.00 PERMANENT LOCAL ONLINE
OES_APM_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
TEMP 1,024 1,024 1,238.00 1,238.00 0.00 100.00 TEMPORARY LOCAL ONLINE
IDM_IAS_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
OES_IAS_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
BI_IAS_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
DBA_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
IAMR2_IAS_TEMP 1,024 1,024 100.00 13.00 87.00 13.00 TEMPORARY LOCAL ONLINE
DEV_APM_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
AIS_TMP 1,024 1,024 100.00 7.00 93.00 7.00 TEMPORARY LOCAL ONLINE
PROD_OAM_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
IAMR2_HSIN_REPO 1,024 1,024 150.00 104.00 46.00 69.33 TEMPORARY LOCAL ONLINE
RT_TEMP
PROD_IAS_TEMP 1,024 1,024 100.00 91.00 9.00 91.00 TEMPORARY LOCAL ONLINE
IAMR2_OIM_TEMP 1,024 1,024 300.00 296.00 4.00 98.67 TEMPORARY LOCAL ONLINE
IAM_IAS_TEMP 1,024 1,024 5,120.00 5,060.00 60.00 98.83 TEMPORARY LOCAL ONLINE
IAMR2_OAM_TEMP 1,024 1,024 100.00 15.00 85.00 15.00 TEMPORARY LOCAL ONLINE
IAM_OIM_TEMP 1,024 1,024 100.00 53.00 47.00 53.00 TEMPORARY LOCAL ONLINE
IAS_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
DEV_OIM_TEMP 1,024 1,024 100.00 5.00 95.00 5.00 TEMPORARY LOCAL ONLINE
IAM_OAM_TEMP 1,024 1,024 100.00 1.00 99.00 1.00 TEMPORARY LOCAL ONLINE
Redo Log Files
GROUP# Status MEMBER Megabytes
------- ---------- ------------------------------------------------------- ---------
10 INACTIVE +DATADG/idmp/onlinelog/group_10.305.828989761 500
10 INACTIVE +FRADG/idmp/onlinelog/group_10.4405.828989765 500
11 CURRENT +DATADG/idmp/onlinelog/group_11.296.828989775 500
11 CURRENT +FRADG/idmp/onlinelog/group_11.7776.828989777 500
12 INACTIVE +DATADG/idmp/onlinelog/group_12.325.828989789 500
12 INACTIVE +FRADG/idmp/onlinelog/group_12.8973.828989791 500
13 INACTIVE +DATADG/idmp/onlinelog/group_13.302.828989803 500
13 INACTIVE +FRADG/idmp/onlinelog/group_13.8903.828989805 500
14 CURRENT +DATADG/idmp/onlinelog/group_14.329.828989819 500
14 CURRENT +FRADG/idmp/onlinelog/group_14.3032.828989823 500
15 INACTIVE +DATADG/idmp/onlinelog/group_15.303.828989835 500
15 INACTIVE +FRADG/idmp/onlinelog/group_15.4556.828989837 500
Control Files
Status NAME IS_ BLOCK_SIZE FILE_SIZE_BLKS CON_ID
---------- ------------------------------------------------------------ --- ---------- -------------- ----------
+DATADG/idmp/controlfile/current.260.780519325 NO 16384 4476 0
+FRADG/idmp/controlfile/current.256.780519325 YES 16384 4476 0
SQL>
=========================
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@d2iclprhq107[IDMP2]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol
16G 8.5G 6.3G 58% /
/dev/mapper/vg00-tmpvol
4.4G 1.9G 2.3G 46% /tmp
/dev/mapper/vg00-homevol
16G 490M 15G 4% /home
/dev/mapper/vg00-varvol
16G 4.2G 11G 29% /var
/dev/cciss/c0d0p1 494M 33M 437M 7% /boot
tmpfs 32G 15G 18G 46% /dev/shm
/dev/mapper/vg01-lvol0
101G 66G 31G 69% /u01
oracle@d2iclprhq107[IDMP2]# ll sh_as8
ls: sh_as8: No such file or directory
oracle@d2iclprhq107[IDMP2]# ll sh_a*
-rw-r--r-- 1 oracle oinstall 270 Feb 7 2012 sh_active_locks.sql
-rw-r--r-- 1 oracle oinstall 465 Feb 7 2012 sh_active_sessions.sql
-rw-r--r-- 1 oracle oinstall 630 Feb 7 2012 sh_actwaits.sql
-rw-r--r-- 1 oracle oinstall 3499 Feb 7 2012 sh_all_sessions2.sql
-rw-r--r-- 1 oracle oinstall 665 Feb 7 2012 sh_all_sessions.sql
-rw-r--r-- 1 oracle oinstall 549 Feb 7 2012 sh_arch_hist.sql
-rw-r--r-- 1 oracle oinstall 138 Apr 4 2013 sh_asm_usage.sql
oracle@d2iclprhq107[IDMP2]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 14 15:33:17 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> @sh_asm_usage.sql
NAME TOTAL_MB FREE_MB
------------------------------ ------------ ------------
FRADG 307,239 90,702
OCRDG 12,288 11,985
DATADG 1,613,898 612,802
SQL> ALTER TABLESPACE "IAMR2_IAS_IAU" ADD DATAFILE '+DATADG' SIZE 30G AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED
2 ;
Tablespace altered.
SQL>
======================================= Another 30G to be added to tablespace "IAMR2_IAS_IAU" ====== do the following: first check usage/size on the primary node (1)
oracle@d2iclprhq106[IDMP1]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 14 19:43:59 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> @sh_ts_usage.sql
TABLESPACE_NAME TOTAL_MB USED_MB USED_PERCENT
------------------------------ ----------- ----------- ------------
IAMR2_IAS_IAU 188,415.97 176,036.00 93
SQL> @sh_asm_usage.sql
NAME TOTAL_MB FREE_MB
------------------------------ ------------ ------------
FRADG 307,239 73,496
OCRDG 12,288 11,985
DATADG 1,613,898 582,080
SQL> @sh_ts_usage.sql
SP2-0310: unable to open file "sh_ts_usage.sql"
================================================================================================
oracle@d2iclprhq106[IDMP1]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 14 19:43:59 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> @sh_ts_usage.sql
TABLESPACE_NAME TOTAL_MB USED_MB USED_PERCENT
------------------------------ ----------- ----------- ------------
IAMR2_IAS_IAU 188,415.97 176,036.00 93
SQL> @sh_asm_usage.sql
NAME TOTAL_MB FREE_MB
------------------------------ ------------ ------------
FRADG 307,239 73,022
OCRDG 12,288 11,985
DATADG 1,613,898 582,080
=================================================================== from OEM ============================
ALTER TABLESPACE "IAMR2_IAS_IAU" ADD DATAFILE '+DATADG' SIZE 30G AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED;
-----------------
SQL> ALTER TABLESPACE "IAMR2_IAS_IAU" ADD DATAFILE '+DATADG' SIZE 30G AUTOEXTEND ON NEXT 8M MAXSIZE UNLIMITED;
Tablespace altered.
SQL> @sh_asm_usage.sql
NAME TOTAL_MB FREE_MB
------------------------------ ------------ ------------
FRADG 307,239 73,213
OCRDG 12,288 11,985
DATADG 1,613,898 551,348
-------------------------------------------------------------- On 106 (instance 1) => if @sh_ts_usage.sql returns no rows, the tablespace %usage is below the 90% threshold-------
SQL> @sh_ts_usage.sql
no rows selected
SQL>
SQL>
SQL>
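The @sh_asm_usage.sql and @sh_ts_usage.sql calls above are local helper scripts that are not reproduced in these notes; roughly equivalent queries, sketched from the columns they print (an approximation, not the actual scripts):
-- ASM disk group usage (approximate stand-in for sh_asm_usage.sql)
select name, total_mb, free_mb from v$asm_diskgroup;
-- Tablespaces above the 90% usage threshold (approximate stand-in for sh_ts_usage.sql)
select df.tablespace_name, df.total_mb, df.total_mb - nvl(fs.free_mb,0) used_mb,
       round((df.total_mb - nvl(fs.free_mb,0)) / df.total_mb * 100) used_percent
from   (select tablespace_name, sum(bytes)/1024/1024 total_mb from dba_data_files group by tablespace_name) df,
       (select tablespace_name, sum(bytes)/1024/1024 free_mb  from dba_free_space group by tablespace_name) fs
where  df.tablespace_name = fs.tablespace_name(+)
and    (df.total_mb - nvl(fs.free_mb,0)) / df.total_mb * 100 > 90;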
-----------------------------------------------------RAC Delete Obsolete------------------------
PROJECT: DISASTER RECOVERY
rman>list backup;
rman> allocate channel for maintenance type 'sbt_tape';
rman> delete force obsolete;
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
backup piece handle=c-592569219-20160822-00 RECID=30252 STAMP=920509497
deleted backup piece
backup piece handle=al_45667_1_920510910 RECID=30254 STAMP=920510911
deleted backup piece
backup piece handle=al_45668_1_920510911 RECID=30253 STAMP=920510911
deleted backup piece
backup piece handle=al_45669_1_920510936 RECID=30255 STAMP=920510937
deleted backup piece
backup piece handle=c-592569219-20160822-01 RECID=30256 STAMP=920510946
deleted backup piece
backup piece handle=bk_45673_1_920593816 RECID=30260 STAMP=920593816
deleted backup piece
backup piece handle=bk_45674_1_920593816 RECID=30259 STAMP=920593816
deleted backup piece
backup piece handle=bk_45675_1_920595181 RECID=30261 STAMP=920595181
deleted backup piece
backup piece handle=c-592569219-20160823-00 RECID=30262 STAMP=920595799
deleted backup piece
backup piece handle=al_45677_1_920597111 RECID=30263 STAMP=920597111
deleted backup piece
backup piece handle=al_45678_1_920597111 RECID=30264 STAMP=920597111
deleted backup piece
backup piece handle=al_45679_1_920597127 RECID=30266 STAMP=920597127
deleted backup piece
backup piece handle=al_45680_1_920597129 RECID=30265 STAMP=920597129
deleted backup piece
backup piece handle=c-592569219-20160823-01 RECID=30267 STAMP=920597147
deleted backup piece
backup piece handle=bk_45684_1_920680214 RECID=30271 STAMP=920680215
deleted backup piece
backup piece handle=bk_45685_1_920680214 RECID=30270 STAMP=920680214
deleted backup piece
backup piece handle=bk_45686_1_920681549 RECID=30272 STAMP=920681552
deleted backup piece
backup piece handle=c-592569219-20160824-00 RECID=30273 STAMP=920682128
deleted backup piece
backup piece handle=al_45688_1_920683604 RECID=30274 STAMP=920683604
deleted backup piece
backup piece handle=al_45689_1_920683604 RECID=30275 STAMP=920683604
deleted backup piece
backup piece handle=c-592569219-20160824-01 RECID=30276 STAMP=920683621
deleted backup piece
backup piece handle=bk_45695_1_920787240 RECID=30280 STAMP=920787240
deleted backup piece
backup piece handle=bk_45696_1_920787241 RECID=30279 STAMP=920787241
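Crosschecking first marks pieces that are already missing as EXPIRED, so the delete does not need FORCE; a minimal sketch of that variant (retention policy assumed to be configured on the target):
rman> allocate channel for maintenance type 'sbt_tape';
rman> crosscheck backup;
rman> delete noprompt expired backup;
rman> delete noprompt obsolete;
rman> release channel;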
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%DISASTER Restore Recover Resetlogs from Controlfile%%%%%%%%%%%%%%%%%%%%%%%%%%
12/15/2016 files
=================
db_EAIRP_1008763476_1330_1.bkup <=========== [12/15/2016]
log_EAIRP_1008763476_1332_1.bkup <=========== [12/15/2016]
cf_EAIRP_9mrnik8r.bkup <=========== [12/15/2016]
===========================================================================
oracle@D2CSEVPHQ004[EAIRP]# cd /u01/oradata/backup/EAIRP/keep
Copying the 3 files (controlfile, datafile and archivelog backup pieces) from the /keep directory to the default backup location
=================================================================================================
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/cf_EAIRP_9mrnik8r.bkup /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/db_EAIRP_1008763476_1330_1.bkup /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/log_EAIRP_1008763476_1332_1.bkup /u01/oradata/backup/EAIRP
Shutdown immediate
Startup nomount;
quit
RMAN>connect target /
SPOOLING & RESTORING:
========
RMAN> spool log to '/u01/eairprestore.txt';
RMAN> RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/cf_EAIRP_9mrnik8r.bkup';  <====[12/15's backup]
RMAN> list backup summary;
RMAN> restore database;
RMAN> recover database;
RMAN> sql 'alter database open resetlogs';
RMAN>spool off
RMAN>exit
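The same plan can be wrapped in a single RUN block once the instance is started NOMOUNT; a sketch based on the 12/15 backup pieces listed above (the UNTIL TIME value is a placeholder for a point just after that backup and should be adjusted to the real recovery target):
RMAN> RUN {
  SET UNTIL TIME "to_date('16-DEC-2016 00:00:00','DD-MON-YYYY HH24:MI:SS')";
  RESTORE CONTROLFILE FROM '/u01/oradata/backup/EAIRP/cf_EAIRP_9mrnik8r.bkup';
  ALTER DATABASE MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}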
&&&&&&&&&&&&&&&&&&&&&&&& EXECUTION &&&&&&&&&&&& OF &&&&&&&&&&&&&&&&&&& RESTORE &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
PRIOR to restore: VERIFYING current control file/backups(date)
================
connected to target database: EAIRP (DBID=1008763476)
RMAN>list backup summary;
using target database control file instead of recovery catalog
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1336 B F A DISK 17-DEC-16 1 1 NO DAILY_BACKUP
1337 B F A DISK 17-DEC-16 1 1 NO TAG20161217T002434
1338 B A A DISK 17-DEC-16 1 1 NO DAILY_BACKUP
1339 B F A DISK 17-DEC-16 1 1 NO TAG20161217T002454
1341 B F A DISK 17-DEC-16 1 1 NO TAG20161217T002500
&&&&&&&&&&&&&&&& COPYING files to the default (backup) directory: datafile from 12/15/2016's backup &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
120M 97M 18M 85% /var/log/audit
/dev/mapper/vg01-vg01--u01
373G 283G 72G 80% /u01
oracle@D2CSEVPHQ004[EAIRP]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 19 23:36:18 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@D2CSEVPHQ004[EAIRP]# cd /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]# ll
total 46469604
-rw-r----- 1 oracle oinstall 21951660032 Dec 17 00:24 db_EAIRP_1008763476_1336_1.bkup
-rw-r----- 1 oracle oinstall 1520809984 Dec 17 00:24 log_EAIRP_1008763476_1338_1.bkup
drwxr-xr-x 2 oracle oinstall 4096 Dec 17 17:39 keep
-rw-r----- 1 oracle oinstall 21938257920 Dec 17 17:47 db_EAIRP_1008763476_1323_1.bkup
-rw-r----- 1 oracle oinstall 2150883328 Dec 17 17:48 log_EAIRP_1008763476_1326_1.bkup
-rw-r----- 1 oracle oinstall 11616256 Dec 17 17:57 cf_EAIRP_9grnfvtv.bkup
-rw-r----- 1 oracle oinstall 11616256 Dec 18 19:11 cf_EAIRP_9srnldaq.bkup
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/cf_EAIRP_9mrnik8r.bkup /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/db_EAIRP_1008763476_1330_1.bkup /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]# cp /u01/oradata/backup/EAIRP/keep/log_EAIRP_1008763476_1332_1.bkup /u01/oradata/backup/EAIRP
oracle@D2CSEVPHQ004[EAIRP]#
oracle@D2CSEVPHQ004[EAIRP]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 19 23:49:55 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL>Startup nomount;
ORACLE instance started.
Total System Global Area 645922816 bytes
Fixed Size 2927720 bytes
Variable Size 444597144 bytes
Database Buffers 192937984 bytes
Redo Buffers 5459968 bytes
SQL>
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
SQL> Startup nomount;
ORACLE instance started.
Total System Global Area 645922816 bytes
Fixed Size 2927720 bytes
Variable Size 444597144 bytes
Database Buffers 192937984 bytes
Redo Buffers 5459968 bytes
SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@D2CSEVPHQ004[EAIRP]# rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Mon Dec 19 23:50:57 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: EAIRP (not mounted)
RMAN>spool log to '/u01/app/oracle/scripts/eairprestore.txt';
RMAN>RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN> list backup summary;
RMAN>RUN{alter database mount;}
RMAN>
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& KEN's REDOING of the trial, all via the RMAN prompt &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Total System Global Area 645922816 bytes
Fixed Size 2927720 bytes
Variable Size 444597144 bytes
Database Buffers 192937984 bytes
Redo Buffers 5459968 bytes
SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@D2CSEVPHQ004[EAIRP]# rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Mon Dec 19 23:50:57 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: EAIRP (not mounted)
RMAN>spool log to /u01/eairprestore.txt';
RMAN> spool off;
RMAN> spool log to ./u01/app/oracle/scripts/eairprestore.txt.
RMAN> spool off
RMAN> spool log to '/u01/app/oracle/scripts/eairprestore.txt';
RMAN>RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN>list backup summary;
RMAN>RUN{alter database mount;}
RMAN> list backup summary;
RMAN>RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN>RUN{shutdown immediate;}
RMAN>RUN{startup nomount;}
RMAN>RUN{RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/cf_EAIRP_9mrnik8r.bkup';} [Not worked!!]
==========
RMAN>spool log to '/u01/app/oracle/scripts/eairprestore.txt';
RMAN>RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN>list backup summary;
RMAN> RUN{alter database mount;}
RMAN> list backup summary;
RMAN> RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN> RUN{shutdown immediate;}
RMAN> RUN{startup nomount;}<<============================================================[Database not mounted]
RMAN> RUN{RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';}
RMAN> list backup summary;
RMAN> RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/9mrnik8r.bkup';
RMAN> list backup summary;
RMAN> RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/cf_EAIRP_9mrnik8r.bkup';
RMAN> list backup summary;
RMAN> RUN{startup mount;}
=================================
RMAN>
database is already started
database mounted
released channel: ORA_DISK_1
RMAN>
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1323 B F A DISK 14-DEC-16 1 1 NO DAILY_BACKUP
1324 B F A DISK 14-DEC-16 1 1 NO TAG20161214T230436
1325 B A A DISK 14-DEC-16 1 1 NO DAILY_BACKUP
1326 B A A DISK 14-DEC-16 1 1 NO DAILY_BACKUP
1327 B F A DISK 14-DEC-16 1 1 NO TAG20161214T230533
1328 B F A DISK 14-DEC-16 1 1 NO DAILY_BACKUP
1329 B F A DISK 14-DEC-16 1 1 NO TAG20161214T230537
1330 B F A DISK 15-DEC-16 1 1 NO DAILY_BACKUP
1331 B F A DISK 15-DEC-16 1 1 NO TAG20161215T230438
1332 B A A DISK 15-DEC-16 1 1 NO DAILY_BACKUP
1333 B F A DISK 15-DEC-16 1 1 NO TAG20161215T230457
RMAN> RESTORE CONTROLFILE from '/u01/oradata/backup/EAIRP/cf_EAIRP_9mrnik8r.bkup';
*********************************************************************************************************************
RMAN> restore database;
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Logs of Actual Restore &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
RMAN>
Starting restore at 20-DEC-16
Starting implicit crosscheck backup at 20-DEC-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
Crosschecked 11 objects
Finished implicit crosscheck backup at 20-DEC-16
Starting implicit crosscheck copy at 20-DEC-16
using channel ORA_DISK_1
Finished implicit crosscheck copy at 20-DEC-16
searching for all files in the recovery area
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77396_d5jlgo7b_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77348_d5grzzdk_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77395_d5jkrwrr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77366_d5hkc8rc_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77387_d5jbopvh_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77378_d5j0npss_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77390_d5jg79t7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77381_d5j3kvl6_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77355_d5h2fqjh_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77374_d5hvy6z5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77354_d5h05mdq_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77342_d5gjpmxr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77379_d5j192b7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77356_d5h3os4f_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77352_d5gxolyg_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77351_d5gwn6fl_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77386_d5jb2rkh_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77369_d5hobxs3_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77389_d5jfn2kf_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77346_d5golfny_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77357_d5h4gqjg_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77393_d5jkq9oz_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77400_d5jp8mdl_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77372_d5hrw4o9_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77338_d5gdfkfw_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77380_d5j30m8d_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77343_d5gl0tmz_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77375_d5hwglk2_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77368_d5hn5f48_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77341_d5ghgy09_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77391_d5jhgslv_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77360_d5h8ftnt_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77347_d5gqqqq5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77394_d5jkqxp9_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77362_d5hcjjd8_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77335_d5g97x95_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77382_d5j4qdfr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77365_d5hhfnry_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77361_d5hbqgry_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77349_d5gsvxwb_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77363_d5hftd4x_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77340_d5ggj2bt_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77397_d5jnp1ff_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77399_d5jp579t_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77404_d5jt1zh3_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77383_d5j6kgcz_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77344_d5glvr4v_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77392_d5jkm5m4_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77405_d5jvr992_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77370_d5hp3z8b_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77373_d5hspb60_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77403_d5jrp2pk_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77388_d5jcwnm6_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77401_d5jpc1k9_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77353_d5gzpf1q_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77367_d5hktr1r_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77350_d5gw594v_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77384_d5j73qq7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77336_d5gbj8cc_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77359_d5h7810y_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77402_d5jr710v_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77376_d5hy44gt_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77339_d5gg14rw_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77385_d5j8b6jz_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77345_d5go364y_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77337_d5gbz8w3_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77377_d5hzxtfv_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77364_d5hg9g69_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77371_d5hqr75s_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77358_d5h6rfos_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77398_d5jo51nh_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57278_cmfm82w5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57284_cmfv4cjo_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57283_cmfrs6b3_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57282_cmfrlfvn_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57280_cmfo4bk2_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57279_cmfo0fox_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_05_14/o1_mf_1_57281_cmfphy89_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77295_d5c8ysbr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77272_d59o3cyc_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77297_d5c909f4_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77293_d5c18m73_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77291_d5bws1h8_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77269_d59clcrg_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77298_d5cbjj0c_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77282_d5b52hh5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77292_d5bxqsxl_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77284_d5bd3vcx_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77270_d59h5c6r_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77277_d5b0v3p0_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77279_d5b1m4lm_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77294_d5c5lgws_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77274_d59pzj2j_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77286_d5bksbpw_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77285_d5bjf84v_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77288_d5br92j3_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77300_d5ckq410_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77281_d5b3pyxk_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77289_d5bw6y50_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77283_d5b8m0xg_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77267_d595bz1p_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77271_d59kkbg5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77278_d5b15n16_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77287_d5bnqm11_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77290_d5bwpbdr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77299_d5cg52lc_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77273_d59o772z_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77276_d59y0zg6_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77275_d59tj4m2_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77280_d5b3npsq_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77268_d598vsn4_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77296_d5c8zcc9_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77318_d5f4z5z4_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77330_d5g0cz5b_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77331_d5g3l5vh_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77323_d5fn59jr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77303_d5csy0r5_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77309_d5db6lmd_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77301_d5co9khy_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77310_d5dfqfcj_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77312_d5doj7xd_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77306_d5d4sjqg_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77321_d5fggs3k_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77329_d5fxfbt7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77316_d5dxxsqy_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77308_d5d9pg41_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77317_d5f1gxg8_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77314_d5dr3k54_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77311_d5dkww6h_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77333_d5g75dkb_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77327_d5fxckqs_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77313_d5dqzp17_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77334_d5g867hs_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77315_d5dtfc76_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77326_d5fwv1hy_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77328_d5fxd6sj_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77305_d5d0zz1q_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77302_d5cpf61k_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77332_d5g4rqkf_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77322_d5fl0gl7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77324_d5fosccx_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77320_d5fbwnhr_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77319_d5f71dxq_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77325_d5fsb2y7_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77304_d5cxgvcq_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77307_d5d8dwgj_.arc
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/autobackup/2016_05_14/o1_mf_s_911808213_cmfpyp6k_.bkp
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/autobackup/2016_12_17/o1_mf_s_930788694_d591gq5g_.bkp
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/autobackup/2016_12_17/o1_mf_s_930788700_d591gwvs_.bkp
File Name: /u01/app/oracle/fast_recovery_area/EAIRP/autobackup/2016_12_17/o1_mf_s_930788674_d591g2kz_.bkp
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/EAIRP/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/oradata/EAIRP/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/oradata/EAIRP/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/oradata/EAIRP/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/oradata/EAIRP/apex_owner_01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/oradata/EAIRP/apex_owner_02.dbf
channel ORA_DISK_1: restoring datafile 00007 to /u01/oradata/EAIRP/apex_ts_01.dbf
channel ORA_DISK_1: restoring datafile 00008 to /u01/oradata/EAIRP/dba_test.dbf
channel ORA_DISK_1: reading from backup piece /u01/oradata/backup/EAIRP/db_EAIRP_1008763476_1330_1.bkup
oracle@D2CSEVPHQ004[EAIRP]#
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& RECOVER database &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
RMAN> recover database;
RMAN>
Starting recover at 20-DEC-16
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 77267 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77267_d595bz1p_.arc
archived log for thread 1 with sequence 77268 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77268_d598vsn4_.arc
archived log for thread 1 with sequence 77269 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77269_d59clcrg_.arc
archived log for thread 1 with sequence 77270 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77270_d59h5c6r_.arc
archived log for thread 1 with sequence 77271 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77271_d59kkbg5_.arc
archived log for thread 1 with sequence 77272 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77272_d59o3cyc_.arc
archived log for thread 1 with sequence 77273 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77273_d59o772z_.arc
archived log for thread 1 with sequence 77274 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77274_d59pzj2j_.arc
archived log for thread 1 with sequence 77275 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77275_d59tj4m2_.arc
archived log for thread 1 with sequence 77276 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77276_d59y0zg6_.arc
archived log for thread 1 with sequence 77277 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77277_d5b0v3p0_.arc
archived log for thread 1 with sequence 77278 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77278_d5b15n16_.arc
archived log for thread 1 with sequence 77279 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77279_d5b1m4lm_.arc
archived log for thread 1 with sequence 77280 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77280_d5b3npsq_.arc
archived log for thread 1 with sequence 77281 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77281_d5b3pyxk_.arc
archived log for thread 1 with sequence 77282 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77282_d5b52hh5_.arc
archived log for thread 1 with sequence 77283 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77283_d5b8m0xg_.arc
archived log for thread 1 with sequence 77284 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77284_d5bd3vcx_.arc
archived log for thread 1 with sequence 77285 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77285_d5bjf84v_.arc
archived log for thread 1 with sequence 77286 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77286_d5bksbpw_.arc
archived log for thread 1 with sequence 77287 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77287_d5bnqm11_.arc
archived log for thread 1 with sequence 77288 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77288_d5br92j3_.arc
archived log for thread 1 with sequence 77289 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77289_d5bw6y50_.arc
archived log for thread 1 with sequence 77290 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77290_d5bwpbdr_.arc
archived log for thread 1 with sequence 77291 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77291_d5bws1h8_.arc
archived log for thread 1 with sequence 77292 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77292_d5bxqsxl_.arc
archived log for thread 1 with sequence 77293 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77293_d5c18m73_.arc
archived log for thread 1 with sequence 77294 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77294_d5c5lgws_.arc
archived log for thread 1 with sequence 77295 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77295_d5c8ysbr_.arc
archived log for thread 1 with sequence 77296 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77296_d5c8zcc9_.arc
archived log for thread 1 with sequence 77297 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77297_d5c909f4_.arc
archived log for thread 1 with sequence 77298 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77298_d5cbjj0c_.arc
archived log for thread 1 with sequence 77299 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77299_d5cg52lc_.arc
archived log for thread 1 with sequence 77300 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_17/o1_mf_1_77300_d5ckq410_.arc
archived log for thread 1 with sequence 77301 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77301_d5co9khy_.arc
archived log for thread 1 with sequence 77302 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77302_d5cpf61k_.arc
archived log for thread 1 with sequence 77303 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77303_d5csy0r5_.arc
archived log for thread 1 with sequence 77304 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77304_d5cxgvcq_.arc
archived log for thread 1 with sequence 77305 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77305_d5d0zz1q_.arc
archived log for thread 1 with sequence 77306 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77306_d5d4sjqg_.arc
archived log for thread 1 with sequence 77307 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77307_d5d8dwgj_.arc
archived log for thread 1 with sequence 77308 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77308_d5d9pg41_.arc
archived log for thread 1 with sequence 77309 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77309_d5db6lmd_.arc
archived log for thread 1 with sequence 77310 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77310_d5dfqfcj_.arc
archived log for thread 1 with sequence 77311 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77311_d5dkww6h_.arc
archived log for thread 1 with sequence 77312 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77312_d5doj7xd_.arc
archived log for thread 1 with sequence 77313 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77313_d5dqzp17_.arc
archived log for thread 1 with sequence 77314 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77314_d5dr3k54_.arc
archived log for thread 1 with sequence 77315 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77315_d5dtfc76_.arc
archived log for thread 1 with sequence 77316 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77316_d5dxxsqy_.arc
archived log for thread 1 with sequence 77317 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77317_d5f1gxg8_.arc
archived log for thread 1 with sequence 77318 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77318_d5f4z5z4_.arc
archived log for thread 1 with sequence 77319 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77319_d5f71dxq_.arc
archived log for thread 1 with sequence 77320 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77320_d5fbwnhr_.arc
archived log for thread 1 with sequence 77321 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77321_d5fggs3k_.arc
archived log for thread 1 with sequence 77322 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77322_d5fl0gl7_.arc
archived log for thread 1 with sequence 77323 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77323_d5fn59jr_.arc
archived log for thread 1 with sequence 77324 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77324_d5fosccx_.arc
archived log for thread 1 with sequence 77325 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77325_d5fsb2y7_.arc
archived log for thread 1 with sequence 77326 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77326_d5fwv1hy_.arc
archived log for thread 1 with sequence 77327 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77327_d5fxckqs_.arc
archived log for thread 1 with sequence 77328 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77328_d5fxd6sj_.arc
archived log for thread 1 with sequence 77329 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77329_d5fxfbt7_.arc
archived log for thread 1 with sequence 77330 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77330_d5g0cz5b_.arc
archived log for thread 1 with sequence 77331 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77331_d5g3l5vh_.arc
archived log for thread 1 with sequence 77332 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77332_d5g4rqkf_.arc
archived log for thread 1 with sequence 77333 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77333_d5g75dkb_.arc
archived log for thread 1 with sequence 77334 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_18/o1_mf_1_77334_d5g867hs_.arc
archived log for thread 1 with sequence 77335 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77335_d5g97x95_.arc
archived log for thread 1 with sequence 77336 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77336_d5gbj8cc_.arc
archived log for thread 1 with sequence 77337 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77337_d5gbz8w3_.arc
archived log for thread 1 with sequence 77338 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77338_d5gdfkfw_.arc
archived log for thread 1 with sequence 77339 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77339_d5gg14rw_.arc
archived log for thread 1 with sequence 77340 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77340_d5ggj2bt_.arc
archived log for thread 1 with sequence 77341 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77341_d5ghgy09_.arc
archived log for thread 1 with sequence 77342 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77342_d5gjpmxr_.arc
archived log for thread 1 with sequence 77343 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77343_d5gl0tmz_.arc
archived log for thread 1 with sequence 77344 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77344_d5glvr4v_.arc
archived log for thread 1 with sequence 77345 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77345_d5go364y_.arc
archived log for thread 1 with sequence 77346 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77346_d5golfny_.arc
archived log for thread 1 with sequence 77347 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77347_d5gqqqq5_.arc
archived log for thread 1 with sequence 77348 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77348_d5grzzdk_.arc
archived log for thread 1 with sequence 77349 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77349_d5gsvxwb_.arc
archived log for thread 1 with sequence 77350 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77350_d5gw594v_.arc
archived log for thread 1 with sequence 77351 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77351_d5gwn6fl_.arc
archived log for thread 1 with sequence 77352 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77352_d5gxolyg_.arc
archived log for thread 1 with sequence 77353 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77353_d5gzpf1q_.arc
archived log for thread 1 with sequence 77354 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77354_d5h05mdq_.arc
archived log for thread 1 with sequence 77355 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77355_d5h2fqjh_.arc
archived log for thread 1 with sequence 77356 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77356_d5h3os4f_.arc
archived log for thread 1 with sequence 77357 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77357_d5h4gqjg_.arc
archived log for thread 1 with sequence 77358 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77358_d5h6rfos_.arc
archived log for thread 1 with sequence 77359 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77359_d5h7810y_.arc
archived log for thread 1 with sequence 77360 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77360_d5h8ftnt_.arc
archived log for thread 1 with sequence 77361 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77361_d5hbqgry_.arc
archived log for thread 1 with sequence 77362 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77362_d5hcjjd8_.arc
archived log for thread 1 with sequence 77363 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77363_d5hftd4x_.arc
archived log for thread 1 with sequence 77364 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77364_d5hg9g69_.arc
archived log for thread 1 with sequence 77365 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77365_d5hhfnry_.arc
archived log for thread 1 with sequence 77366 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77366_d5hkc8rc_.arc
archived log for thread 1 with sequence 77367 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77367_d5hktr1r_.arc
archived log for thread 1 with sequence 77368 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77368_d5hn5f48_.arc
archived log for thread 1 with sequence 77369 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77369_d5hobxs3_.arc
archived log for thread 1 with sequence 77370 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77370_d5hp3z8b_.arc
archived log for thread 1 with sequence 77371 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77371_d5hqr75s_.arc
archived log for thread 1 with sequence 77372 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77372_d5hrw4o9_.arc
archived log for thread 1 with sequence 77373 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77373_d5hspb60_.arc
archived log for thread 1 with sequence 77374 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77374_d5hvy6z5_.arc
archived log for thread 1 with sequence 77375 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77375_d5hwglk2_.arc
archived log for thread 1 with sequence 77376 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77376_d5hy44gt_.arc
archived log for thread 1 with sequence 77377 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77377_d5hzxtfv_.arc
archived log for thread 1 with sequence 77378 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77378_d5j0npss_.arc
archived log for thread 1 with sequence 77379 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77379_d5j192b7_.arc
archived log for thread 1 with sequence 77380 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77380_d5j30m8d_.arc
archived log for thread 1 with sequence 77381 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77381_d5j3kvl6_.arc
archived log for thread 1 with sequence 77382 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77382_d5j4qdfr_.arc
archived log for thread 1 with sequence 77383 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77383_d5j6kgcz_.arc
archived log for thread 1 with sequence 77384 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77384_d5j73qq7_.arc
archived log for thread 1 with sequence 77385 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77385_d5j8b6jz_.arc
archived log for thread 1 with sequence 77386 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77386_d5jb2rkh_.arc
archived log for thread 1 with sequence 77387 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77387_d5jbopvh_.arc
archived log for thread 1 with sequence 77388 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77388_d5jcwnm6_.arc
archived log for thread 1 with sequence 77389 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77389_d5jfn2kf_.arc
archived log for thread 1 with sequence 77390 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77390_d5jg79t7_.arc
archived log for thread 1 with sequence 77391 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77391_d5jhgslv_.arc
archived log for thread 1 with sequence 77392 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77392_d5jkm5m4_.arc
archived log for thread 1 with sequence 77393 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77393_d5jkq9oz_.arc
archived log for thread 1 with sequence 77394 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77394_d5jkqxp9_.arc
archived log for thread 1 with sequence 77395 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77395_d5jkrwrr_.arc
archived log for thread 1 with sequence 77396 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77396_d5jlgo7b_.arc
archived log for thread 1 with sequence 77397 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77397_d5jnp1ff_.arc
archived log for thread 1 with sequence 77398 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77398_d5jo51nh_.arc
archived log for thread 1 with sequence 77399 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77399_d5jp579t_.arc
archived log for thread 1 with sequence 77400 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77400_d5jp8mdl_.arc
archived log for thread 1 with sequence 77401 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77401_d5jpc1k9_.arc
archived log for thread 1 with sequence 77402 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77402_d5jr710v_.arc
archived log for thread 1 with sequence 77403 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77403_d5jrp2pk_.arc
archived log for thread 1 with sequence 77404 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77404_d5jt1zh3_.arc
archived log for thread 1 with sequence 77405 is already on disk as file /u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_19/o1_mf_1_77405_d5jvr992_.arc
archived log for thread 1 with sequence 77406 is already on disk as file /u01/oradata/EAIRP/redo02.log
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=77229
channel ORA_DISK_1: reading from backup piece /u01/oradata/backup/EAIRP/log_EAIRP_1008763476_1332_1.bkup
channel ORA_DISK_1: piece handle=/u01/oradata/backup/EAIRP/log_EAIRP_1008763476_1332_1.bkup tag=DAILY_BACKUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
archived log file name=/u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_20/o1_mf_1_77229_d5k09d35_.arc thread=1 sequence=77229
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_20/o1_mf_1_77229_d5k09d35_.arc RECID=20097 STAMP=931049645
unable to find archived log
archived log thread=1 sequence=77230
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/20/2016 00:54:14
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 77230 and starting SCN of 828293077
oracle@D2CSEVPHQ004[EAIRP]#
==================================
SQL> recover database until cancel;
ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
SQL>recover database using backup controlfile until cancel;
ORA-00279: change 828293077 generated at 12/15/2016 23:04:40 needed for thread
1
ORA-00289: suggestion :
/u01/app/oracle/fast_recovery_area/EAIRP/archivelog/2016_12_20/o1_mf_1_77230_%u_
.arc
ORA-00280: change 828293077 for thread 1 is in sequence #77230
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Canel
ORA-00308: cannot open archived log 'Canel'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> alter database open resetlogs;
Database altered.
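A quick sanity check after the resetlogs open (a minimal sketch; NAME, OPEN_MODE, RESETLOGS_CHANGE# and RESETLOGS_TIME are standard V$DATABASE columns):
select name, open_mode, resetlogs_change#,
       to_char(resetlogs_time,'DD-MON-YYYY HH24:MI:SS') resetlogs_time
from v$database;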
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& SPECIFYING BIGIDY's &&&&&&&&&&&&&&&&&&&&&&&& until cancel => incomplete (inconsistent) recovery &&&&&&&&&&&
Checking for any block corruption:
=================================
SQL> select * from dba_extents where file_id =6 and 950277 between block_id and block_id+blocks-1;
no rows selected
====================
Hi Pat,
Digging deeper, I can see that the table you mentioned (shown below) is the one containing the corrupted block:
1* select * from dba_extents where file_id =6 and 950277 between block_id and block_id+blocks-1
SQL> /
OWNER           : HLS_EA
SEGMENT_NAME    : FEA_BRM_SERVICE_INVESTMENT_REL
SEGMENT_TYPE    : TABLE
TABLESPACE_NAME : APEX_OWNER
EXTENT_ID       : 67
FILE_ID         : 6
BLOCK_ID        : 950272
BYTES           : 196608
BLOCKS          : 24
RELATIVE_FNO    : 6
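If the corruption was reported by RMAN (a backup or VALIDATE run), V$DATABASE_BLOCK_CORRUPTION can be mapped to segments the same way; a minimal sketch, assuming the corrupt blocks have already been recorded in that view:
select c.file#, c.block#, c.blocks, c.corruption_type,
       e.owner, e.segment_name, e.segment_type
from v$database_block_corruption c, dba_extents e
where e.file_id = c.file#
  and c.block# between e.block_id and e.block_id + e.blocks - 1;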
------------------------12c OS Prerequisites------------------------------------------------------------------------------------------------
shmmni
shmall
file-max
ip_local_port_range
rmem_default
rmem_max
wmem_default
wmem_max
aio-max-nr
binutils-2.20.51.0.2
compat-libcap1-1.10
libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3
libgcc-4.4.4 (x86_64)
libstdc++-4.4.4
-------------------RAC----Check ASM Mounted Disks-----------------------------------------------------------------------
select d.inst_id,dg.name dg_name,dg.state dg_state,dg.type,d.name,d.DISK_NUMBER dsk_no,d.MOUNT_STATUS,d.HEADER_STATUS,d.MODE_STATUS,d.STATE,d.PATH,d.FAILGROUP
from GV$ASM_DISK d,gv$asm_diskgroup dg
where dg.group_number(+)=d.group_number and dg.inst_id(+)=d.inst_id
order by d.inst_id,d.group_number;
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
select dg.name dg_name,dg.state dg_state,d.DISK_NUMBER dsk_no,d.MOUNT_STATUS,d.MODE_STATUS,d.STATE,d.PATH,d.FAILGROUP
from GV$ASM_DISK d,gv$asm_diskgroup dg
where dg.group_number(+)=d.group_number and dg.inst_id(+)=d.inst_id
order by d.group_number,d.disk_number;
------------------------------------------------------------------------------------------------------------------
SQL> desc v$asm_diskgroup
Name Null? Type
----------------------------------------- -------- ----------------------------
GROUP_NUMBER NUMBER
NAME VARCHAR2(30)
SECTOR_SIZE NUMBER
BLOCK_SIZE NUMBER
ALLOCATION_UNIT_SIZE NUMBER
STATE VARCHAR2(11)
TYPE VARCHAR2(6)
TOTAL_MB NUMBER
FREE_MB NUMBER
HOT_USED_MB NUMBER
COLD_USED_MB NUMBER
REQUIRED_MIRROR_FREE_MB NUMBER
USABLE_FILE_MB NUMBER
OFFLINE_DISKS NUMBER
COMPATIBILITY VARCHAR2(60)
DATABASE_COMPATIBILITY VARCHAR2(60)
VOTING_FILES VARCHAR2(1)
CON_ID NUMBER
---------------------------------------------------------------------------------------------------------------------------------------------------------
SQL> desc V$ASM_DISK
Name Null? Type
----------------------------------------- -------- ----------------------------
GROUP_NUMBER NUMBER
DISK_NUMBER NUMBER
COMPOUND_INDEX NUMBER
INCARNATION NUMBER
MOUNT_STATUS VARCHAR2(7)
HEADER_STATUS VARCHAR2(12)
MODE_STATUS VARCHAR2(7)
STATE VARCHAR2(8)
REDUNDANCY VARCHAR2(7)
LIBRARY VARCHAR2(64)
OS_MB NUMBER
TOTAL_MB NUMBER
FREE_MB NUMBER
HOT_USED_MB NUMBER
COLD_USED_MB NUMBER
NAME VARCHAR2(30)
FAILGROUP VARCHAR2(30)
LABEL VARCHAR2(31)
PATH VARCHAR2(256)
UDID VARCHAR2(64)
PRODUCT VARCHAR2(32)
CREATE_DATE DATE
MOUNT_DATE DATE
REPAIR_TIMER NUMBER
READS NUMBER
WRITES NUMBER
READ_ERRS NUMBER
WRITE_ERRS NUMBER
READ_TIMEOUT NUMBER
WRITE_TIMEOUT NUMBER
READ_TIME NUMBER
WRITE_TIME NUMBER
BYTES_READ NUMBER
BYTES_WRITTEN NUMBER
PREFERRED_READ VARCHAR2(1)
HASH_VALUE NUMBER
HOT_READS NUMBER
HOT_WRITES NUMBER
HOT_BYTES_READ NUMBER
HOT_BYTES_WRITTEN NUMBER
COLD_READS NUMBER
COLD_WRITES NUMBER
COLD_BYTES_READ NUMBER
COLD_BYTES_WRITTEN NUMBER
VOTING_FILE VARCHAR2(1)
SECTOR_SIZE NUMBER
FAILGROUP_TYPE VARCHAR2(7)
CON_ID NUMBER
------------------------------------------------------CHECK ASM disk status--------------------------------------------------------------
select MOUNT_STATUS,FAILGROUP,STATE, PATH from V$ASM_DISK;
--------------------------------------------------------------
SQL> /
MOUNT_S FAILGROUP STATE
------- ------------------------------ --------
PATH
------------------------------------------------------------------------------------------------------------------------------------------------------
CACHED DATA2 NORMAL
ORCL:DATA2
CACHED FRA NORMAL
ORCL:FRA
CACHED OCR NORMAL
ORCL:OCR
SQL> l
1* select MOUNT_STATUS,FAILGROUP,STATE, PATH from V$ASM_DISK
SQL>
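A quick space check on the diskgroups themselves, using columns from the V$ASM_DISKGROUP describe above (a minimal sketch):
select name, state, type, total_mb, free_mb, usable_file_mb, offline_disks from v$asm_diskgroup;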
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
ASM
----
[kenneth.chando@d2iclprhq116 ~]$ sudo su - oracle
oracle@d2iclprhq116[IDMUAT1]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.DATADG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.FRADG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER2.lsnr
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.asm
ONLINE ONLINE d2iclprhq116 Started,STABLE
ONLINE ONLINE d2iclprhq117 Started,STABLE
ora.net1.network
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.net2.network
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.ons
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE d2iclprhq116 169.254.117.181 192.
168.196.171,STABLE
ora.cvu
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq116.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq116_2.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq117.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.d2iclprhq117_2.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.idmuat.db
1 ONLINE ONLINE d2iclprhq116 Open,STABLE
2 ONLINE ONLINE d2iclprhq117 Open,STABLE
ora.mgmtdb
1 ONLINE ONLINE d2iclprhq116 Open,STABLE
ora.oc4j
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.scan1.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.scan2.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.scan3.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
--------------------------------------------------------------------------------
oracle@d2iclprhq116[IDMUAT1]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 25-OCT-2016 02:43:36
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 111: Connection refused
oracle@d2iclprhq116[IDMUAT1]# clear
oracle@d2iclprhq116[IDMUAT1]# goasm
oracle@d2iclprhq116[+ASM1]# lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 25-OCT-2016 02:43:53
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 27-SEP-2016 01:16:08
Uptime 28 days 1 hr. 27 min. 46 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/d2iclprhq116/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.66.41)(PORT=29484)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.66.35)(PORT=29484)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "IDMUAT" has 1 instance(s).
Instance "IDMUAT1", status READY, has 1 handler(s) for this service...
Service "IDMUATXDB" has 1 instance(s).
Instance "IDMUAT1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "d2iclprhq116a1" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@d2iclprhq116[+ASM1]# sql
SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 25 02:45:15 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select staus from gv$instance;
select staus from gv$instance
*
ERROR at line 1:
ORA-00904: "STAUS": invalid identifier
SQL> select status from gv$instance;
STATUS
------------
STARTED
STARTED
SQL> select name from gv$database;
select name from gv$database
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server PPA7, instance
d2iclprhq117:+ASM2 (2)
ORA-01507: database not mounted
====================VERIFYING status from asm and database e.g idmuat======================
SQL> connect system/Toast2u_22
Connected.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@d2iclprhq116[IDMUAT1]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.DATADG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.FRADG.dg
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER2.lsnr
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.asm
ONLINE ONLINE d2iclprhq116 Started,STABLE
ONLINE ONLINE d2iclprhq117 Started,STABLE
ora.net1.network
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.net2.network
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
ora.ons
ONLINE ONLINE d2iclprhq116 STABLE
ONLINE ONLINE d2iclprhq117 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE d2iclprhq116 169.254.117.181 192.
168.196.171,STABLE
ora.cvu
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq116.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq116_2.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.d2iclprhq117.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.d2iclprhq117_2.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.idmuat.db
1 ONLINE ONLINE d2iclprhq116 Open,STABLE
2 ONLINE ONLINE d2iclprhq117 Open,STABLE
ora.mgmtdb
1 ONLINE ONLINE d2iclprhq116 Open,STABLE
ora.oc4j
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.scan1.vip
1 ONLINE ONLINE d2iclprhq117 STABLE
ora.scan2.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
ora.scan3.vip
1 ONLINE ONLINE d2iclprhq116 STABLE
--------------------------------------------------------------------------------
oracle@d2iclprhq116[IDMUAT1]#
--------------------Bash_Profile--------------------------from C:\Users\Kenneth.Chando\Documents\KENCHANDO\Files_4m_Old_PC------------------------
oracle@d2aseutsh018.ndc.local[openview]# ls
11.2.0.2_to_bedeleted 11.2.0.3
oracle@d2aseutsh018.ndc.local[openview]# vi ~/.bash_profile
oracle@d2aseutsh018.ndc.local[openview]# vi ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
ORACLE_SID=openview
ORACLE_DB=openview
PATH=$HOME:/usr/sbin:/usr/proc/bin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:$PATH
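# NOTE: $GRID_HOME is referenced in the PATH line below but is never set in this profile; export GRID_HOME (or drop it from PATH) as appropriate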
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$GRID_HOME/bin:$ORACLE_BASE/scripts:$PATH
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH ORACLE_DB
alias scripts='cd /u01/app/oracle/scripts'
ora_db=$( echo "$ORACLE_DB" | tr -s '[:upper:]' '[:lower:]' )
alias alog='tail -200 /u01/app/oracle/diag/rdbms/${ORACLE_DB}/${ORACLE_SID}/trace/alert_${ORACLE_SID}.log'
alias bdump='cd /u01/app/oracle/diag/rdbms/${ORACLE_DB}/${ORACLE_SID}/trace'
alias udump='cd /u01/app/oracle/diag/rdbms/${ORACLE_DB}/${ORACLE_SID}/trace'
alias cdump='cd /u01/app/oracle/diag/rdbms/${ORACLE_DB}/${ORACLE_SID}/cdump'
alias adump='cd /u01/app/oracle/admin/${ORACLE_DB}/adump'
alias admin='cd /u01/app/oracle/admin/${ORACLE_DB}'
alias bkup='cd /u01/app/oracle/backup'
alias media='cd /u01/app/oracle/media'
alias patches='cd /u01/app/oracle/patches'
alias scripts='cd /u01/app/oracle/scripts'
alias home='cd $ORACLE_HOME'
alias pfile='cd $ORACLE_HOME/dbs'
alias p='export PS1="$USER@"`hostname`"[$ORACLE_SID]# "'
alias sql='sqlplus "/ as sysdba"'
export TMOUT=0
PS1="$USER@"`hostname`"[$ORACLE_SID]# "
===============================BIGIDY===================================================
SELECT GRANTEE AS USERNAME, OWNER || ‘.’ || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN(‘ANONYMOUS’,‘MGMT_VIEW’,‘SYS’,‘SYSTEM’,‘APPQOSSYS’,‘XDB’,‘SYSMAN’,‘OLAPSYS’,‘ORDSYS’,‘OWBSYS’,‘MDSYS’,‘EXFSYS’,‘APEX_030200’,‘APEX_PUBLIC_USER’,‘CTXSYS’,‘FLOWS_FILES’,‘OLAPSYS’,‘ORDPLUGINS’,‘ORACLE_OCM’,‘PUBLIC’,‘DBSNMP’,‘DBA’,‘AUDITDB’,‘TSMSYS’,‘DBAUDCON’,‘DBAUDIT’,‘OEM_USR’,‘WMSYS’,ORADBSS’,‘OUTLN’,‘MONITOR’)
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1, 2
$$$$$$$$$$$$$$$ REDUCED ....
BREAK ON USERNAME SKIP 2;
SELECT GRANTEE AS USERNAME, OWNER || ‘.’ || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN (‘ORDPLUGINS’,‘ORACLE_OCM’,‘PUBLIC’,‘DBSNMP’,‘DBA’,‘AUDITDB’,‘TSMSYS’,‘DBAUDCON’,‘DBAUDIT’,‘ORADBSS’,‘OUTLN')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1, 2
/
%%% QUOTEs fixed ....%%%%%%%%%%%%%%%%%%%%%%
BREAK ON USERNAME SKIP 2;
SELECT GRANTEE AS USERNAME, OWNER || '.' || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN ('ORDPLUGINS','ORACLE_OCM','PUBLIC','DBSNMP','DBA','AUDITDB','TSMSYS','DBAUDCON','DBAUDIT','ORADBSS','OUTLN')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1,2;
/
################ PERFECTO !!!! #####################################################
BREAK ON USERNAME SKIP 2;
SELECT GRANTEE AS USERNAME, OWNER || '.' || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN ('ANONYMOUS','MGMT_VIEW','SYS','SYSTEM','APPQOSSYS','XDB','SYSMAN','OLAPSYS','ORDSYS','OWBSYS','MDSYS','EXFSYS','APEX_030200','APEX_PUBLIC_USER','CTXSYS','FLOWS_FILES','OLAPSYS','ORDPLUGINS','ORACLE_OCM','PUBLIC','DBSNMP','DBA','AUDITDB','TSMSYS'
,'DBAUDCON','DBAUDIT','OEM_USR','WMSYS','ORADBSS','OUTLN','MONITOR')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1,2;
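A companion sketch for system privileges, following the same exclusion-list pattern (DBA_SYS_PRIVS instead of DBA_TAB_PRIVS; extend the NOT IN list as needed):
BREAK ON USERNAME SKIP 2;
SELECT GRANTEE AS USERNAME, PRIVILEGE, ADMIN_OPTION
FROM DBA_SYS_PRIVS
WHERE GRANTEE NOT IN ('SYS','SYSTEM','DBSNMP','DBA','PUBLIC','OUTLN')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1, 2;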
########################BIGIDY export script#############################
SAMPLE EXPORT PARFILE:
----------------------
USERID='/ as sysdba'
DIRECTORY=DTPUMP
LOGFILE=DMAXQ21_EXP.log
LOGTIME=ALL
### VERSION=11.2.0
CONTENT=ALL
### SCHEMAS=MAXIMO, EAMINF
FULL=Y
PARALLEL=10
METRICS=Y
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM
CLUSTER=N
FLASHBACK_TIME=SYSTIMESTAMP
JOB_NAME=DMAXQ21_EXP_44444
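Before running the export with this parfile (e.g. expdp parfile=<parfile_name>), it is worth confirming that the DTPUMP directory object exists and points where you expect; a minimal sketch:
select owner, directory_name, directory_path from dba_directories where directory_name = 'DTPUMP';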
############################BIGIDY check BLOCKED session script#################################################################
set pagesize 14000 linesize 170
select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking ' ||
       s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' )' as blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid and s2.sid = l2.sid
and l1.block = 1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2
order by s1.machine, s2.machine;
==============BLOCKING_Session 2=======================
select B.SID, B.SQL_ID, B.USERNAME, B.MACHINE
from v$SESSION B
WHERE B.SID IN (Select Distinct blocker from (Select a.sid blocker, 'is blocking session', b.sid blockee
from v$lock a, v$lock b
WHERE a.block=1 AND b.request>0
AND a.id1=b.id1
AND a.id2=b.id2));
======================Ken_ORA Session/Blocking==========================================
select s.username, s.status, l.block from v$session s, v$lock l where s.sid = l.sid and s.status = 'INACTIVE';
select b.blocker_sid, b.blocker_sess_serial#, s.status, s.username from v$session_blockers b, v$session s where s.sid = b.blocker_sid;
============================(When a query joins more than one view, e.g. v$database and v$instance, first do a select count(*) on each view to make sure each returns at least one row; if any view returns 0 rows, the joined query will return nothing.)
select vs.sid, vs.serial#, vs.username, vs.status, vs.osuser,
to_char(vs.logon_time,'dd-mon-yy hh24:Mi:ss') logtime
from v$session vs
where vs.username is not null
order by 4
/
####################################################################################################################################################
%%%%%%%%%%%%%%%% COMMON TASKS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For ALL SQL queries:
select count(*) from V$EXECUTION;
=================================================================================================================
NOTIFICATION to other groups for CHANGE from DBAs
==============
Subject: Oracle OJVM and CPU Patch Coordination
All,
RFC 35986 will be implemented tonight 9/3/15 @ 9:00PM. This thread will serve as the notification thread for this change.
Please use this for coordination of events tonight and review the coordination actions below.
I have included contact phone numbers for individual notification of key implementers. If there are any issues please contact me.
9:00PM – Start change
1. IDM Stack Shutdown – Bill Fleming
2. Notify via this thread DC2 Database Support of Stack Shutdown Completion – Bill Fleming
3. Start OJVM and CPU updates – Lionel Charles
4. Complete patching and verify Database is functional - Lionel Charles
5. Notify via this thread and phone call follow up to Bill Fleming of Database Patching Completion – Lionel Charles
6. Start IDM Stack – Bill Fleming
7. Verify Application – Bill Fleming
8. Notify EOC of change completion – Bill Fleming
Bill Fleming – 703-896-0457
Lionel Charles - 240-419-0146
Phillip Sines – 571-247-8942
==================================================================================================================
I have included the two Oracle patching documents:
1) To apply the July 2015 SPU patch
2) To apply the July 2015 OJVM patch
The ICE-BASS environments to be patched are as follows:
Server Database Environment
d2asepric071 BASSP - Production
d2asetsic002 BASST - Test
d2asedvic004 BASSD - Development
============BASS====================================================================================================================================
BASSD: FLASHBACK is NOT turned ON => temporarily turn it ON > create a restore point > turn it back OFF to save space (after you finish your task(s) on the database).
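A minimal sketch of that sequence on BASSD (the restore point name here is just an example):
select flashback_on, log_mode from v$database;
alter database flashback on;
create restore point before_task guarantee flashback database;
-- ... perform the task ...
drop restore point before_task;
alter database flashback off;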
=================================================================================================================================================
ll /u01/app/oracle/scripts
======================================================================================================================
=======VIEW ARCHIVELOG PATH======= RMAN> crosscheck archivelog all;
===================================BACKUP dir===<database_name>/archivelog/autobackup/controlfile02.ctl/flashback/onlinelog/redo01b.log
[e.g /u01/app/FRA/BASST/autobackup] / [ll /u01/app/FRA/BASST/archivelog] / [ll /u01/app/FRA/backup (recover database/controlfile/log file)]
archived log file name=/u01/app/FRA/BASST/archivelog/2015_09_18/o1_mf_1_27239_bz
==============================FIRST STEP in ANY DATABASE/Miscellaneous SQL queries===========================================================
set linesize 250 pagesize 2000
select instance_name,version,status,log_mode,open_mode,flashback_on,database_role from v$instance,v$database;
select to_char(action_time,'DD-MON-YYYY HH:MI:SS AM') patched_on, description,patch_id,action,status,con_id from cdb_registry_sqlpatch;
============================
select username from dba_users where username like 'AB%';   -- 'AB%' => username starts with AB; '%AB' => username ends with AB
=====================================================================================================================
$ORACLE_HOME/OPatch/opatch lsinventory
================
REGISTRY HISTORY:
================
set pages 9999 linesize 250
column action_time format a30
column action format a15
column namespace format a12
column version format a12
column comments format a30
column bundle_series format a14
column comp_name format a12
select instance_name,comp_name,log_mode,database_role,open_mode,flashback_on from v$instance,v$database,dba_registry;
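The COLUMN formats above (namespace, bundle_series, comments) fit the registry history view; a sketch against DBA_REGISTRY_HISTORY:
select to_char(action_time,'DD-MON-YYYY HH24:MI:SS') applied_on,
       action, namespace, version, id, bundle_series, comments
from dba_registry_history
order by action_time;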
====================================================================
CREATE RESTORE POINT[make sure Flashback_on=YES, archivelog=ON]:
===================================================================
create restore point before_spujul2015_09182015 guarantee flashback database;
============================================================================
*View Restore Point:
===================
set linesize 600 pagesize 1000
column GUARANTEE_FLASHBACK_DATABASE format a3
column STORAGE_SIZE format 9999999999
column Name format a15
column RESTORE_POINT_TIME format a20
column TIME format a35
select * from v$restore_point;
============================================================================
12c Patch verification:
============================================================================
select to_char(action_time,'DD-MON-YYYY HH:MI:SS AM') patched_on,description,patch_id,action,status,con_id from cdb_registry_sqlpatch;
======================================================================================================================================
12c Check for One-off patch Conflict@patch location by:
=====================================================
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ (standard)
cat $ORACLE_HOME/OPatch/opatchprereqs/opatch/opatch_prereq.xml (BASS)
===============================================================================
(PATCHING: ALWAYS create a GUARANTEED restore point before you start patching) > stop the listener (lsnrctl stop) > stop OEM (emctl stop dbconsole) > apply the patch at the O/S level > start the listener after patching completes (lsnrctl start) > sql: startup > host > cd $ORACLE_HOME/OPatch > ./datapatch -verbose (i.e. now patching the database)
============================================================================================================================================================================================================================================================================================
APPLYING PATCH 17027533: $ORACLE_HOME/OPatch/opatch napply -skip_subset -skip_duplicate (See "12c Patching" on HP OneNote)
==========================================================================================================================
VERIFYING the Patch Set Update (PSU) on the database server: $ORACLE_HOME/OPatch/opatch lsinventory. After patching completes, now run:
select to_char(action_time,'DD-MON-YYYY HH:MI:SS AM') patched_on,description,patch_id,action,status,con_id from cdb_registry_sqlpatch;
=======================================================================================================================================
*Go back and drop the restore point created (to free space): SQL> drop restore point before_spujul2015_09182015;
=====================================================================================================================
*Set FLASHBACK_ON and ARCHIVELOG back to the status they had prior to your restore point creation/patching.
=============================================================================================================================================================
============================================================================================================
ALERT.log
==========
ls -ltra /u01/app/oracle/diag/rdbms/orcl/orcl1/trace and then scroll to the bottom of the listing to locate the alert log file
$ cat /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log
or: tail -n 100 /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log (outputs the last 100 lines)
or: tail -f /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log (outputs the last 10 lines, then follows the file as new lines are written)
PARAMETERS: show parameter audit_trail or $ cat /u01/app/oracle/scripts/parameters.txt
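If the diag paths are not known up front, V$DIAG_INFO reports them (a sketch; 'Diag Trace' is the standard entry for the trace/alert directory):
select name, value from v$diag_info;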
==========
DELETING archivelog (RMAN)
==========================
rman target /
crosscheck archivelog all;
allocate channel for maintenance device type disk;
delete noprompt archivelog all completed before 'SYSDATE-7';
**NOTE**:Make sure STANDBY is current before you delete archivelogs
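One way to sanity-check that the standby has caught up before deleting (a minimal sketch run on the primary; assumes V$ARCHIVED_LOG's APPLIED column is being populated for the standby destination):
select thread#, max(sequence#) last_generated from v$archived_log group by thread#;
select thread#, max(sequence#) last_applied from v$archived_log where applied = 'YES' group by thread#;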
========================================================================
Hi Bruce,
Find steps performed for spujul2015 patching in DC2LAB.
A. Steps for Rolling Patch on DC2LAB Cluster[ 10.236.28.165(d2lsenpsh165)/10.236.28.166(d2lsenpsh166)]
See the steps, I followed to do the spujul2015 rolling patch in the DC2LAB below:
1. Download and unzip patch p20803576_112030_Linux-x86-64.zip to primary node(10.236.28.165] from Oracle Support
2. cd $ORACLE_HOME/patches (cd /u01/app/oracle/patches)
3. mkdir spuapr2015
4. cd /u01/app/oracle/patches/spuapr2015 > mkdir patch
5. scp / win scp p20803576_112030_Linux-x86-64.zip to /u01/app/oracle/patches/spuapr2015
6. cd /u01/app/oracle/patches/spuapr2015 > unzip patch p20803576_112030_Linux-x86-64.zip
7. Get count of invalid objects using script sh_invalid_objects.sql from /u01/app/oracle/scripts directory
8. If invalid objects, then run at sql prompt ?/rdbms/admin/utlrp.sql script [i.e. SQL>@?/rdbms/admin/utlrp.sql]
9. Execute sh_invalid_objects script to see if there are any more invalid objects. If none, then proceed to 10 below
10. Create restore point for recovery at sql prompt [i.e. sql> create restore point before_spuapr2015 guarantee flashback database; ]
11. Sudo to root and shut down instance and all nodeapps services on primary (d2lsenpsh165) node:
sudo su -
. .godb
crsctl stop crs
12. Apply the patch on primary (d2lsenpsh165) node as follows:
- Set current directory to the directory where the patch is located and then run OPatch utility by entering the following commands:
cd /u01/app/oracle/patches/spuapr2015/patch#
opatch napply -skip_subset -skip_duplicate
13. Once the patch is applied in primary node (d2lsenpsh165), OPatch will prompt you to apply patch on remote node (d2lsenpsh166)
NOTE: Before you continue patching on remote node(d2lsenpsh166) after the prompt, do the following:
-open a new terminal and login to primary node(d2lsenpsh165) to start another session
-start the crs stack on the primary node (d2lsenpsh165) by running (as root): crsctl start crs
-Verify that the services in primary node is fully operational
14.Login to remote node(d2lsenpsh166) in another session and stop crs services as follows:
sudo su -
cd /u01/app/11.2.0.3/grid/bin
. .godb
crsctl stop crs
With all services on the remote node (d2lsenpsh166) still shut down,
15.Return to patching session window on primary node (d2lsenpsh165) and apply the patch to remote node(d2lsenpsh166) responding to prompts
16. Once the patch is applied to the remote node (d2lsenpsh166), restart crs on the d2lsenpsh166 node using the window in which you stopped crs, as follows:
-crsctl start crs
-Allow a couple of minutes for crs to start
-Verify that all services are started
Note: Verify patch applied on either node using OPatch lsinventory
POST spujul2015 PATCH INSTALLATION
==================================
17. Apply the post-patch script to ONLY one node of the cluster. On the primary node (d2lsenpsh165) ONLY, run the catbundle.sql script to load the modified SQL files into the database. As the oracle user, do:
#cd $ORACLE_HOME/rdbms/admin
#sqlplus /nolog
SQL> connect / as sysdba
SQL> @catbundle.sql cpu apply
SQL> quit
**NOTE**catbundle must only be run on one node of the cluster.
18. Check the log files in $ORACLE_HOME/cfgtoollogs/catbundle for any errors:
catbundle_CPU_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_CPU_<database SID>_GENERATE_<TIMESTAMP>.log
where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS
19. Check for invalid objects (run the sh_invalid_objects.sql script and compare the count to the one from step 7)
# scripts
# sql
SQL> @/u01/app/oracle/scripts/sh_invalid_objects.sql
-- if invalid objects ---run
SQL> @?/rdbms/admin/utlrp.sql
SQL> @sh_invalid_objects
20. Check registry history:
from scripts directory on either node:
# sql
SQL> @/u01/app/oracle/scripts/sh_reghist.sql
<< RAC Patching is complete >>
21. Once verification is complete, drop the restore point BEFORE_spuapr2015
# sql
SQL> drop restore point before_spuapr2015;
B. Steps for Standalone(DR) Patch on DC2LAB [ 10.236.28.242(d2lsenpsh242)]
The steps for spujul2015 patching of the Standalone (DR) node are as follows:
1. Download and unzip patch p20803576_112030_Linux-x86-64.zip to DR node(10.236.28.242] from Oracle Support
2. cd $ORACLE_HOME/patches (cd /u01/app/oracle/patches)
3. mkdir spujul2015
4. cd /u01/app/oracle/patches/spujul2015 > mkdir patch
5. scp / win scp p20803576_112030_Linux-x86-64.zip to /u01/app/oracle/patches/spujul2015
6. cd /u01/app/oracle/patches/spujul2015 > unzip patch p20803576_112030_Linux-x86-64.zip
7. Get count of invalid objects using script sh_invalid_objects.sql from /u01/app/oracle/scripts directory
8. If invalid objects, then run at sql prompt ?/rdbms/admin/utlrp.sql script [i.e. SQL>@?/rdbms/admin/utlrp.sql]
9. Execute sh_invalid_objects script to see if there are any more invalid objects. If none, then proceed to 10 below
10. Create restore point for recovery at sql prompt [i.e. sql> create restore point before_spujul2015 guarantee flashback database; ]
11. Shutdown all oracle services [sql>shutdown immediate]
12. Stop all listeners [lsnrctl stop]
13. Apply patch on Standby DR by doing the following:
- Set the current directory to the directory where the patch is located, then run the OPatch utility with the following commands:
cd /u01/app/oracle/patches/spujul2015/patch#
opatch napply -skip_subset -skip_duplicate
14. Once verification is complete, drop the restore point from the STANDBY DR node via: SQL> drop restore point before_spujul2015;
**NOTE** I didn't do catbundle.sql cpu apply on the Standalone node (DR) because it wasn't very explicit to do so from the Oracle Support site. I would need your thoughts here Bruce.
Ken,
Also, on the issue of applying catbundle on standby, you should not do that. Catbundle applied on one node of the cluster is sufficient for the cluster as well as the standby.
Bruce
OEM
====
CREATING NOTIFICATION RULES in OEM
1. SETUP>INCIDENT RULES>CREATE RULE>enter Name of Rule/Description>Select Target(Job/Metric Extensions/Self Update)>Select Target(Database Server/all target(Mission Critical/Production/Staging/Test/Development=>You can specify(+ADD)/EXCLUDE Database(target(s)) you want/don't want RULE to APPLY)>Save
2. You can view/edit Rules set on specific target(database(s)):SETUP>Incident Rules>EDIT Rule(REMEDY Monitoring)>select RULES>EDIT rule>Select Event>Conditional Actions>Review
3. IWMS[Training Database] Notification Rules: EVENTS alerts: Incident Rules>View Rule Set: IWMS [Training Database] Notification Rules>applies to/AlertLog/Tablespace allocation/Tablespace Full/Recovery Area/Archive Area/Database Services/FAST Recovery=>Severity=send CRITICAL Warnings….on Threshold reached or above
*PLATFORMS
To see the different PLATFORMS that host ORACLE databases in your enterprise: ENTERPRISE>CONFIGURATION>INVENTORY and USAGE DETAILS [14 RHEL(v5.11)/8 SUN OS/3 RHEL(v6.6)/1 RHEL(v5.10)]
*SQL PERFORMANCE ANALYZER
To see how system changes impact SQL performance by identifying variations in SQL execution plans and statistics caused by the change. It works by running the SQL statements in a SQL Tuning Set one after another from a single instance session before and after the change (e.g. patching, upgrade, etc.). For each SQL statement executed, SQL Performance Analyzer captures the execution plan and statistics and stores them in the TARGET database.
How TO…: To run the SQL PERFORMANCE ANALYZER: Go To ENTERPRISE>QUALITY MANAGEMENT>SQL PERFORMANCE ANALYZER>SEARCH database Target Name>Select Target Database(e.g. BASSP)>Continue>Login>ADVISOR CENTRAL[ADDM/Maximum availability architecture/Segment Advisor/Streams Performance Advisor/Automatic Undo Management/Memory Advisors/SQL Advisors/Data Recovery Advisor/MTTR Advisor/SQL Performance Analyzer]>Select SQL Performance Analyzer WorkFlow item[Upgrade from 9i or 10.1/Upgrade from 10.2 or 11g/Parameter Change/Optimizer Statistics/Exadata Simulation/Guided WorkFlow]
*DATABASE INSTANCE e.g: BASSD> CHECKER CENTRAL>ADVISOR CENTRAL>Checkers/undo Segment Integrity Check/Redo Integrity Check/DB Structure Integrity Check/CF Block Integrity Check/Data Block Integrity Check/Dictionary Integrity Check/Transaction Integrity Check
*OEM DATABASE PERFORMANCE: Case study database= BASSD
1. CHECK for BLOCKING SESSIONS: BASSD>Performance>Blocking Sessions>/Top Consumers/Duplicate SQL/Instance LOCKS/Instance Activity/SQL Response Time
2. Check for DATABASE REPLAY: Performance>Database Replay
3. Check for SEARCH SESSIONS: Performance>Search Sessions
4. Check for Adaptive Thresholds: Performance>Adaptive Thresholds
5. Check for Real-Time ADDM: Performance>Real-Time ADDM
6. Check for Emergency Monitoring: Performance>Emergency Monitoring
7. Check for Memory Advisor: Performance>Memory Advisor
8. Check for Advisors Home: Performance>Advisors Home
9. Check for AWR: Performance>AWR>AWR Report/AWR Administration/Compare Period ADDM/Compare Period Reports
10. Check for SQL: Performance>SQL>SQL Tuning Advisor/SQL Performance Analyzer/SQL Access Advisor/SQL Tuning Sets/SQL Plan Control/Optimizer Statistics/Cloud Control SQL History/Search SQL/Run SQL/SQL Worksheet
11. Check for SQL Monitoring: Performance>SQL Monitoring
12. Check for ASH Analytics: Performance>ASH Analytics
13. Check for TOP Activity: Performance>Top Activity
*OEM DATABASE ORACLE DATABASE: Case study database= BASSD
1. Home: Oracle Database>Home
2. Monitoring: Oracle Database>Monitoring>User Defined Metrics/All Metrics/Metric and Collection Settings/Metric Collection Errors/Status History/Incident Manager/Alert History/Blackouts
3. Diagnostics: Oracle Database>Diagnostics>Support Workbench/Database Instance Health
4. Control: Oracle Database>Control>Startup/Shutdown/Create Blackout/End Blackout
5. Job Activity: Oracle Database>Job Activity
6. Information Publisher Reports: Oracle Database>Information Publisher Reports
7. Logs: Oracle Database>Logs>Text Alert Logs Contents/Alert Log Errors/Archive/Purge Alert Log/Trace Files
8. Provisioning: Oracle Database>Provisioning>Create Provisioning profile/Create Database Template/Clone Database Home/Clone Database/Upgrade Oracle Home&Database/Upgrade Database/Activity
9. Configuration: Oracle Database>Configuration>Last Collected/Topology/Search/Compare/Comparison Job Activity/History/Save/Saved
10. Compliance: Oracle Database>Compliance>Results/Standard Associations/Real-Time Observations
11. Target Setup: Oracle Database>Target Setup>Enterprise Manager Users/Monitoring Configuration/Administrator Access/Remove Target/Add to Group/Properties
12. Target Information: Oracle Database>Target Information
*OEM DATABASE AVAILABILITY: Case study database= BASSD
1. Check for High Availability Console: Availability>High Availability Console/MAA Advisor/BACKUP & RECOVERY[Schedule Backup/Management Current Backups/Backup Reports/Restore Points/Perform Recovery/Transactions/Backup Settings/Recovery Settings/Recovery Catalog Settings]/Add Standby Database
*OEM DATABASE SCHEMA: Case study database= BASSD
1. Users: Schema>Users
2. Database Objects: Schema>Database Objects>Tables/Indexes/Views/Synonyms/Sequences/Database Links/Directory Objects/Reorganize Objects [desc dba_ob>select * 4m ob]
3. Programs: Schema>Programs/Packages/Package Bodies/Procedures/Functions/Triggers/Java Classes/Java Sources
4. Materialized Views: Schema>Materialized Views>Show all/Logs/Refresh Groups/Dimensions
5. User Defined Types: Schema>User Defined Types>Array Types/Object Types/Table Types
6. Database Export/Import: Schema>Database Export/Import>Transport Tablespaces/Export to Export Files/Import from Export Files/Import from Database/Load Data from User Files/View Export & Import Jobs
7. Change Management: Schema>Change Management>Data Comparisons/Schema Change Plans/Schema Baselines/Schema Comparisons/Schema Synchronizations
8. Data Discovery and Modeling: Schema>Data Discovery and Modeling
9. Data Subsetting: Schema>Data Subsetting
10. Data Masking Definitions: Schema>Data Masking Definition
11. Data Masking Format Library: Schema>Data Masking Format Library
12. XML Database: Schema>XML Database>Configuration/Resources/Access Control Lists/XML Schemas/XML Type Tables/XML Type Views/XML Type Indexes/XML Repository Events
13. Text Manager: Schema>Text Indexes/Query Statistics
14. Workspaces: Schema>Workspaces
*OEM DATABASE ADMINISTRATION: Case study database= BASSD
1. Initialization parameters: Administration>Initialization Parameters
2. Security: Administration>Security>Home/Reports/Users/Roles/Profiles/Audit Settings/Transparent Data Encryption/Oracle Label Security/Virtual Private Database policies/Application Contexts/Enterprise User Security/Database Vault
3. Storage: Administration>Storage>Control Files/Datafiles/Tablespaces/Make Tablespace Locally Managed/Temporary Tablespace Groups/Rollback Segments/Segment Advisor/Automatic Undo Management/Redo Log Groups/Archive Logs
4. Oracle Scheduler: Administration>Oracle Scheduler>Home/Jobs/Job Classes/Schedules/Programs/Windows/Window Groups/Global Attributes/Automated Maintenance Tasks
5. Streams Replication: Administration>Streams Replication>Setup Streams/Manage Replication/Setup Advanced Replication/Manage Advanced Replication/Manage Advanced Queues
6. Migrate to ASM: Administration>Migrate to ASM
7. Resource Manager: Administration>Resource Manager
8. Database Feature Usage: Administration>Database Feature Usage
******************************************************************************************************************************************************
VIEWING INCIDENTS that happened on your DATABASE (e.g. night before)
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Monitoring>Alert History/Incident Manager>/Events without Incidents/My Open incidents & Problems/Unassigned incidents…
CHECK HEALTH of DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Diagnostics>Database Instance Health
SHUTDOWN DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Control>Startup/Shutdown
VIEW ALERT LOG (Errors) on DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Logs>AlertLog Errors
CLONE/UPGRADE a DATABASE
1. Go to TARGETs>DATABASES><database_name>ORACLE DATABASE>Provisioning>Clone Database/Upgrade Database
MONITOR SQL STATEMENTS
1. Go to TARGETs>DATABASES><database_name>PERFORMANCE>SQL Monitoring/SQL>/SQL TUNING/OPTIMIZER Statistics/Run SQL…>BLOCKING SESSIONS
BACKUP & RECOVERY DATABASE
1. Go to TARGETs>DATABASES><database_name>AVAILABILITY>BACKUP & RECOVERY
DATABASE ADMINISTRATION
1. Go to TARGETs>DATABASES><database_name>ADMINISTRATION>Security(Users,Roles,Profiles)>Storage(Control Files,Datafiles,Tablespace,Rollback segments,Archive Logs)
********************************************************************************************************************************************************
OEM TEMPLATES(SQL scripts) for TASKS
1. DASHBOARD: TARGET>Systems>Members>DASHBOARD
2. TEMPLATE: [looking at the metrics of ALL 14 systems/database at once]>(DB_Name)>DASHBOARD[
=========PATCHING STEPS=========================
=======================================================================================================================================================
===============MISCELLANEOUS==================================
BRUCE
=====
[7/31/2015 8:54 AM] Franklin, Bruce:
Ken, gm
[7/31/2015 8:54 AM] Franklin, Bruce:
happy Friday
[7/31/2015 8:54 AM] Chando, Kenneth:
hi Bruce good morning. Thanks Bruce and same to you
[7/31/2015 8:54 AM] Chando, Kenneth:
excellent job...
[7/31/2015 8:54 AM] Franklin, Bruce:
question for you... have you applied that JAVA patch in the lab?
[7/31/2015 8:55 AM] Chando, Kenneth:
I'm about to patch the 165/166 cluster with the OJVN
[7/31/2015 8:55 AM] Chando, Kenneth:
just about to. Finished creating GRP
[7/31/2015 8:55 AM] Chando, Kenneth:
shutting down the database
[7/31/2015 8:55 AM] Franklin, Bruce:
ok, once you are done please send me the steps
[7/31/2015 8:55 AM] Chando, Kenneth:
ok, I will
[7/31/2015 8:59 AM] Chando, Kenneth:
one thing I would like to learn from you Bruce is the Standalone duplicate steps. Not in a hurry. Whenever you're free
[7/31/2015 8:59 AM] Franklin, Bruce:
sure thing
[7/31/2015 8:59 AM] Franklin, Bruce:
we can do that later
[7/31/2015 9:00 AM] Chando, Kenneth:
got you.
[7/31/2015 10:05 AM] Franklin, Bruce:
Ken, you are planning to apply the OJVM patch to ORCLDR standby , correct?
[7/31/2015 10:06 AM] Chando, Kenneth:
yes as well as on .165/.166 cluster
[7/31/2015 10:06 AM] Franklin, Bruce:
ok
[7/31/2015 10:06 AM] Chando, Kenneth:
almost done with cluster
[7/31/2015 10:07 AM] Franklin, Bruce:
how are you coming with getting access on the DHS side?
[7/31/2015 10:07 AM] Chando, Kenneth:
Angela Knouse said, she's waiting on my case closure to PAR approval
[7/31/2015 10:07 AM] Franklin, Bruce:
i am ready to put you to work ;)
[7/31/2015 10:08 AM] Chando, Kenneth:
hahaha...I'm excited...
[7/31/2015 10:09 AM] Franklin, Bruce:
maybe that will be done in time so that you can assist with some of the patching for July SPU and OJVM... i am lining up the schedules with each of my customers
[7/31/2015 10:09 AM] Franklin, Bruce:
give you some good exposure
[7/31/2015 10:11 AM] Chando, Kenneth:
great idea Bruce.
[7/31/2015 12:08 PM] Franklin, Bruce:
hey Ken, question for you...
[7/31/2015 12:08 PM] Chando, Kenneth:
ok sir
[7/31/2015 12:08 PM] Chando, Kenneth:
ride on
[7/31/2015 12:08 PM] Franklin, Bruce:
how much experience do you have with OEM setup?
[7/31/2015 12:09 PM] Franklin, Bruce:
as in the notification piece
[7/31/2015 12:09 PM] Chando, Kenneth:
mostly I have administration support but I'm a fast learner and would be glad if you challenge me with some tasks
[7/31/2015 12:10 PM] Chando, Kenneth:
just finished patching the cluster with OJVN. No issues
[7/31/2015 12:10 PM] Franklin, Bruce:
is it OJVN or OJVM?
[7/31/2015 12:10 PM] Chando, Kenneth:
about to work on the Standalone one after I go to the rest room
[7/31/2015 12:11 PM] Chando, Kenneth:
Will make the steps available to you after I complete the Standalone one. That should be easier since it's just one node
[7/31/2015 12:11 PM] Franklin, Bruce:
do you apply the patch with opatch utility?
[7/31/2015 12:12 PM] Chando, Kenneth:
no worries Bruce. I love it...I am eager to assist you in any way. I know you have alot in your plate
[7/31/2015 12:13 PM] Chando, Kenneth:
feel free to assign them. When I'm stuck, I will always reach back to you
[7/31/2015 12:14 PM] Chando, Kenneth:
will be right back, rushing to the rest room
[7/31/2015 12:18 PM] Chando, Kenneth:
I'm back Bruce
[7/31/2015 3:21 PM] Franklin, Bruce:
hey Ken
[7/31/2015 3:21 PM] Chando, Kenneth:
hi Bruce. patching finished
[7/31/2015 3:22 PM] Franklin, Bruce:
working on the other side, and took a lunch break, too
[7/31/2015 3:22 PM] Chando, Kenneth:
trying to complete the steps
[7/31/2015 3:22 PM] Chando, Kenneth:
wow...so you're energetic to go...Lol
[7/31/2015 3:22 PM] Chando, Kenneth:
just kidding Bruce...
[7/31/2015 3:23 PM] Franklin, Bruce:
ok, you will have the steps documented for applying the SPU and the JAVA patches today?
[7/31/2015 3:23 PM] Chando, Kenneth:
yes, I will...
[7/31/2015 3:24 PM] Chando, Kenneth:
You will get it via email
[7/31/2015 3:24 PM] Franklin, Bruce:
did i ever send you an example of how i do a playbook type document for that?
[7/31/2015 3:25 PM] Chando, Kenneth:
I don't think so
[7/31/2015 3:25 PM] Chando, Kenneth:
wouldn't mind if you make it available
[7/31/2015 3:25 PM] Franklin, Bruce:
it is really simple but helps when working with our Service Account Managers for submitting a change request
[7/31/2015 3:25 PM] Franklin, Bruce:
i will send it to you now via email
[7/31/2015 3:25 PM] Chando, Kenneth:
cool
[7/31/2015 3:29 PM] Franklin, Bruce:
just sent
[7/31/2015 3:30 PM] Franklin, Bruce:
2 playbook files
[7/31/2015 3:30 PM] Chando, Kenneth:
thanks. Just got it
[7/31/2015 3:35 PM] Chando, Kenneth:
Bruce, it's quite similar to the one Lionel sent to me. That's what I have been using too and the steps I'm compiling now might incorporate some components from these playbooks
[7/31/2015 4:08 PM] Chando, Kenneth:
hi Bruce, I just sent the steps I used for DR. I'm still working on the Cluster steps. Will try to finish that by end of day. I'm heading home. Have a great day and a awesome weekend
[7/31/2015 4:08 PM] Franklin, Bruce:
thanks
[7/31/2015 4:09 PM] Chando, Kenneth:
yw!
[7/31/2015 4:09 PM] Franklin, Bruce:
you too
[8/5/2015 11:59 AM] Franklin, Bruce:
hey Ken
[8/5/2015 11:59 AM] Franklin, Bruce:
gm
[8/5/2015 11:59 AM] Chando, Kenneth:
hi Bruce gm
[8/5/2015 11:59 AM] Franklin, Bruce:
finally
[8/5/2015 11:59 AM] Chando, Kenneth:
I'm trying to get the link to the OJVN
[8/5/2015 11:59 AM] Franklin, Bruce:
got off the Remedy bridge call
[8/5/2015 11:59 AM] Chando, Kenneth:
wow...I saw notification that status is back...
[8/5/2015 12:00 PM] Chando, Kenneth:
you made it happen Bruce...Lol
[8/5/2015 12:05 PM] Chando, Kenneth:
https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?_afrLoop=386724019383364&parent=DOCUMENT&patchId=21068553&sourceId=21068553.8&_afrWindowMode=0&_adf.ctrl-state=lazdm31se_165
[8/5/2015 12:05 PM] Chando, Kenneth:
just sent the link to you via email as well
[8/5/2015 12:06 PM] Chando, Kenneth:
Bullet one is correct. I guess there was a typo on bullet 5
[8/5/2015 12:06 PM] Franklin, Bruce:
ok; thank you sir
[8/5/2015 12:06 PM] Chando, Kenneth:
the zip file in bullet 5 is the spu which is not for the OJVN
[8/5/2015 12:06 PM] Chando, Kenneth:
you're welcome!
[8/5/2015 12:10 PM] Franklin, Bruce:
yes; that is why i wanted to clarify; i had previously download the spu and knew that probably wasn't the correct file name
[8/5/2015 12:10 PM] Franklin, Bruce:
i am assembling all my documents to get the RFCs going for patching
[8/5/2015 12:11 PM] Franklin, Bruce:
might see if we can get you involved, at least to shadow me on this round
[8/5/2015 12:12 PM] Franklin, Bruce:
we'll talk with Lionel about that
[8/5/2015 12:25 PM] Franklin, Bruce:
as for the order of patching, do the standard PSU, followed by the ojvm?
[8/5/2015 12:26 PM] Chando, Kenneth:
ok Bruce no worries. Anytime...
[8/5/2015 12:27 PM] Franklin, Bruce:
LOL ... that was question
[8/5/2015 12:27 PM] Franklin, Bruce:
;)
[8/5/2015 12:27 PM] Chando, Kenneth:
hahaha...:)
[8/5/2015 12:27 PM] Chando, Kenneth:
I thought that was information
[8/5/2015 12:27 PM] Chando, Kenneth:
yep...go ahead
[8/5/2015 12:40 PM] Franklin, Bruce:
so, that is the correct order for the patching... the Database PSU July, followed by the JVM PSU July?
[8/5/2015 12:41 PM] Chando, Kenneth:
yes, I did follow that order and had no issues
[8/5/2015 12:41 PM] Franklin, Bruce:
ok; thanks
[8/5/2015 12:41 PM] Chando, Kenneth:
yw
[8/5/2015 2:04 PM] Franklin, Bruce:
are you meeting with us?
[8/5/2015 2:05 PM] Chando, Kenneth:
yes
[8/5/2015 4:48 PM] Chando, Kenneth:
hi Bruce
[8/5/2015 4:49 PM] Chando, Kenneth:
wanted to find out when do you plan to do the OJVN install for me to shadow?
[8/5/2015 4:49 PM] Chando, Kenneth:
is that going to be today?
[8/6/2015 9:51 AM] Franklin, Bruce:
Ken, good morning
[8/6/2015 9:51 AM] Franklin, Bruce:
just saw you text from yesterday
[8/6/2015 9:51 AM] Chando, Kenneth:
gm sir...
[8/6/2015 9:52 AM] Franklin, Bruce:
no install of anything on DHS side until we have an ICCB approved RFC
[8/6/2015 9:52 AM] Chando, Kenneth:
yep, was trying to get a time for which schedule patching will take place so that I can log that in my calendar not to forget
[8/6/2015 9:53 AM] Chando, Kenneth:
ok. So you've put in your RFC and now waiting for approval?
[8/6/2015 9:53 AM] Franklin, Bruce:
target it 8/21 for DNDO JACCIS and I plan to get an email out to the other SAMs today so we can set dates for CBP and EAIR
[8/6/2015 9:53 AM] Franklin, Bruce:
i will let you know
[8/6/2015 9:53 AM] Chando, Kenneth:
thanks Bruce!
[8/6/2015 9:53 AM] Franklin, Bruce:
also, please follow-up on the email i just sent you
[8/6/2015 9:54 AM] Chando, Kenneth:
Just FYI, I realized that DR in DC2LAB is around 22% free on FRA. I checked the archivelogs via RMAN Crosscheck and it's below 7days
[8/6/2015 9:54 AM] Franklin, Bruce:
ok
[8/6/2015 9:55 AM] Franklin, Bruce:
looks like maybe hardware or vm issues with disks
[8/6/2015 9:55 AM] Chando, Kenneth:
ok, will check email now
[8/6/2015 9:55 AM] Franklin, Bruce:
thanks
[8/6/2015 10:02 AM] Chando, Kenneth:
thanks Bruce, I will go ahead and start working on the cluster patch as per OPatch documentation
[8/6/2015 10:21 AM] Franklin, Bruce:
ok, just remember to check that in the future before applying a patch
[8/6/2015 10:21 AM] Chando, Kenneth:
I will Bruce. Thanks for pointing this out
[8/6/2015 10:22 AM] Franklin, Bruce:
otherwise, if we have issues and install with an older version than Oracle supports it will be difficult to get their assistance
[8/6/2015 10:23 AM] Franklin, Bruce:
i believe we are okay on this one since we've not had any issues
[8/6/2015 10:23 AM] Chando, Kenneth:
got you
[8/6/2015 10:31 AM] Franklin, Bruce:
Ken, did you remove the directories you created in $ORACLE_HOME/patches ?
[8/6/2015 10:32 AM] Franklin, Bruce:
for the ojvm and SPU patching
[8/6/2015 10:32 AM] Chando, Kenneth:
no I didn't
[8/6/2015 10:33 AM] Franklin, Bruce:
interesting... i don't see either on the 165 or 242 servers
[8/6/2015 10:33 AM] Chando, Kenneth:
the path I had them was /u01/app/oracle/patches
[8/6/2015 10:34 AM] Chando, Kenneth:
it's there on .165
[8/6/2015 10:35 AM] Chando, Kenneth:
oh, I see, you were looking probably in $ORACLE_HOME instead
[8/6/2015 10:35 AM] Franklin, Bruce:
yes\,. there is already a patches directory in $ORACLE_HOME
[8/6/2015 10:35 AM] Chando, Kenneth:
$ORACLE_HOME/patches I mean to say
[8/6/2015 10:36 AM] Franklin, Bruce:
i guess no one told you
[8/6/2015 10:36 AM] Franklin, Bruce:
lol
[8/6/2015 10:36 AM] Chando, Kenneth:
ok...I will be using that going forward. Per document from Lionel, that was point to /u01/app/oracle/patches
[8/6/2015 10:37 AM] Franklin, Bruce:
we should not expect the new guy to know everything that we know, eh?
[8/6/2015 10:37 AM] Chando, Kenneth:
hahaha...that's why you're there....
[8/6/2015 10:37 AM] Chando, Kenneth:
Thanks so much for guiding me...
[8/6/2015 10:37 AM] Franklin, Bruce:
he should have told you
[8/6/2015 10:37 AM] Franklin, Bruce:
from now on, since Lionel is leaving, we blame everything on him
[8/6/2015 10:37 AM] Franklin, Bruce:
got it?
[8/6/2015 10:38 AM] Franklin, Bruce:
:)
[8/6/2015 10:38 AM] Chando, Kenneth:
hahaha...:)
[8/6/2015 10:38 AM] Chando, Kenneth:
you're funny Bruce...that was quite hilarious
[8/6/2015 10:38 AM] Chando, Kenneth:
got you :)
[8/6/2015 10:38 AM] Franklin, Bruce:
i refuse to let work be boring or too serious
[8/6/2015 10:38 AM] Chando, Kenneth:
great attitude and it helps alot
[8/6/2015 10:38 AM] Franklin, Bruce:
it is a blessing from the Lord and He expects us to enjoy what we do
[8/6/2015 10:39 AM] Chando, Kenneth:
100% agreed
[8/6/2015 10:40 AM] Chando, Kenneth:
The Lord requests us to be deligent in all that we do and I try to keep up to that part even though sometimes one falters
[8/6/2015 10:41 AM] Chando, Kenneth:
live is what one makes it. If one wants joy, then one should make everything s/he does joyful. That's my take...
[8/6/2015 10:41 AM] Franklin, Bruce:
i agree
[8/6/2015 10:41 AM] Chando, Kenneth:
so everything you've been guiding me has always helped to make me joyful.
[8/6/2015 10:42 AM] Chando, Kenneth:
Thanks...I will be trying my best to note this good minute details down so that the next guy who comes to join the team don't make my same mistakes
PATCHING TRICKS
1. Steps: Open RFC > Approval > Install Patch
2. TRICK(S): Day 1: prior to opening the RFC, create a restore point and have the patch downloaded and saved into a DIRECTORY on a NODE. Day 2: unzip > install after RFC approval (see the sketch below)
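A minimal sketch of the two-day trick above, using the spujul2015 directory and patch file names that appear elsewhere in these notes (adjust the names per patch):
Day 1 (prep, no outage):
SQL> create restore point before_spujul2015 guarantee flashback database;
# mkdir -p /u01/app/oracle/patches/spujul2015
# cp /tmp/p20803576_112030_Linux-x86-64.zip /u01/app/oracle/patches/spujul2015
Day 2 (after RFC approval):
# cd /u01/app/oracle/patches/spujul2015
# unzip p20803576_112030_Linux-x86-64.zip
# cd <unzipped patch number directory>
# opatch napply -skip_subset -skip_duplicate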
FLASHBACK
=========
1. Best, FLASHBACK DATABASE (run the following from an RMAN session, e.g. rman target /):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
run
{
FLASHBACK DATABASE TO RESTORE POINT 'MWMS_TRAINING_START';
SQL 'ALTER DATABASE OPEN RESETLOGS';
SQL 'DROP RESTORE POINT MWMS_TRAINING_START';
SQL 'CREATE RESTORE POINT MWMS_TRAINING_START GUARANTEE FLASHBACK DATABASE';
}
EXIT;
2. FLASHBACK SCN
SELECT oldest_flashback_scn, oldest_flashback_time
FROM gv$flashback_database_log;
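A quick sanity-check sketch (standard v$ views) to confirm flashback is enabled and to list existing restore points before and after running the block above:
SQL> select flashback_on from v$database;
SQL> select name, scn, time, guarantee_flashback_database, storage_size
     from v$restore_point;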
VIEWING PATHS: cat .godb, cat .goasm
oracle@D2LSENPSH166[orcl2]# pwd
/home/oracle
oracle@D2LSENPSH166[orcl2]# cat .godb
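The output of cat .godb was not captured above; a file of this kind is typically a small environment script, roughly like the assumed sketch below (SID and home path taken from elsewhere in these notes, so verify against the real file):
# .godb (assumed example contents)
export ORACLE_SID=orcl2
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH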
IMPORTANT STEPS
==============
1. ALWAYS create a restore point or a BACKUP of your controlfile and database prior to doing any upgrade (change)
2. ASM mappings via paths in cat .godb, cat .goasm
3. Map database version paths appropriately in ~/.bash_profile (before restart of the server); see the sketch after this list
4. Know the most recent database backupset number (important for restore)
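For item 3, a hedged sketch of the ~/.bash_profile entries that map the database version paths (paths taken from this document; exact variable names vary by server), plus a quick controlfile backup for item 1:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.3
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
# reload the profile in the current session:
. ~/.bash_profile
-- item 1: quick controlfile backup before any change
SQL> alter database backup controlfile to trace as '/tmp/ctl_before_change.sql';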
COMMANDS
==========
[root@D2LSENPSH212 ~]# hostname
D2LSENPSH212
[root@D2LSENPSH212 ~]# sudo su - oracle
oracle@D2LSENPSH212[openview]# which version
/usr/bin/which: no version in (/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/u01/app/oracle/product/11.2.0.3/bin::/usr/local/bin:/bin:/usr/bin:/u01/app/oracle/product/11.2.0.3/OPatch)
oracle@D2LSENPSH212[openview]# sql
SQL*Plus: Release 11.2.0.3.0 Production on Sun Aug 30 13:53:12 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name from v$database;
NAME
---------
OPENVIEW
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@D2LSENPSH212[openview]# history
4 pwd
5 ping -a D2LSENPSH212
6 pwd
7 cd /tmp
8 ls -l
9 cp *.sql /u01/app/oracle/scripts
10 cd -
11 ls -l
12 who
13 alog
14 pwd
15 cd ..
16 ls
17 mkdir staging
18 ping -a D2LSENPSH212
19 pwd
20 cd staging
21 ls
22 mkdir upgrade
23 cd upgrade
24 pwd
25 pwd
26 mv upgrade /u01/app/oracle
27 ls
28 pwd
29 cd ..
30 mv upgrade /u01/app/oracle
31 cd ../upgrade
32 ls
33 pwd
34 mkdir 11.2.0.3
35 cd *
36 pwd
37 cd /tmp
38 ls -ltr
39 cdp *.zip /u01/app/oracle/upgrade/11.2.0.3
40 cp *.zip /u01/app/oracle/upgrade/11.2.0.3
41 exit
42 ls -l
43 cd /u01/app/oraInventory
44 sql
45 tail -f /u01/app/oraInventory/logs/installActions2014-11-17_06-29-47PM.log
46 exit
47 df -h
48 cd /u01/app
49 ls
50 cd oracle
51 ls
52 cd upgrade
53 ls
54 cd *
55 ls
56 ls -l
57 unzip p10404530_112030_Linux-x86-64_1of7.zip
58 unzip p10404530_112030_Linux-x86-64_2of7.zip
59 ls
60 unzip p10404530_112030_Linux-x86-64_3of7.zip
61 df -h
62 ls
63 view dbupgdiag.sql
64 pwd
65 cd /tmp
66 ls
67 cp dbupgdiag.sql /u01/app/oracle/upgrade/11.2.0.3
68 cp db.rsp /u01/app/oracle/upgrade/11.2.0.3
69 cp utlu112i_5.sql /u01/app/oracle/upgrade/11.2.0.3
70 ls
71 cd -
72 ls
73 sql
74 df -h
75 cd /u01/oradata/openview
76 ls
77 cd -
78 cd -
79 mkdir backup
80 cd backup
81 pwd
82 cd /u01/app/oracle/upgrade/11.2.0.3
83 sql
84 lsnrctl stat
85 lsnrctl stop
86 sql
87 cd $ORACLE_HOME
88 ls
89 cd ..
90 ls
91 mkdir 11.2.0.3
92 ls
93 pwd
94 cd ../upgrade
95 ls
96 cd 11*
97 pwd
98 ls
99 view db.rsp
100 mv db.rsp db_install_11203.rsp
101 pwd
102 ls
103 cd database
104 ls
105 ./runInstaller -silent -noconfig -ignorePrereq -responseFile /u01/app/oracle/upgrade/11.2.0.3/db_install_11203.rsp
106 pwd
107 ls
108 cd ..
109 ls
110 sql
111 sql
112 cd
113 ls -la
114 cp -p .bash_profile .bash_profilebkp
115 ps -ef |grep -i ora
116 view .bash_profile
117 . .bash_profile
118 cd/etc
119 cd /etc
120 ls
121 view oratab
122 pwd
123 cd /u01/app/oracle/product/11.2.0.3
124 cd dbs
125 ls
126 cd ../network/admin
127 pwd
128 ls
129 ls
130 view listener.ora
131 view sqlnet.ora
132 view tnsnames.ora
133 echo $ORACLE_HOME
134 cd $ORACLE_HOME/rdbms/admin
135 pwd
136 ls -l catupgrd.sql
137 ps -ef |grep -i pmon
138 lsnrctl stat
139 sql
140 ps -ef |grep -i pmon
141 sql
142 cd -
143 cd /u01/app/oracle/upgrade/11.2.0.3
144 ls
145 sql
146 alog
147 sql
148 ps -ef |grep -i pmon
149 pwd
150 ls
151 scp db_install_11203.rsp D2LSENPSH143:/tmp
152 scp db_install_11203.rsp root@D2LSENPSH143:/tmp
153 exit
154 cd /u01/app/oracle/product/11.2.0.3
155 ls
156 sql
157 cd $ORACLE_HOME/dba
158 ls
159 cd $ORACLE_HOME/dbs
160 ls
161 ls -ltr
162 mv OPENVIEW.ora initOPENVIEW.ora
163 ls -ltr
164 cd /u01/app/oracle/product/11.2.0.2
165 cd dbs
166 ls
167 cp *.ora /u01/app/oracle/product/11.2.0.3
168 cp ora* /u01/app/oracle/product/11.2.0.3
169 cd ../network/admin
170 ls
171 pwd
172 cp *.ora /u01/app/oracle/product/11.2.0.3/network/admin
173 cd
174 ls -la
175 . ..bash_profile
176 . .bash_profile
177 echo $ORACLE_HOME
178 cd /u01/app/oracle/product/11.2.0.3/dbs
179 ls
180 cp ora* /u01/app/oracle/product/11.2.0.3/dbs
181 pwd
182 ls
183 cd ..
184 ls
185 cd /u01/app/oracle/product/11.2.0.2/dbs
186 ls
187 cp ora* /u01/app/oracle/product/11.2.0.3/dbs
188 cp *.ora /u01/app/oracle/product/11.2.0.3/dbs
189 cd ../../
190 pwd
191 cd ../upgrade/11.2.0.3
192 pwd
193 ls
194 cd
195 cat .bash_profile
196 cd -
197 ls
198 ls
199 ls -l
200 ping -a D2LSENPSH212
201 sql
202 ps -ef |grep -i pmon
203 cd /tmp
204 ls -ltr
205 cd -
206 cd ../../
207 ls
208 mkdir patches
209 cd patches
210 mkdir spuoct2014
211 cd *
212 pwd
213 cd /tmp
214 cp p19271438_112030_Linux-x86-64.zip /u01/app/oracle/patches/spuoct2014
215 cd -
216 ls
217 unzip p19271438_112030_Linux-x86-64.zip
218 ls
219 cd 19271438
220 ls
221 cat README.txt
222 pwd
223 opatch napply -skip_subset -skip_duplicate
224 cd $ORACLE_HOME/rdbms/admin
225 sql
226 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2014Nov17_22_16_15.log
227 ps -ef |grep -i pmon
228 lsnrctl start
229 lsnrctl stat
230 lsnrctl stat
231 ps -ef |grep -i pmon
232 ps -ef |grep -i pmon
233 lsnrctl stat
234 tnsping openview
235 echo $TNS_ADMIN
236 cd $TNS_ADMIN
237 ls
238 cat tnsnames.ora
239 tnsping ov_net
240 df -h
241 cd
242 cat .bash_profile
243 cd /tmp
244 ls
245 scp dbupgdiag.sql root@D2LSENPSH143:/tmp
246 cp dbup*.sql /u01/app/oracle/upgrade/11.2.0.3
247 cd cd /u01/app/oracle/patches
248 ls
249 cd /u01/app/oracle/
250 cd patches
251 ls
252 cd *
253 ls
254 scp p19271438_112030_Linux-x86-64.zip D2LSENPSH143:/tmp
255 scp p19271438_112030_Linux-x86-64.zip root@D2LSENPSH143:/tmp
256 sql
257 cd
258 view .bash_profile
259 . .bash_profile
260 alog
261 df -h
262 exit
263 cd /u01/app/oracle/patches
264 ls
265 cd *
266 ls
267 scp p19271438_112030_Linux-x86-64.zip lionel.charles@D2LSEUTSH032.localdomain/tmp
268 scp p19271438_112030_Linux-x86-64.zip lionel.charles@D2LSEUTSH032:/tmp
269 cd
270 ls -la
271 cat .bash_profile
272 cd /u01/app/oracle/patches/spuoct2014
273 ls -l
274 scripts
275 ls -ltr
276 sql
277 ls -ltr
278 cat sh_tsdf.sql
279 ls -ltr
280 view sh_tsdf.sql
281 exit
282 sql
283 exit
284 sqlplus opc_op/opc_op@openview
285 grep 1521 /etc/services
286 sqlplus opc_op/opc_op@listener
287 pwd
288 cd network
289 cd /u01/app/
290 dir
291 cd oracle/product/11.2.0.3/
292 dir
293 wpd
294 pwd
295 cd network
296 cd admin
297 dir
298 ll
299 more tnsnames.ora
300 sqlplus opc_op/opc_op@connect_data
301 sqlplus -s
302 sqlplus -s
303 sqlplus -s
304 sqlplus -s
305 sqlplus -s
306 sqlplus opc_op/opc_op@openview
307 sqlplus
308 ll
309 cd /etc/opt/OV/share/conf/OpC/mgmt_sv/report
310 cd /etc/opt/OV/share/conf/OpC/mgmt_sv/
311 cd reports/C
312 dir
313 pwd
314 sqlplus -h
315 pwd
316 vi unmanaged.sql
317 sqlplus
318 REM ***********************************************************************
319 REM File: all_nodes.sql
320 REM Description: SQL*Plus report that shows all nodes in the node bank
321 REM Language: SQL*Plus
322 REM Package: HP OpenView Operations for Unix
323 REM
324 REM (c) Copyright Hewlett-Packard Co. 1993 - 2004
325 REM ***********************************************************************
326 column nn_node_name format A80 truncate
327 column label format A25 truncate
328 column nodetype format A12
329 column isvirtual format A3
330 column licensetype format A3
331 column hb_flag format A4
332 column hb_type format A6
333 column hb_agent format A3
334 set heading off
335 set echo off
336 set linesize 150
337 set pagesize 0
338 set feedback off
339 select ' HPOM Report' from dual;
340 select ' -----------' from dual;
341 select ' ' from dual;
342 select 'Report Date: ',substr(TO_CHAR(SYSDATE,'DD-MON-YYYY'),1,20) from dual;
343 select ' ' from dual;
344 select 'Report Time: ',substr(TO_CHAR(SYSDATE,'HH24:MI:SS'),1,20) from dual;
345 select ' ' from dual;
346 select 'Report Definition:' from dual;
347 select '' from dual;
348 select ' User: opc_adm' from dual;
349 select ' Report Name: Nodes Overview' from dual;
350 select ' Report Script: /etc/opt/OV/share/conf/OpC/mgmt_sv/reports/C/unmanaged_nodes.sql' from dual;
351 select ' ' from dual;
352 select ' ' from dual;
353 select ' <--Heartbeat-->' from dual;
354 select 'Node Machine Type Node Type Lic Vir Flag Type Agt' from dual;
355 select '-------------------------------------------------------------------------------- ------------------------- ------------ --- --- ---- ------ ---' from dual;
356 select
357 nn.node_name as nn_node_name,
358 nm.machine_type_str as label,
359 DECODE(no.node_type, 0, 'Not in Realm', 1, 'Unmanaged', 2,
360 'Controlled', 3, 'Monitored', 4, 'Msg Allowed', 'Unknown') as nodetype,
361 DECODE(no.license_type, 0, 'NO', 1, 'NO', 2, 'NO', 'YES') as licensetype,
362 DECODE(no.is_virtual, 0, 'NO', 1, 'YES', 'YES') as isvirtual,
363 DECODE(no.heartbeat_flag, 0, 'NO', 'YES ') as hb_flag,
364 DECODE(mod(no.heartbeat_type,4), 0, 'None', 1, 'RPC', 2, 'Ping',
365 'Normal') as hb_type,
366 DECODE(floor(no.heartbeat_type/4), 0, 'NO', 'YES') as hb_agent
367 from
368 opc_nodes no,
369 opc_node_names nn,
370 opc_net_machine nm
371 where
372 no.node_id = nn.node_id
373 and nn.network_type = nm.network_type
374 and no.machine_type = nm.machine_type
375 and no.node_type = 1
376 order by
377 nn_node_name;
378 select
379 np.pattern as nn_node_name,
380 'Node for ext. events' as label,
381 DECODE(no.node_type, 0, 'Not in Realm', 1, 'Unmanaged', 2,
382 'Controlled', 3, 'Monitored', 4, 'Msg Allowed ', 'Unknown') as nodetype,
383 DECODE(no.license_type, 0, 'NO', 1, 'NO', 2, 'NO', 'YES') as licensetype,
384 '---','--- ', '------','---'
385 from
386 opc_nodes no,
387 opc_node_pattern np
388 where
389 no.node_id = np.pattern_id
390 and no.node_type = 1
391 order by
392 nn_node_name;
393 quit;
394 aqlplus
395 sqlplus
396 sqlplus
397 exit
398 sqlplus opc_op/opc_op@//d2lsenpsh212:1521/openview
399 sqlplus opc_op/opc_op@openview
400 sqlplus
401 exit
402 dir
403 sqlplus
404 exit
405 sqlplus
406 exit
407 sqlplus
408 cd /etc/init.d
409 dir
410 ./ovoracle status
411 ovoracle start
412 exit
413 dir
414 dir
415 ll =a
416 dir
417 ls -l
418 ls -al
419 vi .bash_profile
420 exit
421 vi .bash_profile
422 vi OVTrcSrv
423 pwd
424 cd /etc/init.d
425 dir
426 vi ovoracle
427 ./ovoracle
428 ovoracle start_msg
429 ovoracle start
430 exit
431 echo $ORACLE_HOME
432 exit
433 sqlplus
434 cd /u01/app/oracle/product/
435 dir
436 cd 11.2.0.3
437 dir
438 vi initOPENVIEW.ora
439 vi initopenview.ora
440 vi init.ora
441 vi /u01/oradata/openview/control03.ctl
442 echo $PATH
443 vi /etc/oratab
444 sqlplus
445 ex
446 sqlplus
447 pwd
448 ll
449 ./sqlplus
450 cd sqlplus
451 dir
452 ll
453 cd bin
454 dir
455 idr
456 ll
457 cd ../
458 dir
459 ll
460 cd admin
461 idr
462 ll
463 dir
464 cd ../
465 ll
466 cd ../
467 ll
468 cd network
469 dir
470 ll
471 cd admin
472 dir
473 ll
474 more listener.ora
475 ll
476 more shrept.lst
477 ls
478 ll
479 more sqlnet.ora
480
481 e
482 more /u01/app/oracle/product/11.2.0.3/network/log
483 pwd
484 cd ../log
485 ll
486 cd ../admin
487 dir
488 ll
489 more tnsnav.ora
490
491 ll
492 more tnsnames.ora
493
494 lsnrctl start
495 exit
496 dir
497 vi .bash_profile
498 sqlplus
499 lsnrctl status
500 lsnrctl stop
501 lsnrctl start
502 exit
503 ls
504 lsnrctl status
505 more /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
506 tail -50 /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
507 exit
508 lsnrctl status
509 llsnrctl stop
510 lsnrctl status
511 lsnrctl stop
512 lsnrctl start
513 lsnrctl stop
514 lsnrctl status
515 cd /u01/app/oracle/product/
516 ls
517 cd 11.2.0.2
518 dir
519 cd network/
520 dir
521 ll
522 lsnrctl status
523 l
524 ll
525 cd admin
526 dir
527 ll
528 vi listener.ora
529 pwd
530 cd ../../11.2.0.3
531 pwd
532 cd ../../../11.2.0.3
533 dir
534 cd admin
535 cd admin
536 ls
537 cd network
538 cd admin
539 ll
540 vi listener.ora
541 cd
542 ll -al
543 vi .bash_profile
544 cd /opt/OV/OMU/adminUI/
545 exit
546 lsnrctl start
547 pwd
548 exit
549 lsnrcltl status
550 lsnrcltl status
551 lsnrctl status
552 exit
553 exit
554 sqlplus
555 exit
556 sqlplus
557 opcsv -status
558 exit
559 sqlplus
560 vi /etc/hosts
561 exit
562 sqlplus
563 exit
564 sqlplus / as sysdba
565 sql
566 opcsv -start
567 exit
568 sql
569 exit
570 pwd
571 scripts
572 ls
573 alog
574 rman taget /
575 rman target /
576 sql
577 rman target /
578 df -h
579 sql
580 alog
581 sql
582 sqlplus
583 exit
584 lsnrctl start
585 more /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
586 tail -200 /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert/log.xml
587 sql
588 pwd
589 exit
590 lsnrctl -status
591 ps -ef | grep 1521
592 exit
593 lsnrctl start
594 exit
595 lsnrctl status
596 lsnrctl start
597 lsnrctl status
598 alog
599 lsnrctl
600 cd /u01/app/oracle/product/11.2.0.3/network/admin/
601 ls -al
602 vi listener.ora
603 cd /u01/app/oracle/product/11.2.0.3/network/log
604 LS -AL
605 ls -al
606 pwd
607 ls -al
608 lsnrctl start
609 cd /u01/app/oracle/diag/tnslsnr/D2LSENPSH212/listener/alert
610 ls -al
611 more log.xml
612 ls -al
613 vi log.xml
614 lsnrctl status
615 lsnrctl start
616 vi /etc/hosts
617 alog
618 lsnrctl
619 ls -al /etc/hosts
620 vi /etc/hosts
621 more /etc/hosts
622 lsnrctl start
623 snrctl status
624 lsnrctl status
625 exit
626 ps -ef |grep -i 11.2.0.3
627 ps -ef |grep -i 11.2.0.2
628 cd /u01/app/oracle/product
629 ls
630 mv 11.2.0.2 11.2.0.2_tobedeleted
631 exit
632 ls
633 vi .bash_profile
634 more .bash_profile
635 more echo "$ORACLE_DB"
636 echo "$ORACLE_DB"
637 echo "$ORACLE_DB" | tr -s '[:upper:]' '[:lower:]'
638 echo $bdump
639 cd /u01/app/oracle/diag/rdbms/
640 ls
641 cd openview/
642 ls
643 cd openview/
644 ls
645 cd trace/
646 dir
647 ll
648 more openview_vktm_9942.trc
649 ll
650 more openview_vktm_9942.trm
651 more openview_vktm_4572.trm
652
653 t
654 exit
655 sql
656 lsnrctl status
657 ls
658 ll
659 sql
660 exit
661 sql
662 exit
663 sql
664 sqlplus
665 exit
666 sql SYSDBA
667 sqlplus / as sysdba
668 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
669 ll
670 ll | grep "Jan 20"
671 tail alert_openview.log
672 sqlplus / as sysdba
673 pwd
674 ll | grep "Jan 20"
675 date
676 ll /u01/app/oracle/product/11.2.0.3/dbs/
677 ll /u01/app/oracle/product/11.2.0.3/srvm/admin/
678 cd /u01/oradata/
679 ll
680 cd openview/
681 ll
682 date
683 ll
684 ls
685 vi control01.ctl
686 pwd
687 cd /u01/app/oracle/admin/
688 ll
689 cd openview/
690 cd ud
691 ll
692 cd create/
693 ll
694 cd ../
695 ll
696 cd arch/
697 ll
698 cd ../
699 ll
700 cd pfile/
701 ll
702 vi initopenview.ora
703 ll
704 cd ../
705 ll
706 ll
707 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
708 ll
709 ls
710 ll | more
711
712 cd /u01/app/oracle/diag/rdbms/openview/tr
713 cd /u01/app/oracle/diag/rdbms/openview/
714 ll
715 cd openview/
716 ll
717 cd trace/
718 ll
719 ll | more
720 find / -name init.ora
721 exit
722 cd $ORACLE_HOMm
723 cd $ORACLE_HOME
724 ll
725 ls
726 cd admin
727 ls
728 pef
729 pwd
730 cd
731 pwd
732 cd /u01/app/
733 ls
734 cd ora
735 cd oracle/
736 l
737 ls
738 ll
739 cd admin
740 ll
741 cd openview/
742 ll
743 cd pfile/
744 ll
745 vi initopenview.ora
746 ll
747 ll
748 dir
749 cd ../
750 ll
751 cd create/
752 ll
753 pwd
754 cd /u01/app/oracle/diag/rdbms/openview/openview/trace
755 ll
756 ll
757 ll | more
758 ll | more
759 more cdmp_20150102144837/
760 cd cdmp_20150102144837/
761 ll
762 cd ..
763 ll
764 ls
765 ls
766 pwd
767 cd /u01/app/oracle
768 sql
769 exit
770 ps -ef |grep -i pmon
771 sudo su -
772 sudo su -
773 exit
774 cd /u01/app/oracle/patches
775 ls
776 mkdir spujan2015
777 cd spujan2015
778 pwd
779 ls
780 unzip p19854461_112030_Linux-x86-64.zip
781 ls
782 cd 19854461
783 sql
784 lsnrctl stop
785 ps -ef |grep -i ora
786 opatch napply -skip_subset -skip_duplicate
787 cd $ORACLE_HOME/rdbms/admin
788 sql
789 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2015Feb18_15_58_28.log
790 sql
791 lsnrctl start
792 lsnrctl stat
793 alog
794 df -h
795 exit
796 sql
797 sqlplus
798 sql
799 cd /u01/app
800 ls
801 cd oracle/
802 ls
803 cd product/
804 l
805 ll
806 cd 11.2.0.3/
807 ll
808 cd network/
809 ls
810 cd admin/
811 ll
812 more sqlnet.ora
813 more /u01/app/oracle/product/11.2.0.3/network/log
814 ll /u01/app/oracle/product/11.2.0.3/network/log
815 cd ../
816 ll
817 cd admin
818 ll
819 more tnsnav.ora
820
821 ll
822 more tnsnames.ora
823 ll
824 more shrept.lst
825
826 ll
827 moe listener.ora
828 more listener.ora
829 ll /u01/app/oracle/product/11.2.0.3/network/log
830 ll
831 find / -name \*trace\*
832 esit
833 exit
834 rman
835 alog
836 sql
837 df -h
838 alog
839 sql
840 alog
841 sql
842 alog
843 who
844 cd /u01/oradata/openview/backup
845 ls -l
846 cd *
847 ls -l
848 cd *
849 ls -l
850 cd 2014_11_17
851 ls -l
852 cd ../2015_02_07
853 ls -l
854 cd ..
855 ls -=lt
856 ls -lt
857 rm -rf 2014*
858 ls -lt
859 cd 2015_01_31
860 ls
861 cd ../2015_01_15
862 ls
863 cd ../2015_01_01
864 ls -l
865 cd ../2015_01_06
866 ls
867 cd ../2015_01_02
868 ls
869 cd ../2015_01_01
870 ls
871 df -h .
872 sql
873 lsntrl
874 sql
875 exit
876 df -h
877 scripts
878 sql
879 alog
880 sql
881 sql
882 opatch lsinventory
883 opatch lsinventory
884 alog
885 oerr ora 1543
886 exit
887 cd /u01/app/oracle/patches
888 ls
889 mkdir spuapr2015
890 cd spuapr2015
891 ls
892 unzip p20299010_112030_Linux-x86-64.zip
893 df -h
894 exit
895 cd /u01/app/oracle
896 ls
897 df -h
898 cd /u01/oradata/openview
899 ls
900 cd backup
901 ls
902 cd *
903 ls
904 cd flashback
905 ls
906 ls -l
907 pwd
908 du -h .
909 pwd
910 cd /u01/app/oracle/em*/*_inst
911 cd bin
912 ./emctl start agent
913 alog
914 exit
915 lsnrctl stop
916 cd /u01/app/oracle/patches
917 ls
918 cd spuapr2015
919 ls
920 cd 20299010
921 sql
922 ps -ef |grep -i orac
923 cd /u01/app/oracle/em*
924 cd age*
925 ls
926 cd agent_inst/bin
927 ./emctl stop agent
928 ps -ef |grep -i ora
929 cd /u01/app/oracle/patches
930 ls
931 cd spuapr2015/20*
932 pwd
933 ls
934 lsnrctl stat
935 opatch napply -skip_subset -skip_duplicate
936 cd $ORACLE_HOME/rdbms/admin
937 sql
938 view /u01/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_OPENVIEW_APPLY_2015Apr30_18_21_59.log
939 lsnrctl start
940 lsnrctl stop
941 alog
942 sql
943 lsnrctl start
944 sql
945 exit
946 patches
947 cd /u01/app/oracle/patches
948 ssh 10.236.28.32
949 ssh D2LSEUTSH032
950 exit
951 sql
952 who
953 lsnrctl stop
954 sql
955 cd
956 view .bash_profile
957 . .bash_profile
958 patches
959 mkdir spujul2015
960 cd spujul2015
961 mkdir ojvm
962 ping -a D2LSENPSH212
963 pwd
964 ls
965 unzip p20803576_112030_Linux-x86-64.zip
966 cd ojvm
967 ls
968 unzip p21068553_112030_Linux-x86-64.zip
969 cd $ORACLE_HOME
970 ls -l
971 mv OPatch OPatch_Nov172014
972 unzip p6880880_112000_Linux-x86-64.zip
973 ls -l
974 cd -
975 cd ..
976 ls
977 cd 20803576
978 sql
979 alog
980 date
981 lsnrctl stat
982 sql
983 pwd
984 opatch napply -skip_subset -skip_duplicate
985 cd $ORACLE_HOME/rdbms/admin
986 sql
987 cd ojvm
988 cd -
989 cd ojvm
990 cd ../ojvm
991 ls
992 cd 21*
993 ls
994 opatch apply
995 cd $ORACLE_HOME/sqlpatch/21068553
996 sql
997 lsnnrctl start
998 lsnrctl start
999 who
1000 exit
1001 which version
1002 sql
1003 history
oracle@D2LSENPSH212[openview]#
STEPS for DATABASE CHANGE IMPLEMENTATION
=====================================
1. OPEN an RFC in remedy
2. Request approval for infrastructure change from ICCB (Infrastructure Change Control Board)=>approval gotten
3. DBA sends out an email to all stakeholders of the affected SERVER/DATABASE (e.g. Unix team, APPLICATION Support team) to notify them of the upcoming change
4. DBA asks the APPLICATION TEAM to shut down all their applications on the server/database > APPs TEAM notifies the DBA when done so the DBA can go ahead
5. DBA acts on the APPs team's go-ahead to EFFECT/IMPLEMENT the CHANGE (e.g. applying OJVM patching) > DBA verifies that the server/database is working perfectly after the change
6. DBA then notifies the different stakeholders of the Server/Database (e.g. APPs TEAM, UNIX Team) to test their applications and make sure everything is back up and running perfectly after the patch
7. APPs team confirms to the DBA whether all is working well or not (via email)
NOTE: AFTER the change has been implemented by the DBA (e.g. patching), take a screenshot or copy-paste the registry history highlighting the change
NOTE: (from TRB-Bruce, for the NPPD customer) a MICROSOFT PATCH doesn't usually specify whether a REBOOT is needed or NOT for the servers during PATCHING
=>That's why we first TEST the patch in a TEST env/TEST Lab > test the PATCH in the GSS env (owned by HP) > before applying the PATCH in COMPONENT (production)
ll /u01/app/oracle/scripts
=======================================================================================================================================================
================PERFORMANCE TUNING======================================
PERFORMANCE TUNING
==================
1. How would you approach database performance: By identifying bottlenecks and fixing them
2. How do you force the optimizer to use a new plan: By first enabling baseline capture using: alter session set optimizer_capture_sql_plan_baselines = true; (see the plan baseline sketch after this list)
3. Difference between local and global index: A global index is a one-to-many relationship, allowing one index partition to map to many table partitions, while a local index is a one-to-one mapping between an index partition and a table partition
4. What is the difference between db file sequential read and db file scattered read?: db file sequential read is a wait on a single-block physical read (typically index access) into one buffer, while db file scattered read is a wait on a multiblock physical read (typically a full table scan or fast full index scan) whose blocks are scattered into non-contiguous buffers in the SGA buffer cache
5. Difference between nested loop joins and hash joins: Hash joins cannot look up rows in the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can, because the inner row source is probed once for each outer row
6. What factors do you consider when creating indexes on tables? How do you select the column for an index?: • Column selectivity, and whether the column is used frequently in WHERE-clause and join predicates
• The DML overhead of maintaining the index, and the leading-column order when building a composite index
7. If you were involved at the early stages of database development and coding, what are some of the measures you would suggest for optimal performance?
1. Get candid feedback from users. Determine the performance project's scope and subsequent performance goals, as well as performance goals for the future. This process is key in future capacity planning.
2. Get a full set of operating system, database, and application statistics from the system when the performance is both good and bad. If these are not available, then get whatever is available. Missing statistics are analogous to missing evidence at a crime scene: They make detectives work harder and it is more time-consuming.
3. Sanity-check the operating systems of all systems involved with user performance. By sanity-checking the operating system, you look for hardware or operating system resources that are fully utilized. List any over-used resources as symptoms for analysis later. In addition, check that all hardware shows no errors or diagnostics.
4. Check for the top ten most common mistakes with Oracle, and determine if any of these are likely to be the problem. List these as symptoms for later analysis. These are included because they represent the most likely problems. ADDM automatically detects and reports nine of these top ten issues. See Chapter 6, "Automatic Performance Diagnostics" and "Top Ten Mistakes Found in Oracle Systems".
5. Build a conceptual model of what is happening on the system using the symptoms as clues to understand what caused the performance problems. See "A Sample Decision Process for Performance Conceptual Modeling".
6. Propose a series of remedy actions and the anticipated behavior to the system, then apply them in the order that can benefit the application the most. ADDM produces recommendations each with an expected benefit. A golden rule in performance work is that you only change one thing at a time and then measure the differences. Unfortunately, system downtime requirements might prohibit such a rigorous investigation method. If multiple changes are applied at the same time, then try to ensure that they are isolated so that the effects of each change can be independently validated.
8. Is creating an index online possible?: YES
9. What is the difference between Redo, Rollback and Undo?: Redo log files record changes to the database as a result of transactions and internal Oracle server actions; undo (rollback) data records the before-images needed to roll those changes back. The terms undo and rollback segment are used interchangeably in the database world, mostly for compatibility with older Oracle releases.
What is Row Chaining and Row Migration?: A row is chained when it is too large to fit into a single data block and is stored across multiple blocks; a row is migrated when an update makes it too large for its original block, so the row is moved to another block and a pointer is left behind in the original block (see the chained-rows sketch after this list)
10. How to find out background processes?: select sid, process, program
from v$session s join v$bgprocess using (paddr)
where s.status = 'ACTIVE'
and rownum < 5;
11. How to find background processes from OS:$ ps -ef|grep ora_|grep SID
12. How do you troubleshoot connectivity issues?: Verify that the path to TNS_ADMIN is set correctly and that all the connection identifiers (SIDs/service names) exist in the tnsnames.ora file
13. Why are bind variables important?: Bind variables have a huge impact on reducing stress in the shared pool, because they avoid hard parsing each distinct literal. Can you force literals to be converted into bind variables?: YES, by setting CURSOR_SHARING=FORCE (see the bind-variable sketch after this list)
14. What is adaptive cursor sharing? It allows the optimizer to generate a set of plans that are optimal for different sets of bind values
15. In Data Pump, if you restart a job in Data Pump, how it will know from where to resume?: By attaching the name of the job to be resumed. That is: expdp system/manager attach="Job_Name"
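For question 2, a short sketch showing one way to capture and check SQL plan baselines at the session level (parameter and view names are standard; the exact workflow here is illustrative):
SQL> alter session set optimizer_capture_sql_plan_baselines = true;
SQL> -- run the statement of interest at least twice so a baseline is created
SQL> alter session set optimizer_capture_sql_plan_baselines = false;
SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;
SQL> show parameter optimizer_use_sql_plan_baselines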
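For the row chaining/migration question, a sketch of how chained or migrated rows can be detected (SCOTT.EMP is only a hypothetical example table):
SQL> select name, value from v$sysstat where name = 'table fetch continued row';
SQL> @?/rdbms/admin/utlchain.sql
SQL> -- utlchain.sql creates the CHAINED_ROWS table used below
SQL> analyze table scott.emp list chained rows into chained_rows;
SQL> select count(*) from chained_rows;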
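For question 13, a minimal SQL*Plus sketch contrasting a literal with a bind variable (SCOTT.EMP again used only as a hypothetical table); for existing applications that cannot be changed, CURSOR_SHARING=FORCE converts literals into system-generated binds:
SQL> select ename from scott.emp where empno = 7369;
SQL> -- every distinct literal above forces a new hard parse
SQL> variable v_empno number
SQL> exec :v_empno := 7369
SQL> select ename from scott.emp where empno = :v_empno;
SQL> -- one shared cursor is reused for all values of the bind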
1. How would you approach database performance: http://docs.oracle.com/cd/B19306_01/server.102/b14211/technique.htm#i11146
Oracle performance methodology involves identifying bottlenecks and fixing them. It is recommended that changes be made to a system only after you have confirmed that there is a bottleneck. Performance problems generally result from either a lack of throughput, unacceptable user/job response time, or both
Before looking at any database or operating system statistics, it is crucial to get feedback from the most important components of the system: the users of the system and the people ultimately paying for the application. Typical user feedback includes statements like the following:
• "The online performance is so bad that it prevents my staff from doing their jobs."
• "The billing run takes too long."
• "When I experience high amounts of Web traffic, the response time becomes unacceptable, and I am losing customers."
• "I am currently performing 5000 trades a day, and the system is maxed out. Next month, we roll out to all our users, and the number of trades is expected to quadruple."
From candid feedback, it is easy to set critical success factors for any performance work. Determining the performance targets and the performance engineer's exit criteria make managing the performance process much simpler and more successful at all levels. These critical success factors are better defined in terms of real business goals rather than system statistics.
Some real business goals for these typical user statements might be:
• "The billing run must process 1,000,000 accounts in a three-hour window."
• "At a peak period on a Web site, the response time will not exceed five seconds for a page refresh."
• "The system must be able to process 25,000 trades in an eight-hour window."
The ultimate measure of success is the user's perception of system performance. The performance engineer's role is to eliminate any bottlenecks that degrade performance. These bottlenecks could be caused by inefficient use of limited shared resources or by abuse of shared resources, causing serialization. Because all shared resources are limited, the goal of a performance engineer is to maximize the number of business operations with efficient use of shared resources. At a very high level, the entire database server can be seen as a shared resource. Conversely, at a low level, a single CPU or disk can be seen as shared resources.
The Oracle performance improvement method can be applied until performance goals are met or deemed impossible. This process is highly iterative, and it is inevitable that some investigations will be made that have little impact on the performance of the system. It takes time and experience to develop the necessary skills to accurately pinpoint critical bottlenecks in a timely manner. However, prior experience can sometimes work against the experienced engineer who neglects to use the data and statistics available to him. It is this type of behavior that encourages database tuning by myth and folklore. This is a very risky, expensive, and unlikely to succeed method of database tuning.
The Automatic Database Diagnostic Monitor (ADDM) implements parts of the performance improvement method and analyzes statistics to provide automatic diagnosis of major performance issues. Using ADDM can significantly shorten the time required to improve the performance of a system. See Chapter 6, "Automatic Performance Diagnostics" for a description of ADDM.
Steps in The Oracle Performance Improvement Method
Perform the following initial standard checks:
1. Get candid feedback from users. Determine the performance project's scope and subsequent performance goals, as well as performance goals for the future. This process is key in future capacity planning.
2. Get a full set of operating system, database, and application statistics from the system when the performance is both good and bad. If these are not available, then get whatever is available. Missing statistics are analogous to missing evidence at a crime scene: They make detectives work harder and it is more time-consuming.
3. Sanity-check the operating systems of all systems involved with user performance. By sanity-checking the operating system, you look for hardware or operating system resources that are fully utilized. List any over-used resources as symptoms for analysis later. In addition, check that all hardware shows no errors or diagnostics.
4. Check for the top ten most common mistakes with Oracle, and determine if any of these are likely to be the problem. List these as symptoms for later analysis. These are included because they represent the most likely problems. ADDM automatically detects and reports nine of these top ten issues. See Chapter 6, "Automatic Performance Diagnostics" and "Top Ten Mistakes Found in Oracle Systems".
5. Build a conceptual model of what is happening on the system using the symptoms as clues to understand what caused the performance problems. See "A Sample Decision Process for Performance Conceptual Modeling".
6. Propose a series of remedy actions and the anticipated behavior to the system, then apply them in the order that can benefit the application the most. ADDM produces recommendations each with an expected benefit. A golden rule in performance work is that you only change one thing at a time and then measure the differences. Unfortunately, system downtime requirements might prohibit such a rigorous investigation method. If multiple changes are applied at the same time, then try to ensure that they are isolated so that the effects of each change can be independently validated.
7. Validate that the changes made have had the desired effect, and see if the user's perception of performance has improved. Otherwise, look for more bottlenecks, and continue refining the conceptual model until your understanding of the application becomes more accurate.
8. Repeat the last three steps until performance goals are met or become impossible due to other constraints
ADDM
For a quick and easy approach to performance tuning, use the Automatic Database Diagnostic Monitor (ADDM). ADDM automatically monitors your Oracle system and provides recommendations for solving performance problems should problems occur. For example, suppose a DBA receives a call from a user complaining that the system is slow. The DBA simply examines the latest ADDM report to see which of the recommendations should be implemented to solve the problem. See Chapter 6, "Automatic Performance Diagnostics" for information on the features that help monitor and diagnose Oracle systems
MANUAL PERFORMANCE TUNING DIAGNOSIS
The following steps illustrate how a performance engineer might look for bottlenecks without using automatic diagnostic features. These steps are only intended as a guideline for the manual process. With experience, performance engineers add to the steps involved. This analysis assumes that statistics for both the operating system and the database have been gathered.
1. Is the response time/batch run time acceptable for a single user on an empty or lightly loaded system?
If it is not acceptable, then the application is probably not coded or designed optimally, and it will never be acceptable in a multiple user situation when system resources are shared. In this case, get application internal statistics, and get SQL Trace and SQL plan information. Work with developers to investigate problems in data, index, transaction SQL design, and potential deferral of work to batch/background processing.
2. Is all the CPU being utilized?
If the kernel utilization is over 40%, then investigate the operating system for network transfers, paging, swapping, or process thrashing. Otherwise, move on to CPU utilization in user space. Check whether any non-database jobs, such as backups, file transforms, or print queues, are consuming CPU on the system and limiting the amount of shared CPU resources. After determining that the database is using most of the CPU, investigate the top SQL by CPU utilization. These statements form the basis of all future analysis. Check the SQL and the transactions submitting the SQL for optimal execution. Oracle provides CPU statistics in V$SQL and V$SQLSTATS.
See Also:
Oracle Database Reference for more information on V$SQL and V$SQLSTATS
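As a minimal sketch of pulling the top SQL by CPU, assuming query access on V$SQLSTATS:
-- Top 10 SQL statements by CPU time since they were loaded
SELECT sql_id, cpu_time, elapsed_time, executions, sql_text
FROM   (SELECT sql_id, cpu_time, elapsed_time, executions, sql_text
        FROM   v$sqlstats
        ORDER  BY cpu_time DESC)
WHERE  ROWNUM <= 10;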
If the application is optimal and there are no inefficiencies in the SQL execution, consider rescheduling some work to off-peak hours or using a bigger system.
3. At this point, the system performance is unsatisfactory, yet the CPU resources are not fully utilized.
In this case, you have serialization and unscalable behavior within the server. Get the WAIT_EVENTS statistics from the server, and determine the biggest serialization point. If there are no serialization points, then the problem is most likely outside the database, and this should be the focus of investigation. Elimination of WAIT_EVENTS involves modifying application SQL and tuning database parameters. This process is very iterative and requires the ability to drill down on the WAIT_EVENTS systematically to eliminate serialization points.
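A minimal sketch of finding the biggest serialization points system-wide, assuming the WAIT_CLASS column is available (10g and later) so idle waits can be excluded:
-- Biggest non-idle wait events since instance startup
SELECT event, total_waits, time_waited, average_wait
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;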
Top Ten Mistakes Found in Oracle Systems
This section lists the most common mistakes found in Oracle systems. By following the Oracle performance improvement methodology, you should be able to avoid these mistakes altogether. If you find these mistakes in your system, then re-engineer the application where the performance effort is worthwhile. See "Automatic Performance Tuning Features" for information on the features that help diagnose and tune Oracle systems. See Chapter 10, "Instance Tuning Using Performance Views" for a discussion on how wait event data reveals symptoms of problems that can be impacting performance.
1. Bad Connection Management
The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance, and is totally unscalable.
2. Bad Use of Cursors and the Shared Pool
Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact in performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.
3. Bad SQL
Bad SQL is SQL that uses more resources than appropriate for the application requirement. This can be a decision support systems (DSS) query that runs for more than 24 hours or a query from an online application that takes more than a minute. SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high load SQL and the SQL tuning advisor can be used to provide recommendations for improvement. See Chapter 6, "Automatic Performance Diagnostics" and Chapter 12, "Automatic SQL Tuning".
4. Use of Nonstandard Initialization Parameters
These might have been implemented based on poor advice or incorrect assumptions. Most systems will give acceptable performance using only the set of basic parameters. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great many problems that require considerable investigation.
Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.
See Also:
• Oracle Database Administrator's Guide for information on initialization parameters and database creation
• Oracle Database Reference for details on initialization parameters
• "Performance Considerations for Initial Instance Configuration" for information on parameters and settings in an initial instance configuration
5. Getting Database I/O Wrong
Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth. See Chapter 8, "I/O Configuration and Design".
6. Redo Log Setup Problems
Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and the database will wait for the archive process to catch up. See Chapter 4, "Configuring a Database for Performance" for information on sizing redo logs for performance.
7. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.
This is particularly common on INSERT-heavy applications, in applications that have raised the block size above 8K, or in applications with large numbers of active users and few rollback segments. Use automatic segment-space management (ASSM) and automatic undo management to solve this problem.
8. Long Full Table Scans
Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.
9. High Amounts of Recursive (SYS) SQL
Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Use locally managed tablespaces to reduce recursive SQL due to extent allocation. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.
10. Deployment and Migration Errors
In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.
Although these errors are not directly detected by ADDM, ADDM highlights the resulting high load SQL.
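A minimal sketch of exporting schema statistics with DBMS_STATS before such a migration; the schema name SH and the staging table name MY_STATS are placeholders:
BEGIN
  -- Create a staging table to hold the statistics
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SH', stattab => 'MY_STATS');
  -- Copy the schema statistics into the staging table
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SH', stattab => 'MY_STATS');
END;
/
-- The staging table can then be moved to the target database (for example with Data Pump)
-- and loaded with DBMS_STATS.IMPORT_SCHEMA_STATS to keep plans stable.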
3.2 Emergency Performance Methods
This section provides techniques for dealing with performance emergencies. You have already had the opportunity to read about a detailed methodology for establishing and improving application performance. However, in an emergency situation, a component of the system has changed to transform it from a reliable, predictable system to one that is unpredictable and not satisfying user requests.
In this case, the role of the performance engineer is to rapidly determine what has changed and take appropriate actions to resume normal service as quickly as possible. In many cases, it is necessary to take immediate action, and a rigorous performance improvement project is unrealistic.
After addressing the immediate performance problem, the performance engineer must collect sufficient debugging information either to get better clarity on the performance problem or to at least ensure that it does not happen again.
The method for debugging emergency performance problems is the same as the method described in the performance improvement method earlier in this book. However, shortcuts are taken in various stages because of the timely nature of the problem. Keeping detailed notes and records of facts found as the debugging process progresses is essential for later analysis and justification of any remedial actions. This is analogous to a doctor keeping good patient notes for future reference.
3.2.1 Steps in the Emergency Performance Method
The Emergency Performance Method is as follows:
1. Survey the performance problem and collect the symptoms of the performance problem. This process should include the following:
• User feedback on how the system is underperforming. Is the problem throughput or response time?
• Ask the question, "What has changed since we last had good performance?" This answer can give clues to the problem. However, getting unbiased answers in an escalated situation can be difficult. Try to locate some reference points, such as collected statistics or log files, that were taken before and after the problem.
• Use automatic tuning features to diagnose and monitor the problem. See "Automatic Performance Tuning Features" for information on the features that help diagnose and tune Oracle systems. In addition, you can use Oracle Enterprise Manager performance features to identify top SQL and sessions.
2. Sanity-check the hardware utilization of all components of the application system. Check where the highest CPU utilization is, and check the disk, memory usage, and network performance on all the system components. This quick process identifies which tier is causing the problem. If the problem is in the application, then shift analysis to application debugging. Otherwise, move on to database server analysis.
3. Determine if the database server is constrained on CPU or if it is spending time waiting on wait events. If the database server is CPU-constrained, then investigate the following:
• Sessions that are consuming large amounts of CPU at the operating system and database levels; check V$SESS_TIME_MODEL for database CPU usage
• Sessions or statements that perform many buffer gets at the database level; check V$SESSTAT and V$SQLSTATS
• Execution plan changes causing sub-optimal SQL execution; these can be difficult to locate
• Incorrect setting of initialization parameters
• Algorithmic issues as a result of code changes or upgrades of all components
If the database sessions are waiting on events, then follow the wait events listed in V$SESSION_WAIT to determine what is causing serialization. The V$ACTIVE_SESSION_HISTORY view contains a sampled history of session activity which can be used to perform diagnosis even after an incident has ended and the system has returned to normal operation. In cases of massive contention for the library cache, it might not be possible to log on or submit SQL to the database. In this case, use historical data to determine why there is suddenly contention on this latch. If most waits are for I/O, then examine V$ACTIVE_SESSION_HISTORY to determine the SQL being run by the sessions that are performing all of the inputs and outputs. See Chapter 10, "Instance Tuning Using Performance Views" for a discussion on wait events.
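A minimal sketch of that last step, assuming the Diagnostics Pack is licensed: find the SQL responsible for most of the sampled User I/O over the last 30 minutes.
SELECT sql_id, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  wait_class = 'User I/O'
AND    sample_time > SYSTIMESTAMP - INTERVAL '30' MINUTE
GROUP  BY sql_id
ORDER  BY samples DESC;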
4. Apply emergency action to stabilize the system. This could involve actions that take parts of the application off-line or restrict the workload that can be applied to the system. It could also involve a system restart or the termination of jobs in progress. These naturally have service level implications.
5. Validate that the system is stable. Having made changes and restrictions to the system, validate that the system is now stable, and collect a reference set of statistics for the database. Now follow the rigorous performance method described earlier in this book to bring back all functionality and users to the system. This process may require significant application re-engineering before it is complete.
From <http://docs.oracle.com/cd/B19306_01/server.102/b14211/technique.htm>
2. How do you force the optimizer to use a new plan: http://www.oracle.com/technetwork/issue-archive/2009/09-mar/o29spm-092092.html
TECHNOLOGY: SQL
Baselines and Better Plans
By Arup Nanda
Use SQL plan management in Oracle Database 11g to optimize execution plans.
Have you ever been in a situation in which some database queries that used to behave well suddenly started performing poorly? More likely than not, you traced the cause back to a change in the execution plan. Further analysis may have revealed that the performance change was due to newly collected optimizer statistics on the tables and indexes referred to in those queries.
And thoroughly humbled by this situation, have you ever made a snap decision to stop statistics collection? This course of action keeps the execution plans pretty much the same for those queries, but it makes other things worse. Performance of some other queries, or even the same queries with different predicates (the WHERE clauses), deteriorates because of suboptimal execution plans generated from stale statistics.
Whatever action you take next carries some risk, so how can you mitigate that risk and ensure that the execution plans for the SQL statements generated are optimal while maintaining a healthy environment in which optimizer statistics are routinely collected and all SQL statements perform well without significant changes (such as adding hints)? You may resort to using stored outlines to freeze the plan, but that also means that you're preventing the optimizer from generating potentially beneficial execution plans.
In Oracle Database 11g, using the new SQL plan management feature, you can now examine how execution plans change over time, have the database verify new plans by executing them before using them, and gradually evolve better plans in a controlled manner.
SQL Plan Management
When SQL plan management is enabled, the optimizer stores generated execution plans in a special repository, the SQL management base. All stored plans for a specific SQL statement are said to be part of a plan history for that SQL statement.
Some of the plans in the history can be marked as "accepted." When the SQL statement is reparsed, the optimizer considers only the accepted plans in the history. This set of accepted plans for that SQL statement is called a SQL plan baseline, or baseline for short.
The optimizer still tries to generate a better plan, however. If the optimizer does generate a new plan, it adds it to the plan history but does not consider it while reparsing the SQL, unless the new plan is better than all the accepted plans in the baseline. Therefore, with SQL plan management enabled, SQL statements will never suddenly have a less efficient plan that results in worse performance.
With SQL plan management, you can examine all the available plans in the plan history for a SQL statement, compare them to see their relative efficiency, promote a specific plan to accepted status, and even make a plan the permanent (fixed) one.
This article will show you how to manage SQL plan baselines—including capturing, selecting, and evolving baselines—by using Oracle Enterprise Manager and SQL from the command line to ensure the optimal performance of SQL statements.
Capture
The capture function of SQL plan management captures the various optimizer plans used by SQL statements. By default, capture is disabled—that is, SQL plan management does not capture the history for the SQL statements being parsed or reparsed.
Now let's capture the baselines for some SQL statement examples coming from one session. We will use a sample schema provided with Oracle Database 11g, SH, and the SALES table in particular.
First, we enable the baseline capture in the session:
alter session
set optimizer_capture_sql_plan_baselines = true;
Now all the SQL statements executed in this session will be captured, along with their optimization plans, in the SQL management base. Every time the plan changes for a SQL statement, it is stored in the plan history. To see this, run the script shown in Listing 1, which executes exactly the same SQL but under different circumstances. First, the SQL runs with all the defaults (including an implicit default optimizer_mode = all_rows). In the next execution, the optimizer_mode parameter value is set to first_rows. Before the third execution of the SQL, we collect fresh stats on the table and the indexes
Code Listing 1: Capturing SQL plan baselines
alter session set optimizer_capture_sql_plan_baselines = true;
-- First execution. Default Environment
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
-- Change the optimizer mode
alter session set optimizer_mode = first_rows;
-- Second execution. Opt Mode changed
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
-- Gather stats now
begin
dbms_stats.gather_table_stats (
ownname => 'SH',
tabname => 'SALES',
cascade => TRUE,
no_invalidate => FALSE,
method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
granularity => 'GLOBAL AND PARTITION',
estimate_percent => 10,
degree => 4
);
end;
/
-- Third execution. After stats
select * /* ARUP */ from sales
where quantity_sold > 1 order by cust_id;
If the plan changes in each of the executions of the SQL in Listing 1, the different plans will be captured in the plan history for that SQL statement. (The /* ARUP */ comment easily identifies the specific SQL statements in the shared pool.)
The easiest way to view the plan history is through Oracle Enterprise Manager. From the Database main page, choose the Server tab and then click SQL Plan Control. From that page, choose the SQL Plan Baseline tab. On that page, search for the SQL statements containing the name ARUP, as in Figure 1, which shows the plan history for the SQL statements on the lower part of the screen.
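The same plan history can also be viewed from the command line; a minimal sketch, assuming query access on DBA_SQL_PLAN_BASELINES and the /* ARUP */ comment used above:
SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%ARUP%';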
3. Difference between local and global index:
Oracle Global Index vs. Local Index
Question: What is the difference between an Oracle global index and a local index?
Answer: When using Oracle partitioning, you can specify the “global” or “local” parameter in the create index syntax:
• Global Index: A global index is a one-to-many relationship, allowing one index partition to map to many table partitions. The docs say that a "global index can be partitioned by the range or hash method, and it can be defined on any type of partitioned, or non-partitioned, table".
• Local Index: A local index is a one-to-one mapping between an index partition and a table partition. In general, local indexes allow for a cleaner "divide and conquer" approach for generating fast SQL execution plans with partition pruning.
For complete details, see my tips for Oracle partitioning.
Global and Local Index partitioning with Oracle
The first partitioned index method is called a LOCAL partition. A local partitioned index creates a one-for-one match between the indexes and the partitions in the table. Of course, the key value for the table partition and the value for the local index must be identical. The second method is called GLOBAL and allows the index to have any number of partitions.
The partitioning of the indexes is transparent to all SQL queries. The great benefit is that the Oracle query engine will scan only the index partition that is required to service the query, thus speeding up the query significantly. In addition, the Oracle parallel query engine will sense that the index is partitioned and will fire simultaneous queries to scan the indexes.
Local partitioned indexes
Local partitioned indexes allow the DBA to take individual partitions of a table and indexes offline for maintenance (or reorganization) without affecting the other partitions and indexes in the table.
In a local partitioned index, the key values and number of index partitions will match the number of partitions in the base table.
CREATE INDEX year_idx
on all_fact (order_date)
LOCAL
(PARTITION name_idx1,
 PARTITION name_idx2,
 PARTITION name_idx3);
Oracle will automatically use equal partitioning of the index based upon the number of partitions in the indexed table. For example, in the above definition, if we had named four index partitions while all_fact had only three table partitions, the CREATE INDEX would fail since the partitions do not match. This equal partitioning also makes index maintenance easier, since a single partition can be taken offline and the index rebuilt without affecting the other partitions in the table.
Global partitioned indexes
A global partitioned index is used for all other indexes except for the one that is used as the table partition key. Global indexes are useful in OLTP (online transaction processing) applications, where fewer index probes are required than with local partitioned indexes. In the global index partition scheme, the index is harder to maintain since the index may span partitions in the base table.
For example, when a table partition is dropped as part of a reorganization, the entire global index will be affected. When defining a global partitioned index, the DBA has complete freedom to specify as many partitions for the index as desired.
Now that we understand the concept, let's examine the Oracle CREATE INDEX syntax for a globally partitioned index:
CREATE INDEX item_idx
on all_fact (item_nbr)
GLOBAL PARTITION BY RANGE (item_nbr)
(PARTITION city_idx1 VALUES LESS THAN (100),
 PARTITION city_idx2 VALUES LESS THAN (200),
 PARTITION city_idx3 VALUES LESS THAN (300),
 PARTITION city_idx4 VALUES LESS THAN (400),
 PARTITION city_idx5 VALUES LESS THAN (MAXVALUE));
Here, we see that the item index has been defined with five partitions, each containing a subset of the index range values. Note that it is irrelevant that the base table is in three partitions. In fact, it is acceptable to create a global partitioned index on a table that does not have any partitioning.
From <http://www.dba-oracle.com/t_global_local_partitioned_index.htm>
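A minimal sketch of confirming which of the indexes created above is LOCAL and which is GLOBAL, assuming they exist in the current schema:
SELECT index_name, table_name, partitioning_type, locality
FROM   user_part_indexes
WHERE  index_name IN ('YEAR_IDX', 'ITEM_IDX');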
<https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5931711000346922149>
QUESTION: Which Index is Better Global Or Local in Partitioned Table?
You Asked
We have a partitioned table based on a date column, say startdate (interval partitioned, one partition per day).
We will use queries that generate reports based on days (like a report for the previous 5 days).
We also use queries that generate reports based on hours (like a report for the previous 5 hours).
So there are queries that will access data within a partition as well as across partitions.
So please suggest whether we should go for a global or a local index on startdate.
and we said...
well, if you are going to cross partitions - hitting 5 days' worth of data - hopefully you would NOT be using an index at all. Hopefully you would be using a full scan of the five partitions since you are hitting every row.
If all of your queries include "startdate" in the predicate and you think you'll typically hit only a few partitions at most - it is likely you want to employ locally partitioned indexes for most all of your indexes.
And startdate doesn't need to be in all of these indexes (they do not need to be prefixed with startdate). Only when you are going after the previous N hours might you want an index that starts with startdate.
for example, suppose you have queries like:
select ....
from t
where startdate between sysdate-5 and sysdate
and x > 100;
select ....
from t
where startdate between sysdate-2 and sysdate
and x > 100;
it MIGHT make sense to have a locally partitioned index on X, just on X. If x > 100 returns a very small number of rows from those five partitions then an index on X and just on X would be appropriate. We will do five index range scans (which is acceptable) to find the rows.
For the second query we would just do two index range scans (again, acceptable).
You would want a globally partitioned index on X if you did queries like:
select ....
from t
where startdate between sysdate-50 and sysdate
and x > 100;
select ....
from t
where x > 100;
From <https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5931711000346922149>
4. What is the difference between DB file sequential read and DB File Scattered Read? http://www.dba-oracle.com/m_cpu_time_execution.htm
The db file sequential read wait event has three parameters: file#, first block#, and block count. In Oracle Database 10g, this wait event falls under the User I/O wait class. Keep the following key thoughts in mind when dealing with the db file sequential read wait event.
• The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
• The two important numbers to look for are the TIME_WAITED and AVERAGE_WAIT by individual sessions.
• Significant db file sequential read wait time is most likely an application issue.
• From <http://logicalread.solarwinds.com/oracle-db-file-sequential-read-wait-event-part1-mc01/>
WHILE …
"The db file scattered Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return. A db file scattered read issues a scatter-read to read the data into multiple discontinuous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.
The db file scattered read wait event identifies that a full table scan is occurring. When performing a full table scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory. This is why the corresponding wait event is called 'db file scattered read'. Multiblock (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) reads due to full table scans into the buffer cache show up as waits for 'db file scattered read'."
Furthermore, Oracle FAQs explains that "'db file scattered read' events signify time waited for I/O read requests to complete. Time is reported in 100ths of a second for Oracle 8i releases and below, and in 1000ths of a second for Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache or user PGA memory." Also, the difference between db file scattered read and db file sequential read is that a db file scattered read "is reading multiple data blocks and scatters them into different discontinuous buffers in the SGA."
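A minimal sketch of seeing both wait events side by side on a live system: list the sessions currently waiting on single-block (sequential) versus multiblock (scattered) reads, along with the file and block they are waiting for.
-- P1 = file#, P2 = starting block#, P3 = number of blocks
SELECT sid, event, p1 AS file#, p2 AS block#, seconds_in_wait
FROM   v$session_wait
WHERE  event IN ('db file sequential read', 'db file scattered read');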
5. Difference between nested loop joins and hash joins: http://blog.tanelpoder.com/2010/10/06/a-the-most-fundamental-difference-between-hash-and-nested-loop-joins/
• Hash joins cannot look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can.
Nested loops, Hash join and Sort Merge joins – difference?
Nested loop (loop over loop): http://oracle-online-help.blogspot.com/2007/03/nested-loops-hash-join-and-sort-merge.html
In this algorithm, an outer loop is formed over a row source consisting of a few entries, and then for each entry an inner loop is processed.
Ex:
Select tab1.*, tab2.* from tab1, tab2 where tab1.col1=tab2.col2;
It is processed like:
For i in (select * from tab1) loop
For j in (select * from tab2 where col2=i.col1) loop
Display results;
End loop;
End loop;
The Steps involved in doing nested loop are:
a) Identify the outer (driving) table.
b) Assign the inner (driven) table to the outer table.
c) For every row of the outer table, access the rows of the inner table.
In execution plan it is seen like this:
NESTED LOOPS
outer_loop
inner_loop
When does the optimizer use nested loops?
The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time with a new value from the outer table.
The optimizer may not use a nested loop when:
1. The number of rows in both tables is quite high
2. The inner query always results in the same set of records
3. The access path of the inner table is independent of the data coming from the outer table.
Note: You will see more use of nested loops when using the FIRST_ROWS optimizer mode, as it works on the model of showing instantaneous results to the user as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join, caching is needed, as explained below.
Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory and then scans the larger table, comparing the hash value (of rows from the large table) with this hash table to find the joined rows.
The algorithm of a hash join is divided into two parts:
1. Build an in-memory hash table on the smaller of the two tables.
2. Probe this hash table with the hash value of each row of the second table.
In simpler terms it works like
Build phase
For each row RW1 in small (left/build) table loop
Calculate hash value on RW1 join key
Insert RW1 in appropriate hash bucket.
End loop;
Probe Phase
For each row RW2 in big (right/probe) table loop
Calculate the hash value on RW2 join key
For each row RW1 in the matching hash bucket loop
If RW1 joins with RW2
Return RW1, RW2
End if;
End loop;
End loop;
When does the optimizer use a hash join?
The optimizer uses a hash join when joining big tables, or when a big fraction of the rows of a table must be joined.
Unlike a nested loop, the output of a hash join is not instantaneous, because the hash join is blocked until the hash table has been built.
Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results only after all the rows of at least one of the tables have been hashed into the hash table.
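A minimal sketch of comparing the two join methods, using the classic demo tables emp and dept as placeholders: force each method with a hint and display the plan with DBMS_XPLAN.
EXPLAIN PLAN FOR
SELECT /*+ USE_NL(e) */ d.dname, e.ename
FROM   dept d JOIN emp e ON e.deptno = d.deptno;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
SELECT /*+ USE_HASH(e) */ d.dname, e.ename
FROM   dept d JOIN emp e ON e.deptno = d.deptno;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);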
6. What factors do you consider when creating indexes on tables? How do you select the column for an index? = desc DBA_IND_COLUMNS
• SQL> desc dba_ind_columns
Name Null? Type
----------------------------------------- -------- ----------------------------
INDEX_OWNER NOT NULL VARCHAR2(30)
INDEX_NAME NOT NULL VARCHAR2(30)
TABLE_OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
COLUMN_NAME VARCHAR2(4000)
COLUMN_POSITION NOT NULL NUMBER
COLUMN_LENGTH NOT NULL NUMBER
CHAR_LENGTH NUMBER
DESCEND VARCHAR2(4)
From <https://community.oracle.com/thread/1099106>
When you are creating covering index you should keep in mind some guidelines:
• Non-key columns are defined in the INCLUDE clause of the CREATE INDEX statement.
• Non-key columns can only be defined on non-clustered indexes on tables or indexed views.
• All data types are allowed except text, ntext, and image.
• Computed columns that are deterministic and either precise or imprecise can be included columns.
• As with key columns, computed columns derived from image, ntext, and text data types can be non-key (included) columns as long as the computed column data type is allowed as a non-key index column.
• Column names cannot be specified in both the INCLUDE list and in the key column list.
• Column names cannot be repeated in the INCLUDE list.
• A maximum of 1023 additional columns can be used as non-key columns (a table can have a maximum of 1024 columns).
Performance benefit gained by using covering indexes is typically great for queries that return a large number of rows (by the way, these queries are called non-selective queries). For queries that return only a small number of rows, the performance benefit is small. But here you can ask: what is a small number of rows? A small number of rows could be 10 rows for a table with hundreds of rows, or 1,000 rows for a table with 1,000,000 rows.
Building Indexes in Ascending vs Descending Order
When you are creating indexes, the default options are often used. These options create the index in ascending order. This is usually the most logical way of creating an index, but in some cases this approach wouldn't be the best. For example, when you create an index on ColumnA of TableA using the default options, the newest data is at the end. This works perfectly when you want to get data in ascending order, from the least recent at the top to the most recent at the end. But what if you need to get the most recent data at the top? In this case you can create the index in descending order. In a few following examples I will show you how to create indexes in a different order and how they can affect the performance of queries. For all following examples I will use the PurchasingOrderHeader table of the AdventureWorks2008R2 database.
From <http://www.codeproject.com/Articles/234399/Database-performance-optimization-part-Indexing>
7. If you were involved at the early stages of database development and coding, what are some of the measures you would suggest for optimal performance?
8. Is creating an index online possible? http://docs.oracle.com/cd/B28359_01/server.111/b28310/indexes003.htm
You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
The following statements illustrate online index build operations:
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
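A rebuild of an existing index can also be done online in the same way; a minimal sketch, reusing the emp_name index created above:
ALTER INDEX emp_name REBUILD ONLINE;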
Note:
Keep in mind that the time that it takes an online index build to complete is proportional to the size of the table and the number of concurrently executing DML statements. Therefore, it is best to start online index builds when DML activity is low.
See Also:
"Rebuilding an Existing Index"
9. What is the difference between Redo, Rollback and Undo? https://oraclenz.wordpress.com/2008/06/22/what-is-the-difference-between-rollback-and-undo-tablespace-otn-forum-by-user-user503050/
REDO
Redo log files record changes to the database as a result of transactions and internal Oracle server actions. (A transaction is a logical unit of work, consisting of one or more SQL statements run by a user.)
Redo log files protect the database from the loss of integrity because of system failures caused by power outages, disk failures, and so on.
Redo log files must be multiplexed to ensure that the information stored in them is not lost in the event of a disk failure.
The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The LogWriter (LGWR) process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion.
<https://oraclenz.wordpress.com/2008/06/22/differences-between-undo-and-redo/>
There can be confusion because the terms undo and rollback segment are used interchangeably in the database world. This is largely due to Oracle's backward compatibility.
Undo
Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.
Undo records are used to:
• Roll back transactions when a ROLLBACK statement is issued
• Recover the database
• Provide read consistency
• Analyze data as of an earlier point in time by using Flashback Query
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
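A minimal sketch of the Flashback Query use of undo, assuming a hypothetical employees table and sufficient undo retention: query the data as it looked ten minutes ago.
SELECT *
FROM   employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE  employee_id = 100;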
Undo vs Rollback
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends (Oracle9i onwards) using an undo tablespace (automatic undo management) to manage undo rather than rollback segments.
To see the undo management mode and other undo-related information for the database:
SQL> show parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
Since Oracle9i, the less time-consuming and recommended way is Automatic Undo Management, in which Oracle Database creates and manages rollback segments (now called "undo segments") in a special-purpose undo tablespace. Unlike with rollback segments, we do not need to create or manage individual undo segments; Oracle Database does that for you when you create the undo tablespace. All transactions in an instance share a single undo tablespace. Any executing transaction can consume free space in the undo tablespace, and when the transaction completes, its undo space is freed (depending on how it has been sized and a few other factors, like undo retention). Thus, space for undo segments is dynamically allocated, consumed, freed, and reused, all under the control of Oracle Database rather than manual management by someone.
Switching Rollback to Undo
1. We have to create an Undo tablespace. Oracle provides a function (10g and up) that provides information on how to size new undo tablespace based on the configuration and usage of the rollback segments in the system.
-- (SET SERVEROUTPUT ON to see the result)
DECLARE
utbsiz_in_MB NUMBER;
BEGIN
utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size (MB): ' || utbsiz_in_MB);
END;
/
CREATE UNDO TABLESPACE UNDOTBS
DATAFILE '/oradata/dbf/undotbs_1.dbf'
SIZE 100M AUTOEXTEND ON NEXT 10M
MAXSIZE UNLIMITED RETENTION NOGUARANTEE;
Note: In undo tablespace creation, "SEGMENT SPACE MANAGEMENT AUTO" cannot be specified
2. Change system parameters
SQL> alter system set undo_retention=900 scope=both;
SQL> alter system set undo_tablespace=UNDOTBS scope=both;
SQL> alter system set undo_management=AUTO scope=spfile;
SQL> shutdown immediate
SQL> startup
UNDO_MANAGEMENT is a static parameter. So database needs to be restarted.
What is Row Chaining and Row Migration?
10. How to find out background processes? http://dba.stackexchange.com/questions/41142/how-to-check-which-background-process-are-running-in-my-oracle-database
select sid, process, program
from v$session s join v$bgprocess using (paddr)
where s.status = 'ACTIVE'
and rownum < 5;
SID PROCESS PROGRAM
---------- ------------------------ ----------------------------------------------------------------
2 1332 ORACLE.EXE (PMON)
3 480 ORACLE.EXE (PSP0)
4 976 ORACLE.EXE (VKTM)
5 992 ORACLE.EXE (GEN0)
Elapsed: 00:00:00.05
To maximize performance and accommodate many users, a multiprocess Oracle database system uses background processes. Background processes are the processes running behind the scenes and are meant to perform certain maintenance activities or to deal with abnormal conditions arising in the instance. Each background process is meant for a specific purpose and its role is well defined.
Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database processes to provide increased parallelism for better performance and reliability.
A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the pname column.
Not all background processes are mandatory for an instance. Some are mandatory and some are optional. Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be invoked only if that particular feature is activated.
Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, these run as separate threads within the same service. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log.
Background processes are started automatically when the instance is started.
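Following the definition above (a non-null PNAME in V$PROCESS), a minimal sketch of listing the background processes directly; note that the PNAME column is available in recent releases:
SELECT pname, spid, program
FROM   v$process
WHERE  pname IS NOT NULL
ORDER  BY pname;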
To find out background processes from the database:
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';
To find out background processes from the OS:
$ ps -ef|grep ora_|grep SID
From <http://satya-dba.blogspot.com/2009/08/background-processes-in-oracle.html>
11. How to find background processes from OS: $ ps -ef|grep ora_|grep SID
12. How do you troubleshoot connectivity issues?
Oracle - Diagnosing Connection Problems
If you are having problems connecting to your Oracle database, then follow these steps to diagnose them:
• when you fail to connect, a file sqlnet.log is often created (see below). This can contain useful information about how the Oracle Client tried to connect, and the error it received.
• open a Windows command window and enter tnsping ORCL where ORCL is the name of the Oracle Service you are trying to connect to. If you are unsure of the Oracle Service name, from the AQT signon screen click on your Oracle database then click on Configure - the Oracle Service name is given in the field TNS Service Name.
tnsping will try to connect to the Oracle database, and will provide useful information about how it is doing this and the error it has received.
tnsnames.ora
The information about the Oracle service names, and how to connect to them, is given in the Oracle file tnsnames.ora. In many cases, connection problems have happened because the wrong tnsnames.ora file is being used.
Oracle looks at the following locations for tnsnames.ora:
• the directory referred to in environment variable TNS_ADMIN
• the directory ORACLE_HOME\network\admin. ORACLE_HOME is given in the ORACLE_HOME environment variable, or the Windows registry.
To complicate matters:
• a user may have multiple ORACLE_HOMEs
• Oracle products may have their own ORACLE_HOME (and thus tnsnames.ora). So SQL*PLUS may be using one tnsnames.ora file but (unknown to you), AQT is using another.
To clear up this uncertainty, it is recommended that the TNS_ADMIN environment variable is set to refer to the directory where tnsnames.ora is located. All Oracle products and AQT will then use this tnsnames.ora file.
To view environment variables, open a Windows command window and enter SET. To permanently set an environment variable, go to the Windows Control Panel > System. Click on the Advanced tab and then the Environment Variables button (this is for Windows XP - other Windows versions may have these in a different location).
sqlnet.log
If you fail to connect, the Oracle client will generally write diagnostic information to sqlnet.log. Note that this does not include information on which tnsnames.ora file is being used, which is often the cause of many connection problems.
In earlier versions of Windows, sqlnet.log was written in the same directory as the AQT executable (e.g. C:\Program Files\Advanced Query Tool v9). However, for more recent versions of Windows (Windows Vista, Windows 7 and Windows Server), access to the Program Files directories is restricted. As a result the file can often be created in a Virtual Store directory. You may wish to look for sqlnet.log in either:
• C:\Users\<username>\AppData\Local\VirtualStore\Program Files\Advanced Query Tool v9
• C:\Users\<username>\AppData\Local\VirtualStore\Windows\System32
Running AQT on a 64-bit version of Windows
If you are running AQT on a 64-bit version of Windows, you may fail to connect with message:
TNS could not resolve the connect identifier
This can happen due to a bug in the Oracle client in the 64-bit environment. This is described below.
By default, AQT will be installed into C:\Program Files (x86)\Advanced Query Tool v9. The Program Files (x86) directory structure is used for 32-bit applications. However there is a bug in the Oracle client - when you run a program which has a bracket in the path, the Oracle client will fail to parse tnsnames.ora correctly, resulting in the above message.
The resolution to this problem is to install AQT into a directory that doesn't have a bracket in the name.
Note that this problem has been fixed in recent versions of the Oracle Client.
From <http://www.querytool.com/help/1205.htm>
13. Why are bind variables important? Can you force literals to be converted into bind variables? YES
These simple examples clearly show how replacing literals with bind variables can save both memory and CPU, making OLTP applications faster and more scalable. If you are using third-party applications that don't use bind variables you may want to consider setting the CURSOR_SHARING parameter, but this should not be considered a replacement for bind variables. The CURSOR_SHARING parameter is less efficient and can potentially reduce performance compared to proper use of bind variables.
From <https://oracle-base.com/articles/misc/literals-substitution-variables-and-bind-variables>
Oracle Bind Variable Tips
Oracle Tips by Michael R. Ault
The perils of Non-Use of Bind Variables in Oracle
The biggest problem in many applications is the non-use of bind variables. Oracle bind variables are a super important way to make Oracle SQL reentrant.
Why is the use of bind variables such an issue?
Oracle uses a signature generation algorithm to assign a hash value to each SQL statement based on the characters in the SQL statement. Any change in a statement (generally speaking) will result in a new hash and thus Oracle assumes it is a new statement. Each new statement must be verified, parsed and have an execution plan generated and stored, all high overhead procedures.
The high overhead procedures might be avoided by using bind variables. See these notes on Oracle cursor_sharing for details.
Ad-hoc query generators (Crystal Reports, Discoverer, Business Objects) do not use bind variables, a major reason for Oracle developing the cursor_sharing parameter to force SQL to use bind variables (when cursor_sharing=force).
Bind variables and shared pool usage
Use of bind variables can have a huge impact on the stress in the shared pool and it is important to know about locating similar SQL in Oracle. This script shows how to check your shared pool for SQL that is using bind variables. Below is an example output of a database that is utilizing bind variables and the SQL is fully reentrant:
Time: 03:15 PM Bind Variable Utilization PERFSTAT dbaville database
When SQL is placed within PL/SQL, the embedded SQL never changes and a single library cache entry will be maintained and searched, greatly improving the library cache hit ratio and reducing parsing overhead.
Here are some particularly noteworthy advantages of placing SQL within Oracle stored procedures and packages:
• High productivity: PL/SQL is a language common to all Oracle environments. Developer productivity is increased when applications are designed to use PL/SQL procedures and packages because it avoids the need to rewrite code. Also, the migration complexity to different programming environments and front-end tools will be greatly reduced because Oracle process logic code is maintained inside the database with the data, where it belongs. The application code becomes a simple “shell” consisting of calls to stored procedures and functions.
• Improved Security: Making use of the “grant execute” construct, it is possible to restrict access to Oracle, enabling the user to run only the commands that are inside the procedures. For example, it allows an end user to access one procedure that has a command delete in one particular table instead of granting the delete privilege directly to the end user. The security of the database is further improved since you can define which variables, procedures and cursors will be public and which will be private, thereby completely limiting access to those objects inside the PL/SQL package. With the “grant” security model, back doors like SQL*Plus can lead to problems; with “grant execute” you force the end-user to play by your rules.
• Application portability: Every application written in PL/SQL can be transferred to any other environment that has the Oracle Database installed, regardless of the platform. Systems whose application code contains no embedded PL/SQL or SQL become "database agnostic" and can be moved to other platforms without changing a single line of code. The application code becomes a simple "shell" consisting of calls to stored procedures and functions.
• Code Encapsulation: Placing all related stored procedures and functions into packages allows for the encapsulation of storage procedures, variables and datatypes in one single program unit in the database, making packages perfect for code organization in your applications.
• Global variables and cursors: Packages can have global variables and cursors that are available to all the procedures and functions inside the package.
From <http://www.dba-oracle.com/t_bind_variables.htm>
Writing Efficient PL/SQL
Oracle Tips by Burleson Consulting
The following Tip is from the outstanding book "Oracle PL/SQL Tuning: Expert Secrets for High Performance Programming" by Dr. Tim Hall, Oracle ACE of the year, 2006:
In this chapter we will cover a large range of techniques and concepts for improving the efficiency, memory consumption and speed of PL/SQL code. Where possible these techniques are accompanied by small working examples that will help you to understand the concepts and how they can be applied to your application code to boost performance. The first area we will focus on is the use of bind variables.
Using Bind Variables
For every statement issued against the server, Oracle searches the shared pool to see if the statement has already been parsed. If an exact text match of the statement is already present in the shared pool a soft parse is performed as the execution plan for the statement has already been created and can be reused. If the statement is not found in the shared pool a hard parse must be performed to determine the optimal execution path.
The important thing to remember from the previous paragraph is the term “exact text match”, as different numbers of spaces, literal values and case will result in a failure to find a text match, such that the following statements are considered different.
SELECT 1 FROM dual WHERE dummy = 'X';
SELECT 1 FROM dual WHERE dummy = 'Y';
SELECT 1 FROM DUAL WHERE dummy = 'X';
SELECT 1 FROM dual WHERE dummy =  'X';
The first two statements only differ by the value of the search criteria, specified using a literal. In these situations exact text matches can be achieved by replacing the literal values with bind variables that have the correct values bound to them. Using the previous example the statement passed to the server might look like this.
SELECT 1 FROM dual WHERE dummy = :B1;
For every execution the bind variable may have a different value, but the text sent to the server is the same, allowing for an exact text match, which results in a soft parse.
There are two main problems associated with applications that do not use bind variables:
• Parsing SQL statements is a CPU intensive process, so reparsing similar statements constantly represents a waste of CPU cycles.
• Parsed statements are stored in the shared pool until they are aged out. By not using bind variables the shared pool can rapidly become filled with similar statements, which waste memory and make the instance less efficient.
The bind_variable_usage.sql script illustrates the problems associated with not using bind variables by using dynamic SQL to simulate an application sending insert statements to the server.
bind_variable_usage.sql
CREATE TABLE bind_variables (
code VARCHAR2(10)
);
BEGIN
-- Perform insert without bind variables.
FOR i IN 1 .. 10 LOOP
BEGIN
EXECUTE IMMEDIATE
'INSERT INTO bind_variables (code) VALUES (''' || i || ''')';
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
END LOOP;
-- Perform insert with bind variables.
FOR i IN 1 .. 10 LOOP
BEGIN
EXECUTE IMMEDIATE
'INSERT INTO bind_variables (code) VALUES (:B1)' USING TO_CHAR(i);
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
END LOOP;
COMMIT;
END;
/
-- Display the associated SQL text.
COLUMN sql_text FORMAT A60
COLUMN executions FORMAT 9999
SELECT sql_text,
executions
FROM v$sql
WHERE INSTR(sql_text, 'INSERT INTO bind_variables') > 0
AND INSTR(sql_text, 'EXECUTE') = 0
ORDER BY sql_text;
DROP TABLE bind_variables;
The script starts by creating a test table and executing a simple insert statement 10 times, where the insert statement concatenates a value into the string rather than using a bind variable. Next it repeats this process but this time uses a bind variable rather than concatenating the value into the string. Finally it displays the SQL text parsed by the server and stored in the shared pool, which requires query access on the v$sql view. The results from the script are displayed below
SQL> @bind_variable_usage.sql
Table created.
PL/SQL procedure successfully completed.
SQL_TEXT EXECUTIONS
--------------------------------------------------------- ----------
INSERT INTO bind_variables (code) VALUES ('1') 1
INSERT INTO bind_variables (code) VALUES ('10') 1
INSERT INTO bind_variables (code) VALUES ('2') 1
INSERT INTO bind_variables (code) VALUES ('3') 1
INSERT INTO bind_variables (code) VALUES ('4') 1
INSERT INTO bind_variables (code) VALUES ('5') 1
INSERT INTO bind_variables (code) VALUES ('6') 1
INSERT INTO bind_variables (code) VALUES ('7') 1
INSERT INTO bind_variables (code) VALUES ('8') 1
INSERT INTO bind_variables (code) VALUES ('9') 1
INSERT INTO bind_variables (code) VALUES (:B1) 10
11 rows selected.
Table dropped.
From this we can see that when bind variables were not used the server parsed and executed each query as a unique statement, whereas the bind variable statement was parsed once and executed 10 times. This clearly demonstrates how applications that do not use bind variables can result in wasted memory in the shared pool, along with increased CPU usage.
The cursor_sharing parameter
In some situations you are not in control of the application development process and may be forced to accept applications that do not use bind variables running against the database. In these situations you can still take advantage of bind variables by using the cursor_sharing parameter at instance or session level.
ALTER SYSTEM SET CURSOR_SHARING=FORCE;
ALTER SESSION SET CURSOR_SHARING=FORCE;
The parameter can be set to one of three values:
• EXACT – The default setting where only statements with an exact text match share the same cursor.
• SIMILAR – Statements that match except for some literal values share the same cursor, unless the literal values affect the meaning of the statement or the level of optimization.
• FORCE - Statements that match except for some literal values share the same cursor, unless the literal values affect the meaning of the statement.
If we flush the shared pool and repeat the previous test with cursor sharing set to force we see a different result.
SQL> conn sys/password as sysdba
Connected.
SQL> alter system set cursor_sharing=force;
System altered.
SQL> alter system flush shared_pool;
System altered.
SQL> conn test/test
Connected.
SQL> @bind_variable_usage.sql
Table created.
PL/SQL procedure successfully completed.
SQL_TEXT EXECUTIONS
------------------------------------------------------------ ----------
INSERT INTO bind_variables (code) VALUES (:"SYS_B_0") 10
INSERT INTO bind_variables (code) VALUES (:B1) 10
2 rows selected.
Table dropped.
Here we can see that the ten insert statements using literals have been converted to a single insert statement using a bind variable called "SYS_B_0", which has executed ten times. The statement that already used bind variables was unaltered and also executed ten times.
The cursor_sharing feature should be considered a last resort, as the process of rewriting the queries requires extra resources. It's far better to do the job properly in the first place rather than rely on this feature.
In the next section we will see how we can gain the advantages of using bind variables within dynamic SQL.
From <http://www.dba-oracle.com/plsql/t_plsql_efficient.htm>
14. What is adaptive cursor sharing?
Adaptive cursor sharing (ACS) is another feature we've blogged about before, which allows the optimizer to generate a set of plans that are optimal for different sets of bind values. A common question is how the two interact, and whether users should consider changing the value of cursor_sharing when upgrading to 11g to take advantage of ACS. The simplest way to think about the interaction between the two features for a given query is to first consider whether literal replacement will take place for a query. Consider a query containing a literal:
select * from employees where job = 'Clerk'
From <https://blogs.oracle.com/optimizer/entry/explain_adaptive_cursor_sharing_behavior_with_cursor_sharing_similar_and_force>
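One way to watch adaptive cursor sharing at work (a sketch of my own, not from the quoted blog): run the same statement with different, skewed bind values and then check the bind-sensitivity flags in V$SQL. The employees/job predicate mirrors the blog example, rewritten with a bind variable.
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, plan_hash_value, executions
FROM   v$sql
WHERE  sql_text LIKE 'select * from employees where job = :job%';
-- a second child cursor with IS_BIND_AWARE = 'Y' indicates ACS generated an
-- additional plan for a different bind value.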
15. In Data Pump, if you restart a job, how will it know where to resume from?
• edited: I hate typing in here with an iPod. Too difficult to see the complete post.
I can say you are missing the actual point here.
My comments:
If the impdp job failed and terminated - let's suppose the process had already imported 100 rows when it was terminated - your question is whether restarting the job will resume the import after those 100 rows, i.e. from row 101. Of course this is not possible; you have to use the TABLE_EXISTS_ACTION option with Replace/Append/Truncate.
Pause and CONTINUE_CLIENT are a different matter. For example, if you proactively find a problem either in the alert log (e.g. a temp file issue) or in the import log file, you can press Ctrl+C, take corrective action, and then use CONTINUE_CLIENT so that, using the master table, the import carries on from that point.
Did you read what I mentioned here? Maybe it is an understanding problem with my English.
I said that if the job is paused manually, then when you resume it (by giving CONTINUE_CLIENT) it continues from that point in time.
If the job has completely failed, I said it will start from scratch.
Maybe what I should have realized is that you think you can pause a job by typing ctl-c. This does not pause the job. All it does is pause the client. The Data Pump code that is doing the work is still happily plugging along. It is still exporting if you are running expdp, and still importing if you ran impdp. If you want to verify this, export a single table that has data and a couple of indexes. Then run an import job, remap the schema to a schema that has nothing in it, and type ctl-c after you see the table created. Make sure that you have indexes on the table. Let the job sit like this forever after typing ctl-c. In another window, run sqlplus and query the table. You will see rows in it. This is because the Data Pump processes are still running. Don't touch the other window and soon enough you will see that the indexes are created. If you want to do this with export, run a job and specify a log file. Type ctl-c after the estimate phase is complete. You will see nothing happening on the screen. In another window, tail -f the log file. You will see the log file is being written to. You will also see the dump file getting bigger.
> Did you read what I mentioned here? Maybe it is an understanding problem with my English.
> I said that if the job is paused manually, then when you resume it (by giving CONTINUE_CLIENT) it continues from that point in time.
> If the job has completely failed, I said it will start from scratch.
This is not true. Again, you can't pause a job. If you are running export and someone shuts down your database or computer, then all of the Data Pump processes are gone and your dump file is half written. If you attach to that old job and issue CONTINUE_CLIENT, the job will continue where it left off. If you were running import when this happened, and it was importing your payroll table data and had imported everything but one row, then when the system and database are back up the payroll table will be empty. If you attach to the job and issue CONTINUE_CLIENT, all of the data will be loaded at that time.
> It's a background job. Once you schedule it either by crontab or nohup, AFAIK you can't pause the impdp job. You would have no control over it.
ONCE AGAIN... THIS IS WRONG INFORMATION!!!!
Again - you can never pause a job. You can either stop it by
ctl-c
export> stop
or kill a job
ctl-c
export> kill
If you started the job using some script then:
expdp user/password attach=your_job_name_here
export> stop or kill
> I know how to continue the job when I ran it in the foreground. What happens when I run it via crontab or nohup?
Get the job name, either by knowing what the script will do or by querying user_datapump_jobs or dba_datapump_jobs, and then:
expdp user/password attach=your_jobname_here
> Can you please justify how it is wrong information? I know that the job can be paused and we have full control when we run it from our session (foreground).
Your job can't be paused, and your job can be restarted even if you didn't use the client to start the job. That is why it is wrong. You have full control over a Data Pump job no matter how or where it was started.
> I'm saying that when the job is scheduled you do not have control. If you have any other way, please do mention it. Please note this is when it runs in the background.
Again, if the Data Pump job is running, you have full control over it. You can have 20 different sessions attached to the job and all 20 DBAs can control it. You could change the parallelism to 20 while another DBA connected to the job adds files, while a third DBA attached to the job bumps the parallel value to 50.
Your understanding of what ctl-c does is what is confusing you and what makes your statements wrong. Like I said above, it does not pause the job. It just disconnects the client from the server processes. The server processes keep running and exporting/importing just as they would if a client were attached. Typing continue will reattach it. So that is why what you said is wrong.
If you want more tests to run, run your favorite expdp command and type ctl-c after the estimate is complete. Then at the Export> prompt, type exit. Your job will continue. If you specified a log file, it will be updated and you can tail -f it.
Hope this clears it up for you.
Dean
Edited by: Dean Gagne on Jan 27, 2012 5:41 PM
From <https://community.oracle.com/thread/2340182>
EXAMPLES of DATAPUMP => expdp restart doubt
From <https://community.oracle.com/thread/2340182>
• Please guide me on how to attach to a job when it is running in the background.
> I know well how to pause (not cancel) and re-attach a job. If you run it in the background, as I already mentioned, either via shell script or nohup, then you have no control over it while it runs.
This is documented. Let's say your initial command was:
expdp system/manager job_name=full_1_27_2012 directory=dpump_dir dumpfile=full_1_27_2012.dmp full=y
Then you can simply do this:
expdp system/manager attach=full_1_27_2012 =>resume from where JOB=full_1_27_2012 failed (e.g. after server got rebooted, etc)
This will bring you to the
EXPORT>
prompt (type HELP there to see the interactive commands, e.g. START_JOB and STOP_JOB). If the job is still running, you can then say
EXPORT> stop
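A minimal sketch of that attach workflow (the job name SYS_EXPORT_FULL_01 below is just the typical default name, used here for illustration):
-- find the job name and state of any Data Pump jobs
SELECT owner_name, job_name, operation, state, attached_sessions
FROM   dba_datapump_jobs;
-- attach to it from any session and control it interactively
-- $ expdp system/password attach=SYS_EXPORT_FULL_01
-- Export> status
-- Export> stop_job=immediate     (or start_job / continue_client / kill_job)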
IMPDP/EXPDP:
* As SYS (connected as sysdba, not as the SYSTEM user):
CREATE DIRECTORY BACKUP_DIR AS '/u01/Test';
GRANT READ, WRITE ON DIRECTORY BACKUP_DIR TO scott;   -- likewise for hr
* Then, from the OS prompt:
expdp scott/pw directory=BACKUP_DIR dumpfile=SCOTT_EXP.dmp logfile=SCOTT_EXP.log
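And the matching import side, as a sketch; TABLE_EXISTS_ACTION is what the thread above points to when a failed import has to be re-run, since Data Pump cannot resume mid-table:
impdp scott/pw directory=BACKUP_DIR dumpfile=SCOTT_EXP.dmp logfile=SCOTT_IMP.log table_exists_action=replace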
Best,
Ken Chando
HP Enterprise Services
2610 Wycliff Rd Suite 220
Raleigh, NC 27607
Phone: (919) 424-5394
Cell phone: (919) 349-5439
Email : Kenneth.Chando@hp.com
Thank you for your feedback | Recognition@hp
=======================================================================================================================================================
================HP LAB DR==================CLUSTER============
oracle@D2LSENPSH242[ORCLDR]# ll /u01/app/oracle/scripts
total 124
-rw-r--r-- 1 oracle oinstall 458 Oct 28 2013 sh_invalid_objects.sql
-rw-r--r-- 1 oracle oinstall 4996 Apr 22 2014 sh_tsdf.sql
-rw-r--r-- 1 oracle oinstall 452 Jul 29 2014 sh_fra.sql
-rw-r--r-- 1 oracle oinstall 175 Jul 31 2014 rman_delete_logs.txt
-rw-r--r-- 1 oracle oinstall 53 Jul 31 2014 sh_asmdisks.sql
-rw-r--r-- 1 oracle oinstall 53 Jul 31 2014 sh_asm_usage.sql
-rw-r--r-- 1 oracle oinstall 446 Jul 31 2014 sh_asm_files.sql
-rw-r--r-- 1 oracle oinstall 537 Oct 15 2014 sh_users.sql
-rw-r--r-- 1 oracle oinstall 137 Oct 15 2014 users_ORCL.txt
-rw-r--r-- 1 oracle oinstall 293 Oct 15 2014 sh_asmdisk_size.sql
-rw-r--r-- 1 oracle oinstall 465 Jan 27 2015 sh_restpnts.sql
-rw-r--r-- 1 oracle oinstall 538 Jan 27 2015 sh_reghist.sql
-rwxrwxrwx 1 oracle oinstall 1012 Feb 10 2015 delete_applied_logs_ORCLDR.sh
-rwxrwxrwx 1 oracle oinstall 17909 Feb 10 2015 rm_applied_logs.sh
-rwxrwxrwx 1 oracle oinstall 18000 Feb 10 2015 delete_applied_logs.sh
-rw-r--r-- 1 oracle oinstall 1950 Feb 10 2015 delete_applied_logs.log
-rw-r--r-- 1 oracle oinstall 630 Feb 13 2015 alogs2.sql
-rw-r--r-- 1 oracle oinstall 2726 Apr 23 16:24 tsdf_ORCL.txt
-rw-r--r-- 1 oracle oinstall 681 Apr 27 13:59 alogs.sql
-rw-r--r-- 1 oracle oinstall 713 May 4 15:57 alogs165.sql
-rw-r--r-- 1 oracle oinstall 713 May 4 15:58 alogs166.sql
-rw-r--r-- 1 oracle oinstall 395 Aug 6 12:53 asm_files.txt
oracle@D2LSENPSH242[ORCLDR]# cat sh_reghist.sql
cat: sh_reghist.sql: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# cd /u01/app/oracle/scripts
oracle@D2LSENPSH242[ORCLDR]# cat sh_reghist.sql
REM ************************************************************************************************
REM sh_reghist.sql
REM list contents of registry$history
REM
REM ************************************************************************************************
SET echo off heading on
set pages 9999 lines 140
column action_time format a30
column action format a15
column namespace format a12
column version format a12
column comments format a30
column bundle_series format a14
select * from registry$history;
spool off
SET echo on
oracle@D2LSENPSH242[ORCLDR]# rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Fri Sep 18 15:57:28 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)
RMAN> crosscheck archivelog all;
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of crosscheck command at 09/18/2015 15:57:48
RMAN-12010: automatic channel allocation initialization failed
RMAN-06403: could not obtain a fully authorized session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
RMAN> show all;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 09/18/2015 15:58:12
RMAN-06403: could not obtain a fully authorized session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
RMAN> exit
Recovery Manager complete.
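The ORA-01034/ORA-27101 errors above simply mean the instance was not started when RMAN connected ("connected to target database (not started)"). A minimal sketch of the fix, assuming this DR database is normally kept mounted:
sqlplus / as sysdba
SQL> startup mount
SQL> exit
rman target /
RMAN> crosscheck archivelog all;
RMAN> show all;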
oracle@D2LSENPSH242[ORCLDR]# ls /u01/
app
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app
11.2.0.3 grid oracle oraInventory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle
acfs acfsmounts admin backup cfgtoollogs checkpoints Clusterware D2LSENPSH242 diag media patches product scripts staging
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/backup
incr incr2
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin
+ASM LABDBDR orcl ORCLDR
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/ORCLDR
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/orcl
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/LABDBDR
adump
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/admin/+ASM
pfile
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/11.2.0.3
ls: /u01/app/oracle/11.2.0.3: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/11.2.0.3
grid
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory
backup ContentsXML install.platform logs oraInstaller.properties oui
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory/backup
2013-03-06_05-13-17PM 2013-03-06_12-03-21PM 2013-08-07_08-50-55PM 2013-08-14_06-34-59PM 2013-08-14_09-03-01PM 2013-08-14_09-03-30PM
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oraInventory/logs
installActions2013-03-06_04-58-02PM.log installActions2013-08-20_08-12-40PM.log OPatch2015-04-22_04-16-33-PM.log oraInstall2013-03-06_12-03-21PM.out
installActions2013-03-06_05-13-17PM.log installActions2013-08-20_08-15-30PM.log OPatch2015-07-23_05-08-13-PM.log oraInstall2013-08-07_08-50-55PM.err
installActions2013-03-06_11-33-57AM.log installActions2013-08-20_08-34-18PM.log OPatch2015-07-23_05-10-14-PM.log oraInstall2013-08-07_08-50-55PM.out
installActions2013-03-06_11-36-25AM.log installActions2013-08-20_08-41-09PM.log OPatch2015-07-23_05-13-35-PM.log oraInstall2013-08-14_06-34-59PM.err
installActions2013-03-06_11-42-00AM.log installActions2013-08-21_06-37-08PM.log OPatch2015-07-23_05-16-03-PM.log oraInstall2013-08-14_06-34-59PM.out
installActions2013-03-06_11-46-37AM.log installActions2013-08-21_06-37-22PM.log OPatch2015-07-23_05-26-44-PM.log oraInstall2013-08-14_09-03-01PM.err
installActions2013-03-06_11-48-33AM.log installActions2013-09-04_02-13-30PM.log OPatch2015-07-31_04-43-21-PM.log oraInstall2013-08-14_09-03-01PM.out
installActions2013-03-06_11-54-44AM.log OPatch2013-08-16_08-46-21-PM.log OPatch2015-08-06_03-42-12-PM.log oraInstall2013-08-14_09-03-30PM.err
installActions2013-08-07_08-50-55PM.log OPatch2013-08-16_08-59-58-PM.log OPatch2015-08-06_05-32-39-PM.log oraInstall2013-08-14_09-03-30PM.out
installActions2013-08-14_06-34-59PM.log OPatch2013-08-16_09-05-30-PM.log oraInstall2013-03-06_05-13-17PM.err oraInstall2013-08-20_08-41-09PM.err
installActions2013-08-20_06-52-37PM.log OPatch2013-10-29_08-26-14-PM.log oraInstall2013-03-06_05-13-17PM.out oraInstall2013-08-20_08-41-09PM.out
installActions2013-08-20_06-53-27PM.log OPatch2014-01-25_06-00-24-PM.log oraInstall2013-03-06_11-48-33AM.err oraInstall2013-09-04_02-13-30PM.err
installActions2013-08-20_07-19-41PM.log OPatch2014-07-30_09-39-40-PM.log oraInstall2013-03-06_11-48-33AM.out oraInstall2013-09-04_02-13-30PM.out
installActions2013-08-20_08-11-18PM.log OPatch2014-10-20_05-19-54-PM.log oraInstall2013-03-06_11-54-44AM.err UpdateNodeList2013-03-06_12-03-21PM.log
installActions2013-08-20_08-11-49PM.log OPatch2014-10-20_05-22-05-PM.log oraInstall2013-03-06_11-54-44AM.out UpdateNodeList2013-08-14_09-03-01PM.log
installActions2013-08-20_08-12-11PM.log OPatch2015-01-26_09-24-58-PM.log oraInstall2013-03-06_12-03-21PM.err UpdateNodeList2013-08-14_09-03-30PM.log
oracle@D2LSENPSH242[ORCLDR]# ls /
bin boot dev edsinfo.txt etc home lib lib64 lost+found media misc mnt opt proc root RPM sbin selinux srv sys tftpboot tmp u01 usr var
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app
11.2.0.3 grid oracle oraInventory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle
acfs acfsmounts admin backup cfgtoollogs checkpoints Clusterware D2LSENPSH242 diag media patches product scripts staging
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/cfgtoologs
ls: /u01/app/oracle/cfgtoologs: No such file or directory
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/cfgtoollogs
asmca dbca emca netca postinstall
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/checkpoints
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/diag
asm clients crs diagtool lsnrctl netcman ofm rdbms tnslsnr
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/media
database grid OMS
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/media/database
doc install response rpm runInstaller sshsetup stage welcome.html
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/patches
oracle@D2LSENPSH242[ORCLDR]# ls /u01/app/oracle/staging
11.2.0.3
oracle@D2LSENPSH242[ORCLDR]#
db_flashback_retention_target is set in minutes: 60 mins x 24 hrs = 1440 mins = 1 day, so 3 days = 3 x 1440 = 4320 mins: alter system set db_flashback_retention_target=4320;  # 3 days
=======================================================================================================================================================
==TURNING ARCHIVELOG MODE and FLASHBACK ON================
alter system set db_recovery_file_dest_size=10G scope=both;
sql>show parameter db_recovery_file_dest
// bounce the database to the mount state: shutdown immediate; then startup mount;
alter database archivelog;
alter database flashback on; #turn flashback off=>alter database flashback off;
alter database open;
alter system set db_flashback_retention_target=2880;  # 2 days (value is in minutes)
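A quick sanity check after the steps above (a sketch):
select log_mode, flashback_on from v$database;
show parameter db_recovery_file_dest
show parameter db_flashback_retention_target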
================================CPU status script===============================================================
select se.username,ss.sid,ROUND(value/100) "CPU Usage"
FROM v$session se,v$sesstat ss,v$statname st
WHERE ss.statistic#=st.statistic#
AND name LIKE '%CPU used by this session%'
AND se.sid=ss.SID
AND se.username IS NOT NULL
ORDER BY value DESC;
==================================KEN TESTED below CPU usage script and it worked ====================================================
select
ss.username,
se.SID,
VALUE/100 cpu_usage_seconds
from
v$session ss,
v$sesstat se,
v$statname sn
where
se.STATISTIC#=sn.STATISTIC#
and
NAME like '%CPU used by this session%'
and
se.SID=ss.SID
and
ss.status='ACTIVE'
and
ss.username is not null
order by VALUE desc;
==================================Alex's CPU script below=============================================================================================
SQL> set pages 9999 lines 120
SQL> select * from (
       select * from (
         select sid, serial#, process pid, username un, program, sql.sql_id, sql.child_number cn,
                last_active_time lat, optimizer_mode om, plan_hash_value plan_hash, buffer_gets gets,
                rows_processed num_rows, executions execs, cpu_time, elapsed_time,
                round(elapsed_time/(case executions when 0 then 1 else executions end)/1000) mspe,
                round(buffer_gets/(case executions when 0 then 1 else executions end)) as gpe,
                round(buffer_gets/(case rows_processed when 0 then 1 else rows_processed end)) as gpr,
                sql_text
           from v$sql sql, v$session sess
          where sql.sql_id = sess.sql_id
            and sql.child_number = sess.sql_child_number
       )
       order by mspe desc
     )
     where rownum < 21;
============KEN's notes:=====on how to improve on the ORACLE CPU issues =============================================================================
1. Logical I/O (consistent gets) has a high CPU overhead, and buffer touches can be reduced via SQL tuning (adding more selective indexes, materialized views).
2. Library cache contention (a high parse rate) drives up CPU.
**NOTE: Having 100% CPU is not always a problem; it is normal for virtual memory servers to drive CPU consumption to 100%. Also note that in Oracle 10g and beyond we have the [_optimizer_cost_model] parameter, which is set to CPU, versus the default of IO in 9i and earlier. This parameter is for Oracle databases that are CPU-bound and it tells Oracle to create the CBO decision tree weights with estimated CPU consumption, not estimated I/O costs [www.dba-oracle.com/t_high_cpu.htm]
**NOTE: When analyzing vmstat output there are several metrics to which you should pay attention, for example the CPU run queue column. The RUN QUEUE should NEVER exceed the number of CPUs on the server.
If you do notice the run queue exceeding the number of CPUs, it is a good indication that your server has a bottleneck. Inside Oracle, you can display CPU usage for any user session with the KEN TESTED CPU usage script shown above.
=======================================DATAGUARD redo log status check================================================
SQL> select group#,type,member from v$logfile;
GROUP# TYPE    MEMBER
------ ------- --------------------------------------------------
     3 ONLINE  /u01/oradata/TAMSP1/redo03a.log
     2 ONLINE  /u01/oradata/TAMSP1/redo02a.log
     4 ONLINE  /u01/oradata/TAMSP1/redo04a.log
     4 ONLINE  /u01/FRA/TAMSP1/onlinelog/redo04b.log
     1 ONLINE  /u01/oradata/TAMSP1/redo01a.log
     1 ONLINE  /u01/FRA/TAMSP1/onlinelog/redo01b.log
     3 ONLINE  /u01/FRA/TAMSP1/onlinelog/redo03b.log
     2 ONLINE  /u01/FRA/TAMSP1/onlinelog/redo02b.log
     5 STANDBY /u01/oradata/TAMSP1/sredo05a.log
     5 STANDBY /u01/FRA/TAMSP1/onlinelog/sredo05b.log
     6 STANDBY /u01/oradata/TAMSP1/sredo06a.log
     6 STANDBY /u01/FRA/TAMSP1/onlinelog/sredo06b.log
     7 STANDBY /u01/oradata/TAMSP1/sredo07a.log
     7 STANDBY /u01/FRA/TAMSP1/onlinelog/sredo07b.log

14 rows selected.
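Two related checks on the standby side (my own sketch, not part of the output above):
-- standby redo logs and whether they are currently in use
select group#, thread#, sequence#, status from v$standby_log;
-- redo transport/apply processes (MRP0 shows APPLYING_LOG while recovery is running)
select process, status, sequence# from v$managed_standby;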
================dbstart script_ken=============================================================================
#!/bin/sh
#
# $Id: dbstart.sh /unix/5 2012/11/09 18:37:28 tmagana Exp $
# Copyright (c) 1991, 2012, Oracle and/or its affiliates. All rights reserved.
#
###################################
#
# usage: dbstart $ORACLE_HOME
#
# This script is used to start ORACLE from /etc/rc(.local).
# It should ONLY be executed as part of the system boot procedure.
#
# This script will start all databases listed in the oratab file
# whose third field is a "Y". If the third field is set to "Y" and
# there is no ORACLE_SID for an entry (the first field is a *),
# then this script will ignore that entry.
#
# This script requires that ASM ORACLE_SID's start with a +, and
# that non-ASM instance ORACLE_SID's do not start with a +.
#
# If ASM instances are to be started with this script, it cannot
# be used inside an rc*.d directory, and should be invoked from
# rc.local only. Otherwise, the CSS service may not be available
# yet, and this script will block init from completing the boot
# cycle.
#
# If you want dbstart to auto-start a single-instance database that uses
# an ASM server that is auto-started by CRS (this is the default behavior
# for an ASM cluster), you must change the database's ORATAB entry to use
# a third field of "W" and the ASM's ORATAB entry to use a third field of "N".
# These values specify that dbstart auto-starts the database only after
# the ASM instance is up and running.
#
# Note:
# Use ORACLE_TRACE=T for tracing this script.
#
# The progress log for each instance bringup plus Error and Warning message[s]
# are logged in file $ORACLE_HOME/startup.log. The error messages related to
# instance bringup are also logged to syslog (system log module).
# The Listener log is located at $ORACLE_HOME_LISTNER/listener.log
#
# On all UNIX platforms except SOLARIS
# ORATAB=/etc/oratab
#
# To configure, update ORATAB with Instances that need to be started up
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y|W>:
# An example entry:
# main:/usr/lib/oracle/emagent_10g:Y
#
# Overall algorithm:
# 1) Bring up all ASM instances with 'Y' entry in status field in oratab entry
# 2) Bring up all Database instances with 'Y' entry in status field in
# oratab entry
# 3) If there are Database instances with 'W' entry in status field
# then
# iterate over all ASM instances (irrespective of 'Y' or 'N') AND
# wait for all of them to be started
# fi
# 4) Bring up all Database instances with 'W' entry in status field in
# oratab entry
#
#####################################
LOGMSG="logger -puser.alert -s "
trap 'exit' 1 2 3
# for script tracing
case $ORACLE_TRACE in
T) set -x ;;
esac
# Set path if path not set (if called from /etc/rc)
SAVE_PATH=/bin:/usr/bin:/etc:${PATH} ; export PATH
SAVE_LLP=$LD_LIBRARY_PATH
# First argument is used to bring up Oracle Net Listener
ORACLE_HOME_LISTNER=$1
if [ ! $ORACLE_HOME_LISTNER ] ; then
echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
echo "Usage: $0 ORACLE_HOME"
else
LOG=$ORACLE_HOME_LISTNER/listener.log
# Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
# a different ORACLE_HOME for each entry in the oratab.
ORACLE_HOME=$ORACLE_HOME_LISTNER ; export ORACLE_HOME
# Start Oracle Net Listener
if [ -x $ORACLE_HOME_LISTNER/bin/tnslsnr ] ; then
echo "$0: Starting Oracle Net Listener" >> $LOG 2>&1
$ORACLE_HOME_LISTNER/bin/lsnrctl start >> $LOG 2>&1 &
VER10LIST=`$ORACLE_HOME_LISTNER/bin/lsnrctl version | grep "LSNRCTL for " | cut -d' ' -f5 | cut -d'.' -f1`
export VER10LIST
else
echo "Failed to auto-start Oracle Net Listener using $ORACLE_HOME_LISTNER/bin/tnslsnr"
fi
fi
# Set this in accordance with the platform
ORATAB=/etc/oratab
if [ ! $ORATAB ] ; then
echo "$ORATAB not found"
exit 1;
fi
# Checks Version Mismatch between Listener and Database Instance.
# A version 10 listener is required for an Oracle Database 10g database.
# Previous versions of the listener are not supported for use with an Oracle
# Database 10g database. However, it is possible to use a version 10 listener
# with previous versions of the Oracle database.
checkversionmismatch() {
if [ $VER10LIST ] ; then
VER10INST=`sqlplus -V | grep "Release " | cut -d' ' -f3 | cut -d'.' -f1`
if [ $VER10LIST -lt $VER10INST ] ; then
$LOGMSG "Listener version $VER10LIST NOT supported with Database version $VER10INST"
$LOGMSG "Restart Oracle Net Listener using an alternate ORACLE_HOME_LISTNER:"
$LOGMSG "lsnrctl start"
fi
fi
}
# Starts a Database Instance
startinst() {
# Called programs use same database ID
export ORACLE_SID
# Put $ORACLE_HOME/bin into PATH and export.
PATH=$ORACLE_HOME/bin:${SAVE_PATH} ; export PATH
# add for bug # 652997
LD_LIBRARY_PATH=${ORACLE_HOME}/lib:${SAVE_LLP} ; export LD_LIBRARY_PATH
PFILE=${ORACLE_HOME}/dbs/init${ORACLE_SID}.ora
SPFILE=${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora
SPFILE1=${ORACLE_HOME}/dbs/spfile.ora
echo ""
echo "$0: Starting up database \"$ORACLE_SID\""
date
echo ""
checkversionmismatch
# See if it is a V6 or V7 database
VERSION=undef
if [ -f $ORACLE_HOME/bin/sqldba ] ; then
SQLDBA=sqldba
VERSION=`$ORACLE_HOME/bin/sqldba command=exit | awk '
/SQL\*DBA: (Release|Version)/ {split($3, V, ".") ;
print V[1]}'`
case $VERSION in
"6") ;;
*) VERSION="internal" ;;
esac
else
if [ -f $ORACLE_HOME/bin/svrmgrl ] ; then
SQLDBA=svrmgrl
VERSION="internal"
else
SQLDBA="sqlplus /nolog"
fi
fi
STATUS=1
if [ -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.dbf ] ; then
STATUS="-1"
fi
if [ -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.ora ] ; then
STATUS="-1"
fi
pmon=`ps -ef | grep -w "ora_pmon_$ORACLE_SID" | grep -v grep`
if [ "$pmon" != "" ] ; then
STATUS="-1"
$LOGMSG "Warning: ${INST} \"${ORACLE_SID}\" already started."
fi
if [ $STATUS -eq -1 ] ; then
$LOGMSG "Warning: ${INST} \"${ORACLE_SID}\" possibly left running when system went down (system crash?)."
$LOGMSG "Action: Notify Database Administrator."
case $VERSION in
"6") sqldba "command=shutdown abort" ;;
"internal") $SQLDBA $args <<EOF
connect internal
shutdown abort
EOF
;;
*) $SQLDBA $args <<EOF
connect / as sysdba
shutdown abort
quit
EOF
;;
esac
if [ $? -eq 0 ] ; then
STATUS=1
else
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
fi
if [ $STATUS -eq 1 ] ; then
if [ -e $SPFILE -o -e $SPFILE1 -o -e $PFILE ] ; then
case $VERSION in
"6") sqldba command=startup ;;
"internal") $SQLDBA <<EOF
connect internal
startup
EOF
;;
*) $SQLDBA <<EOF
connect / as sysdba
startup
quit
EOF
;;
esac
if [ $? -eq 0 ] ; then
echo ""
echo "$0: ${INST} \"${ORACLE_SID}\" warm started."
else
$LOGMSG ""
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
else
$LOGMSG ""
$LOGMSG "No init file found for ${INST} \"${ORACLE_SID}\"."
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
fi
}
# Starts an ASM Instance
startasminst() {
# Called programs use same database ID
export ORACLE_SID
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# For ASM instances, we have a dependency on the CSS service.
# Wait here for it to become available before instance startup.
# Is the 10g install intact? Are all necessary binaries present?
if [ ! -x $ORACLE_HOME/bin/crsctl ]; then
$LOGMSG "$ORACLE_HOME/bin/crsctl not found when attempting to start"
$LOGMSG " ASM instance $ORACLE_SID."
else
COUNT=0
$ORACLE_HOME/bin/crsctl check css
RC=$?
while [ "$RC" != "0" ];
do
COUNT=`expr $COUNT + 1`
if [ $COUNT = 15 ] ; then
# 15 tries with 20 sec interval => 5 minutes timeout
$LOGMSG "Timed out waiting to start ASM instance $ORACLE_SID"
$LOGMSG " CSS service is NOT available."
exit $COUNT
fi
$LOGMSG "Waiting for Oracle CSS service to be available before starting "
$LOGMSG " ASM instance $ORACLE_SID. Wait $COUNT."
sleep 20
$ORACLE_HOME/bin/crsctl check css
RC=$?
done
fi
startinst
}
# Start of dbstartup script
#
# Loop for every entry in oratab file and try to start
# that ORACLE.
#
# ASM instances need to be started before 'Database instances'
# ASM instance is identified with '+' prefix in ORACLE_SID
# Following loop brings up ASM instance[s]
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'Y'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "Y" ] ; then
# If ASM instances
if [ `echo $ORACLE_SID | cut -b 1` = '+' ]; then
INST="ASM instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# file for logging script's output
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
echo "Processing $INST \"$ORACLE_SID\": log file $ORACLE_HOME/startup.log"
startasminst >> $LOG 2>&1
fi
fi
;;
esac
done
# exit if there was any trouble bringing up ASM instance[s]
if [ "$?" != "0" ] ; then
exit 2
fi
#
# Following loop brings up 'Database instances'
#
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'Y'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "Y" ] ; then
# If non-ASM instances
if [ `echo $ORACLE_SID | cut -b 1` != '+' ]; then
INST="Database instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# file for logging script's output
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
echo "Processing $INST \"$ORACLE_SID\": log file $ORACLE_HOME/startup.log"
startinst >> $LOG 2>&1
fi
fi
;;
esac
done
#
# Following loop brings up 'Database instances' that have wait state 'W'
#
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'W'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "W" ] ; then
W_ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
# DB instances with 'W' (wait state) have a dependency on ASM instances via CRS.
# Wait here for 'all' ASM instances to become available.
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
if [ `echo $ORACLE_SID | cut -b 1` = '+' ]; then
INST="ASM instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
if [ -x $ORACLE_HOME/bin/srvctl ] ; then
COUNT=0
NODE=`olsnodes -l`
RNODE=`srvctl status asm -n $NODE | grep "$ORACLE_SID is running"`
RC=$?
while [ "$RC" != "0" ]; # wait until this comes up!
do
COUNT=$((COUNT+1))
if [ $COUNT = 5 ] ; then
# 5 tries with 60 sec interval => 5 minutes timeout
$LOGMSG "Error: Timed out waiting on CRS to start ASM instance $ORACLE_SID"
exit $COUNT
fi
$LOGMSG "Waiting for Oracle CRS service to start ASM instance $ORACLE_SID"
$LOGMSG "Wait $COUNT."
sleep 60
RNODE=`srvctl status asm -n $NODE | grep "$ORACLE_SID is running"`
RC=$?
done
else
$LOGMSG "Error: \"${W_ORACLE_SID}\" has dependency on ASM instance \"${ORACLE_SID}\""
$LOGMSG "Error: Need $ORACLE_HOME/bin/srvctl to check this dependency"
fi
fi # asm instance
;;
esac
done # innner while
fi
;;
esac
done # outer while
# by now all the ASM instances have come up and we can proceed to bring up
# DB instance with 'W' wait status
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'W'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "W" ] ; then
INST="Database instance"
if [ `echo $ORACLE_SID | cut -b 1` = '+' ]; then
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started"
$LOGMSG "Error: incorrect usage: 'W' not allowed for ASM instances"
continue
fi
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# file for logging script's output
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
echo "Processing $INST \"$ORACLE_SID\": log file $ORACLE_HOME/startup.log"
startinst >> $LOG 2>&1
fi
;;
esac
done
==============================Revoking script to fix HARDENING/audit issues===========================================================
select OWNER, TABLE_NAME, PRIVILEGE from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
PROMPT
PROMPT Please revoke these privileges from PUBLIC by executing the following statements:
set head off feed off
select 'revoke '||PRIVILEGE||' on '||trim(OWNER)||'.'||TABLE_NAME||' from PUBLIC;'
from DBA_TAB_PRIVS
where GRANTEE='PUBLIC' and TABLE_NAME in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP','DBMS_RANDOM','DBMS_LDAP','DBMS_LDAP_UTIL','DBMS_BACKUP_RESTORE','DBMS_JAVA');
set head on feed on
PROMPT
PROMPT All system privileges except for CREATE SESSION must be restricted to DBAs,
PROMPT application object owner accounts/schemas (locked accounts), and default Oracle accounts.
PROMPT List of system privileges assigned to Roles
break on grantee skip 1;
col privilege format a35
select grantee, privilege , admin_option
from dba_sys_privs
where grantee in (select role from dba_roles)
and grantee not in ('SELECT_CATALOG_ROLE', 'DBA'
,'IMP_FULL_DATABASE'
,'EXP_FULL_DATABASE','RECOVERY_CATALOG_OWNER'
,'SCHEDULER_ADMIN', 'AQ_ADMINISTRATOR_ROLE')
and privilege not in ('CREATE SESSION')
and (admin_option = 'YES' or privilege like '%ANY%')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of Roles assigned to Users
break on granted_role skip 1 ;
select granted_role, grantee, admin_option
from dba_role_privs
where grantee not in ('SYS','SYSTEM', 'DBA',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS',
'OLAPSYS','SYSMAN','PERFSTAT')
order by granted_role, grantee
/
clear breaks;
PROMPT
PROMPT List of system privs assigned directly to users
PROMPT These should be reassigned using roles.
break on grantee skip 1;
select grantee, privilege
from dba_sys_privs
where grantee not in (select role from dba_roles)
and grantee not in ('SYS','SYSTEM',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS','ORDPLUGINS',
'XDB','WMSYS','DBSNMP','OLAPSYS','SYSMAN','PERFSTAT')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of object privs assigned directly to users
PROMPT Privileges should be controlled using roles.
col privilege format a10;
col grantee format a15;
col owner_object format a40;
break on grantee on privilege skip 1;
select grantee, privilege,
owner||'.'||table_name owner_object
from dba_tab_privs
where grantee not in (select role from dba_roles)
and grantee not in ('SYS','SYSTEM','PUBLIC',
'DMSYS','CTXSYS','OUTLN','ORDSYS','MDSYS',
'SDB','WMSYS','XDB','DBSNMP',
'OLAPSYS','SYSMAN','PERFSTAT')
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of users that can pass on system privs and the objects they control
PROMPT Users should not be able to pass system privs to others
break on grantee;
select grantee, privilege
from dba_sys_privs
where admin_option='YES'
and grantee not in ('DBA','SYSTEM','SYS', 'SCHEDULER_ADMIN'
,'XDB','AQ_ADMINISTRATOR_ROLE')
order by grantee, privilege
/
clear breaks;
PROMPT
PROMPT List of system privileges that should be reviewed and possibly revoked
break on grantee skip 1;
select grantee, privilege
from dba_sys_privs
where( privilege like 'ADMINISTER %'
or privilege like '%ANY%'
or (privilege like 'ALTER%' and privilege not like '%SESSION')
or
privilege like 'DROP %'
or
privilege like 'AUDIT%'
or privilege in ('BECOME USER', 'CREATE DATABASE LINK', 'CREATE PROFILE',
'CREATE ROLE', 'CREATE USER', 'CREATE ROLLBACK SEGMENT',
'EXPORT FULL DATABASE', 'IMPORT FULL DATABASE', 'MANAGE TABLESPACE')
)
and grantee not in ('DBA','SYSTEM','SYS','IMP_FULL_DATABASE'
,'EXP_FULL_DATABASE','DMSYS', 'SCHEDULER_ADMIN','ORDSYS', 'XDB'
,'MDSYS','RECOVERY_CATALOG_OWNER','WMSYS','CTXSYS','DMSYS','DBSNMP',
'PERFSTAT','ORDPLUGINS', 'AQ_ADMINISTRATOR_ROLE','OUTLN' )
order by grantee, privilege
/
clear breaks;
clear columns;
PROMPT
PROMPT List of object privs that should be reviewed and possibly revoked.
col owner_object format a40;
col grantee format a15;
col privilege format a10;
break on grantee skip 1;
select grantee, privilege,
owner||'.'||table_name owner_object
from dba_tab_privs
where owner in ('SYS','SYSTEM')
and table_name like 'DBA%'
and grantee not in ('SELECT_CATALOG_ROLE','SYSTEM','DBA'
,'MDSYS','ORDSYS','WMSYS','DMSYS','AQ_ADMINISTRATOR_ROLE','CTXSYS')
/
clear breaks;
clear columns;
PROMPT
PROMPT List of objects created using sys or system
PROMPT excluding those created on installation
break on object_type skip 1;
col object_name format a40;
select distinct object_type, object_name
from dba_objects
where owner in ('SYS','SYSTEM')
and trunc(created) > (select trunc(created) from v$database)
and object_type not like 'INDEX%'
order by object_type, object_name
/
clear breaks;
column owner format a10;
column segment_name format a25;
column segment_type format a25;
set feedback off heading off
select 'The Following is a list of all objects that are owned by users other than SYS and SYSTEM '||chr(13)||chr(10),
'but are stored in the SYSTEM tablespace....'
from dual
where 0 < ( select count(*)
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM','OUTLN')
and tablespace_name = 'SYSTEM' )
/
set heading on
break on owner skip 1;
select owner, segment_name, segment_type
from sys.dba_segments
where owner not in ('SYS', 'SYSTEM','OUTLN')
and tablespace_name = 'SYSTEM'
order by owner, segment_name
/
prompt
prompt
set feedback off heading off
select 'The Following Users have the SYSTEM tablespace as their Default or '||chr(13)||chr(10),
'Temporary Tablespace. Please change that for all non-system accounts'
from dual
where 0 <
( select count(*)
from sys.dba_users
where username not in ('SYS', 'SYSTEM','OUTLN')
and ( default_tablespace = 'SYSTEM' or temporary_tablespace = 'SYSTEM') )
/
set heading on
column username format a10;
column default_tablespace format a15 heading 'Default';
column temporary_tablespace format a15 heading 'Temporary';
column account_status format a16 heading 'Account Status';
select username, default_tablespace, temporary_tablespace, account_status
from sys.dba_users
where username not in ('SYS', 'SYSTEM','OUTLN')
and ( default_tablespace = 'SYSTEM' or temporary_tablespace = 'SYSTEM')
order by username
/
set feedback on
prompt
prompt
set feedback on
column username format a20;
PROMPT Opened Accounts
select username , account_status
from dba_users where account_status = 'OPEN'
/
PROMPT Accounts NOT open
select username , account_status
from dba_users where account_status != 'OPEN'
/
spool off;
oracle@d2asedvic004[BASSD]#
================================================Copying backup files between servers with scp, and HOW to Kill a CURRENT RUNNING RMAN backup job==================================================
scp oracle@10.236.19.246:/u01/app/FRA/backup/* . /u02/app/oracle/acfs/backup
scp oracle@10.232.19.246:/u01/app/FRA/backup/keep/test /u02/app/oracle/acfs/backup/d2aclprsh154/keep/ .
scp oracle@10.232.19.246:/u01/app/FRA/backup/keep/test /u02/app/oracle/acfs/backup/d2aclprsh154/keep/ .
scp oracle@10.232.11.38:/u01/app/FRA/backup/keep/test oracle@10.232.19.246/u02/app/oracle/acfs/backup/d2aclprsh154/keep
scp oracle@10.232.11.38:/u01/app/FRA/backup/keep/test oracle@10.232.19.246/u02/app/oracle/acfs/backup/d2aclprsh154/ .
scp oracle@10.232.11.38:/u01/app/FRA/backup/keep/* /u02/app/oracle/acfs/backup/d2aclprsh154/keep
=============================
oracle@d2aclprsh154[D2GSSP1]# scp oracle@10.232.11.38:/u01/app/FRA/backup/keep/* /u02/app/oracle/acfs/backup/d2aclprsh154/keep
oracle@10.232.11.38's password:
test 100% 0 0.0KB/s 00:00
oracle@d2aclprsh154[D2GSSP1]#
**Double-checking if copying from BASSD to 154/keep was SUCCESSFUL ***
======================================================================
oracle@d2aclprsh154[D2GSSP1]# pwd
/u02/app/oracle/acfs/backup/d2aclprsh154/keep
oracle@d2aclprsh154[D2GSSP1]# ll
total 0
-rw-r--r-- 1 oracle oinstall 0 Nov 8 18:55 test
oracle@d2aclprsh154[D2GSSP1]#
=============Kill RUNNING RMAN BACKUP =======================================
Find pid and spid and KILL it by:
1.
sql> set linesize 250 pagesize 2000
sql> define _editor=vi
select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p
where s.client_info LIKE 'rman%'
AND s.SID=sw.SID
AND s.PADDR=p.ADDR;
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL> ed
Wrote file afiedt.buf
select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p where s.client_info LIKE 'rman%'
AND s.SID=sw.SID
AND s.PADDR=p.ADDR;
SQL> /
SPID SID SERIAL# EVENT SEC_WAIT STATE CLIENT_INFO
------------------------ ---------- ---------- ---------------------------------------------------------------- ---------- ------------------- ----------------------------------------------------------------
1742 347 54874 RMAN backup & recovery I/O 0 WAITED SHORT TIME rman channel=ORA_DISK_1
2.
sql>alter system kill session 'SID,SERIAL#';
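If the killed session lingers in KILLED status, an alternative worth knowing (a sketch) is to disconnect it so its server process is terminated right away:
sql> alter system disconnect session 'SID,SERIAL#' immediate;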
&&&&&&&&&&&&&&&&&& HAD to kill the NORMAL RUNNING backup because it takes 2.8 hours to complete; it would run past 9:00pm, and the RFC backup needs to start at 9:00pm &&&&&&&&&&&&&&&&&&&&&&&
SQL> ed
Wrote file afiedt.buf
1 select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
2 FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p where s.client_info LIKE 'rman%'
3 AND s.SID=sw.SID
4* AND s.PADDR=p.ADDR
SQL> /
SPID SID SERIAL# EVENT SEC_WAIT STATE CLIENT_INFO
------------------------ ---------- ---------- ---------------------------------------------------------------- ---------- ------------------- ----------------------------------------------------------------
1742 347 54874 RMAN backup & recovery I/O 0 WAITED SHORT TIME rman channel=ORA_DISK_1
SQL> alter system kill session '347,54874';
System altered.
============================================================
SQL>
alter system kill session '347,54874' immediate;
=========================================================
SQL> ed
Wrote file afiedt.buf
1 select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
2 FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p where s.client_info LIKE 'rman%'
3 AND s.SID=sw.SID
4 AND s.PADDR=p.ADDR;select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
5 FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p where s.client_info LIKE 'rman%'
6 AND s.SID=sw.SID
7* AND s.PADDR=p.ADDR
8 /
AND s.PADDR=p.ADDR;select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
*
ERROR at line 4:
ORA-00933: SQL command not properly ended
SQL> @backup_hist.sql
SESSION_KEY INPUT_TYPE STATUS START_TIME END_TIME IN_SIZE OUT_SIZE HRS
----------- ------------- --------- -------------- -------------- ---------- ---------- -------
12798 DB FULL COMPLETED 10/18/16 23:59 10/19/16 01:47 211.73G 41.75G 1.81
12827 DB FULL COMPLETED 10/19/16 23:59 10/20/16 01:49 211.11G 41.61G 1.83
12854 DB FULL COMPLETED 10/20/16 23:59 10/21/16 01:46 211.23G 41.67G 1.78
12883 DB FULL COMPLETED 10/21/16 23:59 10/22/16 01:45 211.55G 41.72G 1.77
12910 DB FULL COMPLETED 10/22/16 23:59 10/23/16 01:41 210.89G 41.56G 1.70
12960 DB FULL COMPLETED 10/24/16 23:59 10/25/16 01:48 211.29G 41.67G 1.82
12987 DB FULL COMPLETED 10/25/16 23:59 10/26/16 01:48 212.23G 41.85G 1.82
13014 DB FULL COMPLETED 10/26/16 23:59 10/27/16 01:50 212.11G 41.81G 1.85
13041 DB FULL COMPLETED 10/27/16 23:59 10/28/16 01:47 212.04G 41.80G 1.80
13068 DB FULL COMPLETED 10/28/16 23:59 10/29/16 01:48 212.07G 41.80G 1.82
13095 DB FULL COMPLETED 10/29/16 23:59 10/30/16 01:50 212.04G 41.80G 1.85
13143 DB FULL COMPLETED 10/31/16 23:59 11/01/16 01:50 212.37G 41.89G 1.85
13172 DB FULL COMPLETED 11/01/16 23:59 11/02/16 01:50 212.91G 41.90G 1.84
13201 DB FULL COMPLETED 11/02/16 23:59 11/03/16 01:49 212.62G 41.94G 1.84
13228 DB FULL COMPLETED 11/03/16 23:59 11/04/16 01:50 212.59G 41.90G 1.85
13255 DB FULL COMPLETED 11/04/16 23:59 11/05/16 01:49 213.72G 42.12G 1.83
13282 DB FULL COMPLETED 11/05/16 23:59 11/06/16 01:43 212.47G 41.88G 1.73
13330 DB FULL COMPLETED 11/07/16 23:59 11/08/16 02:50 296.26G 81.08G 2.86
13357 DB FULL RUNNING 11/08/16 23:59 11/09/16 00:32 55.06G 11.49G .54
19 rows selected.
SQL> /
SESSION_KEY INPUT_TYPE STATUS START_TIME END_TIME IN_SIZE OUT_SIZE HRS
----------- ------------- --------- -------------- -------------- ---------- ---------- -------
12798 DB FULL COMPLETED 10/18/16 23:59 10/19/16 01:47 211.73G 41.75G 1.81
12827 DB FULL COMPLETED 10/19/16 23:59 10/20/16 01:49 211.11G 41.61G 1.83
12854 DB FULL COMPLETED 10/20/16 23:59 10/21/16 01:46 211.23G 41.67G 1.78
12883 DB FULL COMPLETED 10/21/16 23:59 10/22/16 01:45 211.55G 41.72G 1.77
12910 DB FULL COMPLETED 10/22/16 23:59 10/23/16 01:41 210.89G 41.56G 1.70
12960 DB FULL COMPLETED 10/24/16 23:59 10/25/16 01:48 211.29G 41.67G 1.82
12987 DB FULL COMPLETED 10/25/16 23:59 10/26/16 01:48 212.23G 41.85G 1.82
13014 DB FULL COMPLETED 10/26/16 23:59 10/27/16 01:50 212.11G 41.81G 1.85
13041 DB FULL COMPLETED 10/27/16 23:59 10/28/16 01:47 212.04G 41.80G 1.80
13068 DB FULL COMPLETED 10/28/16 23:59 10/29/16 01:48 212.07G 41.80G 1.82
13095 DB FULL COMPLETED 10/29/16 23:59 10/30/16 01:50 212.04G 41.80G 1.85
13143 DB FULL COMPLETED 10/31/16 23:59 11/01/16 01:50 212.37G 41.89G 1.85
13172 DB FULL COMPLETED 11/01/16 23:59 11/02/16 01:50 212.91G 41.90G 1.84
13201 DB FULL COMPLETED 11/02/16 23:59 11/03/16 01:49 212.62G 41.94G 1.84
13228 DB FULL COMPLETED 11/03/16 23:59 11/04/16 01:50 212.59G 41.90G 1.85
13255 DB FULL COMPLETED 11/04/16 23:59 11/05/16 01:49 213.72G 42.12G 1.83
13282 DB FULL COMPLETED 11/05/16 23:59 11/06/16 01:43 212.47G 41.88G 1.73
13330 DB FULL COMPLETED 11/07/16 23:59 11/08/16 02:50 296.26G 81.08G 2.86
13357 DB FULL RUNNING 11/08/16 23:59 11/09/16 00:32 55.06G 11.49G .56
19 rows selected.
SQL>
SQL> select p.SPID,s.sid,s.serial#,sw.EVENT,sw.SECONDS_IN_WAIT AS SEC_WAIT,sw.STATE,CLIENT_INFO
FROM V$SESSION_WAIT sw,V$SESSION s, V$PROCESS p where s.client_info LIKE 'rman%'
AND s.SID=sw.SID
AND s.PADDR=p.ADDR; 2 3 4
no rows selected
SQL> @backup_hist.sql
SESSION_KEY INPUT_TYPE STATUS START_TIME END_TIME IN_SIZE OUT_SIZE HRS
----------- ------------- --------- -------------- -------------- ---------- ---------- -------
12798 DB FULL COMPLETED 10/18/16 23:59 10/19/16 01:47 211.73G 41.75G 1.81
12827 DB FULL COMPLETED 10/19/16 23:59 10/20/16 01:49 211.11G 41.61G 1.83
12854 DB FULL COMPLETED 10/20/16 23:59 10/21/16 01:46 211.23G 41.67G 1.78
12883 DB FULL COMPLETED 10/21/16 23:59 10/22/16 01:45 211.55G 41.72G 1.77
12910 DB FULL COMPLETED 10/22/16 23:59 10/23/16 01:41 210.89G 41.56G 1.70
12960 DB FULL COMPLETED 10/24/16 23:59 10/25/16 01:48 211.29G 41.67G 1.82
12987 DB FULL COMPLETED 10/25/16 23:59 10/26/16 01:48 212.23G 41.85G 1.82
13014 DB FULL COMPLETED 10/26/16 23:59 10/27/16 01:50 212.11G 41.81G 1.85
13041 DB FULL COMPLETED 10/27/16 23:59 10/28/16 01:47 212.04G 41.80G 1.80
13068 DB FULL COMPLETED 10/28/16 23:59 10/29/16 01:48 212.07G 41.80G 1.82
13095 DB FULL COMPLETED 10/29/16 23:59 10/30/16 01:50 212.04G 41.80G 1.85
13143 DB FULL COMPLETED 10/31/16 23:59 11/01/16 01:50 212.37G 41.89G 1.85
13172 DB FULL COMPLETED 11/01/16 23:59 11/02/16 01:50 212.91G 41.90G 1.84
13201 DB FULL COMPLETED 11/02/16 23:59 11/03/16 01:49 212.62G 41.94G 1.84
13228 DB FULL COMPLETED 11/03/16 23:59 11/04/16 01:50 212.59G 41.90G 1.85
13255 DB FULL COMPLETED 11/04/16 23:59 11/05/16 01:49 213.72G 42.12G 1.83
13282 DB FULL COMPLETED 11/05/16 23:59 11/06/16 01:43 212.47G 41.88G 1.73
13330 DB FULL COMPLETED 11/07/16 23:59 11/08/16 02:50 296.26G 81.08G 2.86
13357 DB FULL RUNNING 11/08/16 23:59 11/09/16 00:49 55.06G 11.49G .83
19 rows selected.
SQL> alter system kill session '347,54874' immediate;
System altered.
SQL> @backup_hist.sql
SESSION_KEY INPUT_TYPE STATUS START_TIME END_TIME IN_SIZE OUT_SIZE HRS
----------- ------------- --------- -------------- -------------- ---------- ---------- -------
12798 DB FULL COMPLETED 10/18/16 23:59 10/19/16 01:47 211.73G 41.75G 1.81
12827 DB FULL COMPLETED 10/19/16 23:59 10/20/16 01:49 211.11G 41.61G 1.83
12854 DB FULL COMPLETED 10/20/16 23:59 10/21/16 01:46 211.23G 41.67G 1.78
12883 DB FULL COMPLETED 10/21/16 23:59 10/22/16 01:45 211.55G 41.72G 1.77
12910 DB FULL COMPLETED 10/22/16 23:59 10/23/16 01:41 210.89G 41.56G 1.70
12960 DB FULL COMPLETED 10/24/16 23:59 10/25/16 01:48 211.29G 41.67G 1.82
12987 DB FULL COMPLETED 10/25/16 23:59 10/26/16 01:48 212.23G 41.85G 1.82
13014 DB FULL COMPLETED 10/26/16 23:59 10/27/16 01:50 212.11G 41.81G 1.85
13041 DB FULL COMPLETED 10/27/16 23:59 10/28/16 01:47 212.04G 41.80G 1.80
13068 DB FULL COMPLETED 10/28/16 23:59 10/29/16 01:48 212.07G 41.80G 1.82
13095 DB FULL COMPLETED 10/29/16 23:59 10/30/16 01:50 212.04G 41.80G 1.85
13143 DB FULL COMPLETED 10/31/16 23:59 11/01/16 01:50 212.37G 41.89G 1.85
13172 DB FULL COMPLETED 11/01/16 23:59 11/02/16 01:50 212.91G 41.90G 1.84
13201 DB FULL COMPLETED 11/02/16 23:59 11/03/16 01:49 212.62G 41.94G 1.84
13228 DB FULL COMPLETED 11/03/16 23:59 11/04/16 01:50 212.59G 41.90G 1.85
13255 DB FULL COMPLETED 11/04/16 23:59 11/05/16 01:49 213.72G 42.12G 1.83
13282 DB FULL COMPLETED 11/05/16 23:59 11/06/16 01:43 212.47G 41.88G 1.73
13330 DB FULL COMPLETED 11/07/16 23:59 11/08/16 02:50 296.26G 81.08G 2.86
13357 DB FULL FAILED 11/08/16 23:59 11/09/16 00:50 55.05G 11.49G .84
19 rows selected.
SQL>
================= LINKING Queries====================================================================================================
Select username, default_tablespace, temporary_tablespace, profile, account_status
from sys.dba_users
union
select grantee, privilege priv
from dba_sys_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
order by grantee, priv
================
select username, default_tablespace, temporary_tablespace, profile, account_status,grantee,privilege priv
from sys.dba_users,dba_sys_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
order by username
/
===========GOOD 10,000 rows =================
select username, profile, account_status,granted_role,admin_option,default_role,grantee
from sys.dba_users,dba_role_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
order by username
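The queries above join dba_users to dba_sys_privs / dba_role_privs without a join predicate, so they return a Cartesian product (hence the 10,000 rows); the UNION version also combines a 5-column query with a 2-column one, which raises ORA-01789. A joined sketch that links each user to their roles and directly granted system privileges:
select u.username, u.profile, u.account_status,
       rp.granted_role, sp.privilege
from   sys.dba_users u
       left join dba_role_privs rp on rp.grantee = u.username
       left join dba_sys_privs  sp on sp.grantee = u.username
where  u.username not in
       ('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE','QDBA',
        'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
        'SYS','SYSTEM','TAB_OWNER','TEST',
        'SELECT_CATALOG_ROLE','SNMPAGENT',
        'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
order by u.username, rp.granted_role, sp.privilege;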
================Omer Test Datafile create script=============================================================================
create tablespace DBA_TEST datafile '/u01/oradata/EAIRT/dba_test.dbf' size 5M autoextend on next 1M maxsize 10M;
==================================================================================================================
OMER test TABLESPACE
====================
For OEM:
1. Create Test tablespaces
For Standalone:
CREATE TEMPORARY TABLESPACE DBA_TEMP TEMPFILE '{replace_with_db_file_path}/dba_temp.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
create tablespace DBA_TEST datafile '{replace_with_db_file_path}/dba_test.dbf' size 5M autoextend on next 1M maxsize 10M
extent management local AUTOALLOCATE
segment space management auto;
For RAC:
CREATE TEMPORARY TABLESPACE DBA_TEMP TEMPFILE '+DATADG' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
create tablespace DBA_TEST datafile '+DATADG' size 5M autoextend on next 1M maxsize 10M
extent management local AUTOALLOCATE
segment space management auto;
2. Create Test Account
create user DBA_TEST identified by dbaT_Q1Y2016
default tablespace DBA_TEST
quota unlimited on DBA_TEST
temporary tablespace DBA_TEMP
profile DHS_H_APPL;
grant create session to DBA_TEST;
grant Connect, resource to DBA_TEST;
grant select on dba_tablespace_usage_metrics to DBA_TEST;
3. Connect with Test account and Create Test table
connect DBA_TEST/dbaT_Q1Y2016
create table countries (
country_id varchar2(7),
country_name varchar2(100));
4. Run initial Test
begin
for IDs in 1..60000
loop
INSERT INTO dba_test.countries (country_id, country_name) VALUES (DBMS_RANDOM.string('L',7), DBMS_RANDOM.string('L',90));
commit;
End Loop;
End;
/
col Total_MB format 999,999
col Used_MB format 999,999
col USED_PERCENT format 990
select tablespace_name,
(tablespace_size*8192)/(1024*1024) total_mb,
(used_space*8192)/(1024*1024) used_mb,
used_percent
from dba_tablespace_usage_metrics
where tablespace_name like 'DBA%';
5. If usage is < 85 % insert more rows and check
begin
for IDs in 1..5000
loop
INSERT INTO dba_test.countries (country_id, country_name) VALUES (dbms_random.string('L',7), dbms_random.string('L',90));
commit;
End Loop;
End;
/
select tablespace_name,
(tablespace_size*8192)/(1024*1024) total_mb,
(used_space*8192)/(1024*1024) used_mb,
used_percent
from dba_tablespace_usage_metrics
where tablespace_name like 'DBA%';
Check OEM alerts in your email
6. Insert more rows and check for critical alerts when usage > 92 %
begin
for IDs in 1..1000
loop
INSERT INTO dba_test.countries (country_id, country_name) VALUES (dbms_random.string('L',7), dbms_random.string('L',90));
commit;
End Loop;
End;
/
select tablespace_name,
(tablespace_size*8192)/(1024*1024) total_mb,
(used_space*8192)/(1024*1024) used_mb,
used_percent
from dba_tablespace_usage_metrics
where tablespace_name like 'DBA%';
Check OEM alerts/incidents in your email
exit
7. Cleanup - Log in with a DBA account (kill any remaining DBA_TEST sessions first - see the session-lookup sketch after this step) and run:
-- alter system kill session '000, 000';
drop table dba_test.countries;
drop user DBA_TEST cascade;
drop tablespace DBA_TEST including contents and datafiles;
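The kill session line above is only a placeholder; a minimal lookup against v$session (sketch) to get the actual SID,SERIAL# values for any lingering DBA_TEST sessions:
set pages 100 lines 150
col kill_cmd format a70
select 'alter system kill session '''||sid||','||serial#||''' immediate;' kill_cmd,
       username, status, osuser, machine
  from v$session
 where username = 'DBA_TEST';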
========================================================== Delete trace file script======================================================
cd /u01/app/oracle/diag/rdbms/eairt/EAIRT/trace
find . -name "EAIRT*.tr*" -mtime +3 -print -exec rm -f {} \;
# auditing EAIRT
0 10 * * 1-6 /u01/app/oracle/scripts/audit/archive_audit.sh EAIRT > /u01/app/oracle/scripts/audit/logs/archive_audit_EAIRT.log 2>&1
0 * * * 1-6 /u01/app/oracle/scripts/audit/hourly_archive_audit.sh EAIRT > /u01/app/oracle/scripts/audit/logs/hourly_archive_audit_EAIRT.log 2>&1
0 22 * * 6 /u01/app/oracle/scripts/audit/purge_audit.sh EAIRT > /u01/app/oracle/scripts/audit/logs/purge_audit_EAIRT.log 2>&1
0 23 * * 1-6 /u01/app/oracle/scripts/rmanbackup_EAIRT_disk.sh > /u01/app/oracle/scripts/rmanbackup_EAIRT_disk.log 2>&1
#0 00 * * 7 /u01/app/oracle/scripts/delete_archlogs.sh EAIRT > /u01/app/oracle/scripts/delete_archlogs.log 2>&1
=======================
#!/bin/ksh
echo "Starting old trace file delete at `date`"
. $HOME/.profile
rman target=/ << EOF
DELETE NOPROMPT BACKUPSET COMPLETED BEFORE 'sysdate-1';
backup device type disk format '/u01/oradata/backup/EAIRT/db_%d_%I_%s_%p.bkup' tag daily_backup database;
backup device type disk format '/u01/oradata/backup/EAIRT/log_%d_%I_%s_%p.bkup' tag daily_backup archivelog all
not backed up delete all input;
backup device type disk format '/u01/oradata/backup/EAIRT/cf_%d_%U.bkup' tag daily_backup current controlfile;
allocate channel for maintenance type disk;
delete noprompt obsolete device type disk;
release channel;
EXIT;
EOF
echo "Backup Completed at `date`"
cp /u01/app/oracle/scripts/delete_old_trace.sh /u01/app/oracle/diag/rdbms/eairt/EAIRT/trace
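To confirm the nightly disk backups are completing (the kind of summary pasted at the top of this section), a quick check against v$rman_backup_job_details - a minimal sketch, not part of the script above - can be run from SQL*Plus:
set pages 100 lines 200
select session_key, input_type, status,
       to_char(start_time,'mm/dd/yy hh24:mi') started,
       to_char(end_time,'mm/dd/yy hh24:mi')   ended,
       input_bytes_display, output_bytes_display, compression_ratio
  from v$rman_backup_job_details
 where start_time > sysdate - 7
 order by session_key;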
=============================script to view ALL SESSIONS==============================================================================
SQL:
====
oracle@d2asepric071[BASSP]# vi sh_all_sessions.sql
SET echo off heading on
set pages 9999 lines 120
REM Lists active sessions in the database - Remove or change the vs.status where
REM clause to see sessions with other statuses
spool all_users.txt
column username format a15
column status format a8
column osuser format a10
column logtime format a18
column sid format 9999
column serial# format 9999999
column spid format a6 heading 'PID'
select vs.sid, vs.serial#, vs.username, vs.status, vs.osuser,
to_char(vs.logon_time,'dd-mon-yy hh24:Mi:ss') logtime
from v$session vs
where vs.username is not null
order by 4
/
spool off
"sh_all_sessions.sql" [dos] 23L, 665C 1,1 Top
===================================ORACLE Queries=====================================================
CHECKING OBJECTS that have CHANGED:
1. Today (set linesize 100/pagesize 1000)
==========
SQL> set linesize 250 pagesize 2000
SQL> select object_type,object_name,last_ddl_time from user_objects where last_ddl_time >= TRUNC(SYSDATE) order by object_type,object_name;
=============================================================================================================================================
OBJECT_TYPE
-----------------------
OBJECT_NAME
----------------------------------------------------------------------------------------------------
LAST_DDL_
---------
INDEX PARTITION
SYS_IL0000195324C00009$$
19-MAY-16
INDEX PARTITION
WRP$_REPORTS_DETAILS_IDX01
19-MAY-16
INDEX PARTITION
WRP$_REPORTS_DETAILS_IDX02
19-MAY-16
INDEX PARTITION
WRP$_REPORTS_IDX01
19-MAY-16
INDEX PARTITION
WRP$_REPORTS_IDX02
19-MAY-16
JOB
CLEANUP_NON_EXIST_OBJ
19-MAY-16
JOB
CLEANUP_ONLINE_IND_BUILD
19-MAY-16
JOB
CLEANUP_ONLINE_PMO
19-MAY-16
JOB
CLEANUP_TAB_IOT_PMO
19-MAY-16
JOB
CLEANUP_TRANSIENT_PKG
19-MAY-16
JOB
CLEANUP_TRANSIENT_TYPE
19-MAY-16
JOB
FILE_SIZE_UPD
19-MAY-16
JOB
ORA$AUTOTASK_CLEAN
19-MAY-16
JOB
PURGE_LOG
19-MAY-16
JOB
RSE$CLEAN_RECOVERABLE_SCRIPT
19-MAY-16
JOB
SM$CLEAN_AUTO_SPLIT_MERGE
19-MAY-16
LOB PARTITION
SYS_LOB0000195324C00009$$
19-MAY-16
TABLE PARTITION
WRP$_REPORTS
19-MAY-16
TABLE PARTITION
WRP$_REPORTS_DETAILS
19-MAY-16
TABLE PARTITION
WRP$_REPORTS_TIME_BANDS
19-MAY-16
======================Linesize 150 /pagesize 1000 =================================================================================================
OBJECT_TYPE
-----------------------
OBJECT_NAME LAST_DDL_
-------------------------------------------------------------------------------------------------------------------------------- ---------
INDEX PARTITION
SYS_IL0000195324C00009$$ 19-MAY-16
INDEX PARTITION
WRP$_REPORTS_DETAILS_IDX01 19-MAY-16
INDEX PARTITION
WRP$_REPORTS_DETAILS_IDX02 19-MAY-16
INDEX PARTITION
WRP$_REPORTS_IDX01 19-MAY-16
INDEX PARTITION
WRP$_REPORTS_IDX02 19-MAY-16
JOB
CLEANUP_NON_EXIST_OBJ 19-MAY-16
JOB
CLEANUP_ONLINE_IND_BUILD 19-MAY-16
JOB
CLEANUP_ONLINE_PMO 19-MAY-16
JOB
CLEANUP_TAB_IOT_PMO 19-MAY-16
JOB
CLEANUP_TRANSIENT_PKG 19-MAY-16
JOB
CLEANUP_TRANSIENT_TYPE 19-MAY-16
JOB
FILE_SIZE_UPD 19-MAY-16
JOB
ORA$AUTOTASK_CLEAN 19-MAY-16
JOB
PURGE_LOG 19-MAY-16
JOB
RSE$CLEAN_RECOVERABLE_SCRIPT 19-MAY-16
JOB
SM$CLEAN_AUTO_SPLIT_MERGE 19-MAY-16
LOB PARTITION
SYS_LOB0000195324C00009$$ 19-MAY-16
TABLE PARTITION
WRP$_REPORTS 19-MAY-16
TABLE PARTITION
WRP$_REPORTS_DETAILS 19-MAY-16
TABLE PARTITION
WRP$_REPORTS_TIME_BANDS 19-MAY-16
20 rows selected.
===============Verify Object changes for the last 7 days [sysdate-7] (i.e. from today's date back 7 days) ==================================
select object_type,object_name,last_ddl_time from user_objects where last_ddl_time >= TRUNC(SYSDATE-7) order by object_type,object_name
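user_objects only covers objects owned by the connected schema; to check a specific application schema from a DBA account, a sketch against dba_objects (the OWNER value here is just an example) is:
select owner, object_type, object_name, last_ddl_time
  from dba_objects
 where owner = 'BASS_ICE'      -- example schema, adjust as needed
   and last_ddl_time >= trunc(sysdate-7)
 order by owner, object_type, object_name;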
==============
FULL DATABASE EXPORT script:
============================
/home/oracle/eair_export.sh
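The contents of eair_export.sh are not reproduced here; it presumably wraps a Data Pump full export. A rough sketch of the same idea driven from PL/SQL with DBMS_DATAPUMP (the DATA_PUMP_DIR directory object, job name and file names are assumptions, not necessarily what the script uses):
set serveroutput on
DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- open a FULL database export job (job name is illustrative)
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'FULL', job_name => 'EAIR_FULL_EXP');
  -- dump file and log file written to the DATA_PUMP_DIR directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'eair_full_%U.dmp', directory => 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'eair_full_exp.log', directory => 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
  DBMS_OUTPUT.PUT_LINE('Export job finished with state: ' || job_state);
END;
/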
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
=====================Information about REDO LOG scripts===========================================================================================
Information about REDO LOGs (status, members, etc.) in the CURRENT CONTROL FILE (mounted) in the TAMSP database => redo3a & redo3b are the REDO LOG members CURRENTLY in use (the data-area and FRA copies respectively).
set pages 999 lines 120
col group# format 999999 jus cen
col status format a20 jus cen
col member format a55 jus cen
col bytes format 999,999,999,999
col mbytes heading "Megabytes" format 999,999
select * from v$LOG;
GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME CON_ID
------- ---------- ---------- ---------------- ---------- ---------- --- -------------------- ------------- --------- ------------ ------------ ------
1 1 173 209,715,200 512 2 YES INACTIVE 15740871 18-JUL-16 15740901 18-JUL-16 0
2 1 174 209,715,200 512 2 YES INACTIVE 15740901 18-JUL-16 15906113 19-JUL-16 0
3 1 175 209,715,200 512 2 NO CURRENT 15906113 19-JUL-16 2.8147E+14 0
4 1 172 209,715,200 512 2 YES INACTIVE 15653555 17-JUL-16 15740871 18-JUL-16 0
=====================================================================
OMERS
=====
SQL> @sh_redo_logs.sql
Redo Log Summary
Size
Group Thread Member Archived Status (MB)
----- ------ ------------------------------------------------------------ ---------- ---------- ----
1 1 /u01/oradata/TAMSP1/redo01a.log YES INACTIVE 200
1 1 /u01/FRA/TAMSP1/onlinelog/redo01b.log YES INACTIVE 200
2 1 /u01/oradata/TAMSP1/redo02a.log YES INACTIVE 200
2 1 /u01/FRA/TAMSP1/onlinelog/redo02b.log YES INACTIVE 200
3 1 /u01/FRA/TAMSP1/onlinelog/redo03b.log NO CURRENT 200
3 1 /u01/oradata/TAMSP1/redo03a.log NO CURRENT 200
4 1 /u01/oradata/TAMSP1/redo04a.log YES INACTIVE 200
4 1 /u01/FRA/TAMSP1/onlinelog/redo04b.log YES INACTIVE 200
8 rows selected.
SQL>
SQL> @sh_logs.sql
SQL> set pages 999 lines 120
SQL> col group# format 999999 jus cen
SQL> col status format a20 jus cen
SQL> col member format a55 jus cen
SQL> col bytes format 999,999,999,999
SQL> col mbytes heading "Megabytes" format 999,999
SQL>
SQL> select a.group#, b.status, a.member, b.bytes/(1024*1024) mbytes
2 from v$logfile a, v$log b
3 where a.group# = b.group#
4 union
5 select a.group#, b.status, a.member, b.bytes/(1024*1024) mbytes
6 from v$logfile a, v$standby_log b
7 where a.group# = b.group#
8 order by 1,3
9 /
Redo Log Summary
Group Status Member Megabytes
------- -------------------- ------------------------------------------------------- ---------
1 INACTIVE /u01/FRA/TAMSP1/onlinelog/redo01b.log 200
1 INACTIVE /u01/oradata/TAMSP1/redo01a.log 200
2 INACTIVE /u01/FRA/TAMSP1/onlinelog/redo02b.log 200
2 INACTIVE /u01/oradata/TAMSP1/redo02a.log 200
3 CURRENT /u01/FRA/TAMSP1/onlinelog/redo03b.log 200
3 CURRENT /u01/oradata/TAMSP1/redo03a.log 200
4 INACTIVE /u01/FRA/TAMSP1/onlinelog/redo04b.log 200
4 INACTIVE /u01/oradata/TAMSP1/redo04a.log 200
5 UNASSIGNED /u01/FRA/TAMSP1/onlinelog/sredo05b.log 200
5 UNASSIGNED /u01/oradata/TAMSP1/sredo05a.log 200
6 UNASSIGNED /u01/FRA/TAMSP1/onlinelog/sredo06b.log 200
6 UNASSIGNED /u01/oradata/TAMSP1/sredo06a.log 200
7 UNASSIGNED /u01/FRA/TAMSP1/onlinelog/sredo07b.log 200
7 UNASSIGNED /u01/oradata/TAMSP1/sredo07a.log 200
14 rows selected.
SQL>
SQL>
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@d2asenpnp001.dc2.dhs.gov[TAMSP1]$ exit
======================VIEW members in a REDO LOG FILE GROUP (and STANDBY/DATA GUARD logs)==========================================================
rem --- To view all the members in a REDO LOG file GROUP--------v$logfile-----
SQL> desc v$logfile
Name Null? Type
----------------------------------------------------------------- -------- --------------------------------------------
GROUP# NUMBER
STATUS VARCHAR2(7)
TYPE VARCHAR2(7)
MEMBER VARCHAR2(513)
IS_RECOVERY_DEST_FILE VARCHAR2(3)
CON_ID NUMBER
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
set pages 999 lines 120
col group# format 999999 jus cen
col status format a20 jus cen
col member format a55 jus cen
col bytes format 999,999,999,999
col mbytes heading "Megabytes" format 999,999
select * from v$LOGFILE;
-----------------------------
SQL> set pagesize 999 lines 120
SQL> col group# format a10 jus cen
SQL> col status format a20 jus cen
SQL> col member format a55 jus cen
SQL> col IS_RECOVERY_DEST_FILE for a75
SQL> col bytes format 999,999,999,999
SQL> col mbytes heading "Megabytes" format 999,999
SQL> select * from v$LOGFILE;
GROUP# STATUS TYPE MEMBER IS_RECOVERY_DEST_FILE
---------- -------------------- ------- --------------------------------- ----- -----------------
IS_RECOVERY_DEST_FILE CON_ID
--------------------------------------------------------------------------- ----------
########## ONLINE /u01/oradata/TAMSP1/redo03a.log
NO 0 NO
########## ONLINE /u01/oradata/TAMSP1/redo02a.log
NO 0
########## ONLINE /u01/oradata/TAMSP1/redo04a.log
NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo04b.log
NO 0
########## ONLINE /u01/oradata/TAMSP1/redo01a.log
NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo01b.log
NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo03b.log
NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo02b.log
NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo05a.log
NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo05b.log
NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo06a.log
NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo06b.log
NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo07a.log
NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo07b.log
NO 0
14 rows selected.
============================================================================================================
GROUP# STATUS TYPE MEMBER IS_RECOVERY_DEST_FILE CON_ID
----------- --------- ----- ------------ ----------------------- ----------
########## ONLINE /u01/oradata/TAMSP1/redo03a.log NO 0
########## ONLINE /u01/oradata/TAMSP1/redo02a.log NO 0
########## ONLINE /u01/oradata/TAMSP1/redo04a.log NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo04b.log NO 0
########## ONLINE /u01/oradata/TAMSP1/redo01a.log NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo01b.log NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo03b.log NO 0
########## ONLINE /u01/FRA/TAMSP1/onlinelog/redo02b.log NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo05a.log NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo05b.log NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo06a.log NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo06b.log NO 0
########## STANDBY /u01/oradata/TAMSP1/sredo07a.log NO 0
########## STANDBY /u01/FRA/TAMSP1/onlinelog/sredo07b.log NO 0
============================RAC====RESIZE REDO Logs====================================================================================
In RAC
@sh_redo_logs
set pages 999 lines 120
col thread# format 999 heading 'Thread'
col group# format 999 heading 'Group'
col member format a60 heading 'Member' justify c
col status format a10 heading 'Status' justify c
col archived format a10 heading 'Archived'
col fsize format 9999 heading 'Size|(MB)'
select l.group#,l.thread#,
member,
archived,
l.status,
(bytes/1024/1024) fsize
from v$log l,
v$logfile f
where f.group# = l.group#
order by 1;
Thread Group Member Archived Status (MB)
------ ----- ------------------------------------------------------------ ---------- ---------- ----
1 2 +DATADG/idmuat/onlinelog/group_2.262.783895383 YES INACTIVE 50
1 2 +FRADG/idmuat/onlinelog/group_2.258.783895383 YES INACTIVE 50
1 1 +DATADG/idmuat/onlinelog/group_1.261.783895381 NO CURRENT 50
1 1 +FRADG/idmuat/onlinelog/group_1.257.783895381 NO CURRENT 50
2 3 +DATADG/idmuat/onlinelog/group_3.265.783895579 YES INACTIVE 50
2 3 +FRADG/idmuat/onlinelog/group_3.259.783895579 YES INACTIVE 50
2 4 +DATADG/idmuat/onlinelog/group_4.266.783895579 NO CURRENT 50
2 4 +FRADG/idmuat/onlinelog/group_4.260.783895579 NO CURRENT 50
8 rows selected.
Looks like in each instance we only have two Redo groups - We need at least three (preferably 4) in each instance. (A quick size/status check is sketched after the commands below.)
The first column (thread) tells which instance owns the redo group.
In Instance 1:
-- Add new redo log groups
alter database add logfile group 5 (
'+DATADG','+FRADG') size 200M reuse;
alter database add logfile group 6 (
'+DATADG','+FRADG') size 200M reuse;
-- Now drop and re-add Group 1 and 2
-- if they are not INACTIVE do a log switch
alter system switch logfile;
-- If necessary Issue a global checkpoint on any one node to turn all the ACTIVE redo log groups to INACTIVE.
alter system checkpoint global;
alter database drop logfile group 2;
alter database add logfile group 2 (
'+DATADG','+FRADG') size 200M reuse;
alter database drop logfile group 1;
alter database add logfile group 1 (
'+DATADG','+FRADG') size 200M reuse;
In Instance 2
-- Add new Redo log groups
alter database add logfile group 7 (
'+DATADG','+FRADG') size 200M reuse;
alter database add logfile group 8 (
'+DATADG','+FRADG') size 200M reuse;
-- Drop and re-add Groups 3 and 4 with the right size
-- if they are not INACTIVE do a log switch
alter system switch logfile;
-- If necessary Issue a global checkpoint on any one node to turn all the ACTIVE redo log groups to INACTIVE.
alter system checkpoint global;
alter database drop logfile group 3;
alter database add logfile group 3 (
'+DATADG','+FRADG') size 200M reuse;
alter database drop logfile group 4;
alter database add logfile group 4 (
'+DATADG','+FRADG') size 200M reuse;
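Quick sanity check (a minimal sketch) to run before dropping a group and again after re-adding it - confirm nothing is still ACTIVE/CURRENT and the new 200M size took effect:
set pages 999 lines 120
select thread#, group#, bytes/1024/1024 size_mb, status
  from v$log
 order by thread#, group#;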
--------------------------------------------Cronjob Work order-------------------------------------------------------------------------------
1. RESTORE POINT CREATION
=========================
script: @/u01/app/oracle/scripts/cr_rstpnt_wo325140.sql
---------------
Hi Katie,
Restore point has been created. See below:
SQL> @cr_rstpnt_wo258406.sql
Restore point created.
NAME SCN TIME GUA STORAGE_MB
---------------------------------------- ---------- --------------------------------- --- ----------
WO_258406 686313988 26-JAN-16 09.30.08.000000000 PM YES 50
SQL>
Let us know when you want it dropped.
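The cr_rstpnt script itself isn't shown above; a minimal sketch of what such a script typically contains (the restore point name here is just an example tied to a work-order number):
-- create a guaranteed restore point for the work order
create restore point WO_325140 guarantee flashback database;
-- confirm it (same columns as the output above)
set lines 150
col name format a40
col time format a35
select name, scn, time, guarantee_flashback_database gua, storage_size/1024/1024 storage_mb
  from v$restore_point;
-- later, once the work order is closed:
-- drop restore point WO_325140;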
=================================Show Users with DBA privilege Script========================================================
script name: sh_list_of_privilege_users_with_granted_role_DBA.sql
===========
users_with_DBA_role
=============
rem -----------------------------------------------------------
rem list all users with DBA privileges
rem output is spooled to wo291685.out
---------------------------------------------------------------
spool wo291685.log
set linesize 150
set pagesize 1000
select * from dba_role_privs where granted_role='DBA';
spool off;
BASSD: vi wo291685.sql
=====
rem -----------------------------------------------------------
rem list all users with DBA privileges
rem output is spooled to wo291685.out
---------------------------------------------------------------
spool wo291685.log
set linesize 150
set pagesize 1000
select * from dba_role_privs where granted_role='DBA';
spool off;
$$$$$$$$$$$$$$$$$$ FROM BIGIDY $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
BREAK ON USERNAME SKIP 2;
SELECT GRANTEE AS USERNAME, OWNER || '.' || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN ('ANONYMOUS', 'MGMT_VIEW', 'SYS', 'SYSTEM', 'APPQOSSYS', 'XDB', 'SYSMAN', 'OLAPSYS', 'ORDSYS', 'OWBSYS', 'MDSYS', 'EXFSYS', 'APEX_PUBLIC_USER', 'CTXSYS', 'FLOWS_FILES', 'OLAPSYS', 'ORDPLUGINS', 'ORACLE_OCM', 'PUBLIC', 'DBSNMP', 'DBA', 'AUDITDB', 'TSMSYS', 'DBAUDCON', 'DBAUDIT', 'OEM_USR', 'WMSYS', 'ORADBSS', 'OUTLN', 'MONITOR')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1, 2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&&&&&&&&&&&&&&&&&&&&&&&&& BIGIDY &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
spool /u01/app/oracle/home/wo291685.log
set linesize 150
set pagesize 1000
SELECT GRANTEE AS USERNAME, OWNER || '.' || TABLE_NAME AS HAS_ACCESS_TO, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE GRANTEE NOT IN ('ANONYMOUS','MGMT_VIEW','SYS','SYSTEM','APPQOSSYS','XDB','SYSMAN','OLAPSYS','ORDSYS','OWBSYS','MDSYS','EXFSYS','APEX_030200','APEX_PUBLIC_USER','CTXSYS','FLOWS_FILES','OLAPSYS','ORDPLUGINS','ORACLE_OCM','PUBLIC','DBSNMP','DBA','AUDITDB','TSMSYS'
,'DBAUDCON','DBAUDIT','OEM_USR','WMSYS','ORADBSS','OUTLN','MONITOR')
AND GRANTEE IN (SELECT USERNAME FROM DBA_USERS)
ORDER BY 1,2;
spool off;
%%%%%%%%%%%%%%%%%% 4m BASSP %%%%%%%% user_privs $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
column priv format a45
column grantee format a25
set pagesize 66
spool sh_user_privs.log
select grantee, privilege priv
from dba_sys_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
union
select grantee, privilege||' on '||table_name priv
from dba_tab_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
union
select grantee, granted_role priv
from dba_role_privs
where grantee not in
('ORACLE','IMP_FULL_DATABASE','EXP_FULL_DATABASE', 'QDBA',
'DBSNMP','DBA','CONNECT','RESOURCE','RECOVERY_CATALOG_OWNER',
'SYS','SYSTEM','TAB_OWNER','TEST',
'SELECT_CATALOG_ROLE','SNMPAGENT',
'Q_USER_ROLE','LMS','EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE')
order by grantee, priv;
spool off
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%INDEX-nice-presentation-results/display%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
SQL> select 'alter index '||index_name||' rebuild;' from all_indexes where status='UNUSABLE';
################################################################################################################################################
===========================USER Last login details script===============================================================================
select TO_CHAR(TIMESTAMP#,'MM/DD/YY HH:MI:SS') TIMESTAMP,
USERID, AA.NAME ACTION FROM SYS.AUD$ AT, SYS.AUDIT_ACTIONS AA
WHERE AT.ACTION# = AA.ACTION
and AA.name='LOGON'
and userid in ('&User_id')
ORDER BY TIMESTAMP# DESC;
select OS_USERNAME,action_name,USERNAME,to_char(timestamp, 'DD MON YYYY hh24:mi') logon_time,
to_char(logoff_time,'DD MON YYYY hh24:mi') logoff
from dba_audit_session where username = '&user'
AND (timestamp > (sysdate - 61))
order by logon_time,username,timestamp,logoff_time;
-------------------------------------------------------User_roles---------------------------------
Tue Jun 07 page 1
ORACLE USER REPORT
User Status Default Temporary Users Profile Roles Admin? Default?
---------- ---------- --------------- --------------- --------------- --------------------- ------- ----------
APEX_04020 EXPIRED & SYSAUX TEMP DHS_H_APPL CONNECT NO YES
0 LOCKED
RESOURCE NO YES
APHELPS EXPIRED BASS_DATA TEMP DHS_H_IND R_TSTR NO YES
ASEVILLA EXPIRED BASS_DATA TEMP DHS_H_IND DBA NO YES
R_CBM NO YES
ASHOME LOCKED BASS_DATA TEMP DHS_H_IND R_CBM NO YES
R_DEV NO YES
BASS_ICE OPEN BASS_DATA TEMP DHS_H_APPL R_ETL NO YES
BASS_SEC OPEN BASS_DATA TEMP DHS_H_APPL R_ETL NO YES
CBM_ODI_MA OPEN CBM_EPM_TBSP APP_TEMP DHS_H_APPL R_ETL NO YES
STER
CBM_ODI_WO OPEN CBM_EPM_TBSP APP_TEMP DHS_H_APPL R_ETL NO YES
RK
CTXSYS LOCKED SYSAUX TEMP DHS_H_APPL CTXAPP YES YES
RESOURCE NO YES
DBA_TEST OPEN DBA_TEST DBA_TEMP DHS_H_APPL CONNECT NO YES
RESOURCE NO YES
DBSNMP OPEN SYSAUX TEMP DHS_H_APPL CDB_DBA NO YES
DV_MONITOR NO YES
OEM_MONITOR NO YES
DEV_BIPLAT OPEN OBI_REP_TBSP APP_TEMP DHS_H_APPL CONNECT NO YES
FORM
RESOURCE NO YES
DEV_MDS OPEN OBI_REP_TBSP APP_TEMP DHS_H_APPL CONNECT NO YES
DVF LOCKED SYSAUX TEMP DHS_H_APPL CONNECT NO YES
DVSYS LOCKED SYSAUX TEMP DHS_H_APPL CONNECT NO YES
DV_ACCTMGR YES YES
DV_ADMIN YES YES
DV_AUDIT_CLEANUP YES YES
DV_DATAPUMP_NETWORK_L YES YES
INK
DV_GOLDENGATE_ADMIN YES YES
DV_GOLDENGATE_REDO_AC YES YES
CESS
DV_MONITOR YES YES
DV_OWNER YES YES
DV_PATCH_ADMIN YES YES
DV_PUBLIC YES YES
DV_SECANALYST YES YES
DV_STREAMS_ADMIN YES YES
DV_XSTREAM_ADMIN YES YES
RESOURCE NO YES
EPM_DBA OPEN EPM_TBSP TEMP DHS_H_APPL DBA NO YES
EVOSE OPEN USERS TEMP DHS_H_IND DBA NO YES
GSMCATUSER LOCKED USERS TEMP DHS_H_APPL AQ_ADMINISTRATOR_ROLE NO YES
CONNECT NO YES
GSMADMIN_ROLE NO YES
GSM_POOLADMIN_ROLE NO YES
GSMUSER LOCKED USERS TEMP DHS_H_APPL GSMUSER_ROLE NO YES
INFRA_REP_ OPEN INFRA TEMP DHS_H_APPL RESOURCE NO YES
DA
INFRA_REP_ OPEN INFRA TEMP DHS_H_APPL RESOURCE NO YES
DM
R_INFRA NO YES
INFRA_REP_ OPEN INFRA TEMP DHS_H_APPL RESOURCE NO YES
MM
R_INFRA NO YES
INFRA_REP_ OPEN INFRA TEMP DHS_H_APPL RESOURCE NO YES
PC
R_INFRA NO YES
JKSHITIJ OPEN BASS_DATA TEMP DHS_H_IND R_DEV NO YES
R_TSTR NO YES
JSHARP OPEN BASS_DATA TEMP DHS_H_IND R_READ NO YES
JSTRATON LOCKED BASS_DATA TEMP DHS_H_IND R_DEV NO YES
JTHERIANOS EXPIRED BASS_DATA TEMP DHS_H_IND R_TSTR NO YES
KBROCK OPEN BASS_DATA TEMP DHS_H_IND R_DEV NO YES
LBACSYS LOCKED SYSTEM TEMP DHS_H_APPL LBAC_DBA YES YES
RESOURCE NO YES
LOAD_ADMIN OPEN BASS_DATA TEMP DHS_H_APPL DBA NO YES
R_ETL NO YES
LPOMPONIO OPEN BASS_DATA TEMP DHS_H_IND DBA NO YES
R_DEV NO YES
MDDATA LOCKED USERS TEMP DHS_H_APPL CONNECT NO YES
RESOURCE NO YES
MDSYS LOCKED SYSAUX TEMP DHS_H_APPL CONNECT NO YES
RESOURCE NO YES
MMANIGOLD OPEN BASS_DATA TEMP DHS_H_IND R_READ NO YES
OBIEE OPEN BASS_DATA TEMP DHS_H_APPL R_READ NO YES
ODI_WORK OPEN ODI_TBSP APP_TEMP DHS_H_APPL R_ETL NO YES
OEM_DBA OPEN SYSAUX TEMP DHS_H_APPL CONNECT NO YES
OJVMSYS LOCKED USERS TEMP DHS_H_APPL RESOURCE NO YES
OLAPSYS LOCKED SYSAUX TEMP DHS_H_APPL OLAP_DBA NO YES
ORDSYS LOCKED SYSAUX TEMP DHS_H_APPL JAVAUSERPRIV NO YES
OUTLN LOCKED SYSTEM TEMP DHS_H_APPL RESOURCE NO YES
SHY OPEN BASS_DATA TEMP DHS_H_IND R_TSTR NO YES
SPATIAL_CS LOCKED USERS TEMP DHS_H_APPL CONNECT NO YES
W_ADMIN_US
R
RESOURCE NO YES
SPATIAL_CSW_ADMIN YES YES
SPATIAL_WF LOCKED USERS TEMP DHS_H_APPL CONNECT NO YES
S_ADMIN_US
R
RESOURCE NO YES
SPATIAL_WFS_ADMIN YES YES
STG_EPM OPEN BASS_STG TEMP DHS_H_APPL R_ETL NO YES
STG_ICE OPEN BASS_STG TEMP DHS_H_APPL R_ETL NO YES
SYS OPEN SYSTEM TEMP DHS_H_APPL ADM_PARALLEL_EXECUTE_ YES YES
TASK
APEX_ADMINISTRATOR_RO YES YES
LE
APEX_GRANTS_FOR_NEW_U YES YES
SERS_ROLE
R_INFRA NO YES
AQ_ADMINISTRATOR_ROLE YES YES
AQ_USER_ROLE YES YES
AUDIT_ADMIN YES YES
AUDIT_VIEWER YES YES
AUTHENTICATEDUSER YES YES
CAPTURE_ADMIN YES YES
CDB_DBA YES YES
CONNECT YES YES
CSW_USR_ROLE YES YES
CTXAPP YES YES
DATAPUMP_EXP_FULL_DAT YES YES
ABASE
DATAPUMP_IMP_FULL_DAT YES YES
ABASE
DBA YES YES
DBFS_ROLE YES YES
DELETE_CATALOG_ROLE YES YES
DV_REALM_OWNER YES YES
DV_REALM_RESOURCE YES YES
EJBCLIENT YES YES
EM_EXPRESS_ALL YES YES
EM_EXPRESS_BASIC YES YES
EXECUTE_CATALOG_ROLE YES YES
EXP_FULL_DATABASE YES YES
GATHER_SYSTEM_STATIST YES YES
ICS
GDS_CATALOG_SELECT YES YES
GSMADMIN_ROLE YES YES
GSMUSER_ROLE YES YES
GSM_POOLADMIN_ROLE YES YES
HS_ADMIN_EXECUTE_ROLE YES YES
HS_ADMIN_ROLE YES YES
HS_ADMIN_SELECT_ROLE YES YES
IMP_FULL_DATABASE YES YES
JAVADEBUGPRIV YES YES
JAVAIDPRIV YES YES
JAVASYSPRIV YES YES
JAVAUSERPRIV YES YES
JAVA_ADMIN YES YES
JAVA_DEPLOY YES YES
JMXSERVER YES YES
LBAC_DBA YES YES
LOGSTDBY_ADMINISTRATO YES YES
R
OEM_ADVISOR YES YES
OEM_MONITOR YES YES
OLAP_DBA YES YES
OLAP_USER YES YES
OLAP_XS_ADMIN YES YES
OPTIMIZER_PROCESSING_ YES YES
RATE
ORDADMIN YES YES
PDB_DBA YES YES
PROVISIONER YES YES
RECOVERY_CATALOG_OWNE YES YES
R
RECOVERY_CATALOG_USER YES YES
RESOURCE YES YES
R_CBM YES YES
R_DEV YES YES
R_ETL YES YES
R_INFRA YES YES
R_READ YES YES
R_TSTR YES YES
SCHEDULER_ADMIN YES YES
SELECT_CATALOG_ROLE YES YES
SPATIAL_CSW_ADMIN YES YES
SPATIAL_WFS_ADMIN YES YES
WFS_USR_ROLE YES YES
XDBADMIN YES YES
XDB_SET_INVOKER YES YES
XDB_WEBSERVICES YES YES
XDB_WEBSERVICES_OVER_ YES YES
HTTP
XDB_WEBSERVICES_WITH_ YES YES
PUBLIC
XS_CACHE_ADMIN YES YES
XS_NAMESPACE_ADMIN YES YES
R_CBM YES YES
R_DEV YES YES
R_ETL YES YES
R_INFRA YES YES
R_READ YES YES
R_TSTR YES YES
SCHEDULER_ADMIN YES YES
SELECT_CATALOG_ROLE YES YES
SPATIAL_CSW_ADMIN YES YES
SPATIAL_WFS_ADMIN YES YES
WFS_USR_ROLE YES YES
XDBADMIN YES YES
XDB_SET_INVOKER YES YES
XDB_WEBSERVICES YES YES
XDB_WEBSERVICES_OVER_ YES YES
HTTP
XDB_WEBSERVICES_WITH_ YES YES
PUBLIC
XS_CACHE_ADMIN YES YES
XS_NAMESPACE_ADMIN YES YES
XS_RESOURCE YES YES
XS_SESSION_ADMIN YES YES
SYSBACKUP LOCKED USERS TEMP DHS_H_APPL SELECT_CATALOG_ROLE NO YES
SYSTEM OPEN SYSTEM TEMP DHS_H_APPL AQ_ADMINISTRATOR_ROLE YES YES
DBA NO YES
TBARTHA OPEN BASS_DATA TEMP DHS_H_IND R_READ NO YES
USERTEST OPEN USER TEMP DEFAULT CONNECT NO YES
WMSYS LOCKED SYSAUX TEMP DHS_H_APPL WM_ADMIN_ROLE YES YES
XDB LOCKED SYSAUX TEMP DHS_H_APPL CTXAPP NO YES
DBFS_ROLE NO YES
RESOURCE NO YES
169 rows selected.
#############################################################################################################
################################ 360° HEALTH CHECK script ###############################################################################################
# ###############################################################################################
# DATABASE DAILY HEALTH CHECK MONITORING SCRIPT
# [VER 3.3]
# ===============================================================================
# CAUTION:
# THIS SCRIPT MAY CAUSE A SLIGHT PERFORMANCE IMPACT WHEN IT RUNS,
# SO I RECOMMEND NOT RUNNING IT TOO FREQUENTLY; I USUALLY RUN IT ONCE A DAY.
# E.G. YOU MAY CONSIDER SCHEDULING IT TO RUN ONCE BETWEEN 12:00AM AND 5:00AM.
# ===============================================================================
#
# FEATURES:
# CHECKING CPU UTILIZATION.
# CHECKING FILESYSTEM UTILIZATION.
# CHECKING TABLESPACES UTILIZATION.
# CHECKING FLASH RECOVERY AREA UTILIZATION.
# CHECKING ASM DISKGROUPS UTILIZATION.
# CHECKING BLOCKING SESSIONS ON THE DATABASE.
# CHECKING UNUSABLE INDEXES ON THE DATABASE.
# CHECKING INVALID OBJECTS ON THE DATABASE.
# CHECKING FAILED LOGIN ATTEMPTS ON THE DATABASE.
# CHECKING AUDIT RECORDS ON THE DATABASE.
# CHECKING CORRUPTED BLOCKS ON THE DATABASE.
# CHECKING FAILED JOBS IN THE DATABASE.
# CHECKING ACTIVE INCIDENTS.
# CHECKING OUTSTANDING ALERTS.
# CHECKING DATABASE SIZE GROWTH.
# CHECKING OS / HARDWARE STATISTICS.
# CHECKING RESOURCE LIMITS.
# CHECKING RECYCLEBIN.
# CHECKING CURRENT RESTORE POINTS.
# CHECKING HEALTH MONITOR CHECKS RECOMMENDATIONS THAT RUN BY DBMS_HM PACKAGE.
# CHECKING MONITORED INDEXES.
# CHECKING REDOLOG SWITCHES.
# CHECKING MODIFIED INITIALIZATION PARAMETERS SINCE THE LAST DB STARTUP.
# CHECKING ADVISORS RECOMMENDATIONS:
# - SQL TUNING ADVISOR
# - SGA ADVISOR
# - PGA ADVISOR
# - BUFFER CACHE ADVISOR
# - SHARED POOL ADVISOR
# - SEGMENT ADVISOR
#
# # # #
# Author: KENNY # # # # ###
# # # # # #
#
# Created: 02-14-17 Based on dbalarm.sh script.
# Modifications: 18-05-14 Added Filesystem monitoring.
# 19-05-14 Added CPU monitoring.
# 09-12-14 Added Tablespaces monitoring
# Added BLOCKING SESSIONS monitoring
# Added UNUSABLE INDEXES monitoring
# Added INVALID OBJECTS monitoring
# Added FAILED LOGINS monitoring
# Added AUDIT RECORDS monitoring
# Added CORRUPTED BLOCKS monitoring
# [It will NOT run a SCAN. It will look at V$DATABASE_BLOCK_CORRUPTION]
# Added FAILED JOBS monitoring.
# 06-10-15 Replaced mpstat with iostat for CPU Utilization Check
# 02-11-15 Enhanced "FAILED JOBS monitoring" part.
# 13-12-15 Added Advisors Recommendations to the report
# 04-04-16 dba_tablespace_usage_metrics view will be used for 11g onwards versions
# for checking tablespaces size, advised by: Satyajit Mohapatra
# 10-04-16 Add Flash Recovery Area monitoring
# 10-04-16 Add ASM Disk Groups monitoring
# 15-07-16 Add ACTIVE INCIDENTS, RESOURCE LIMITS, RECYCLEBIN, RESTORE POINTS,
# MONITORED INDEXES, REDOLOG SWITCHES, MODIFIED SPFILE PARAMETERS checks.
# 02-01-17 Removed ALERTLOG check for DB & Listener +
# Merged alerts with advisors. [Recommended by: KEN]
# 03-01-17 Added checking RAC status feature. [Recommended by: OraDetector]
# 09-01-17 Added RMAN BACKUP CHECK.
#
#
#
#
#
# ###############################################################################################
SCRIPT_NAME="dbdailychk.sh"
SRV_NAME=`uname -n`
MAIL_LIST="youremail@yourcompany.com"
case ${MAIL_LIST} in "youremail@yourcompany.com")
echo
echo "##############################################################################################"
echo "You Missed Something :-)"
echo "In order to receive the HEALTH CHECK report via Email, you have to ADD your E-mail in line# 80"
echo "by replacing this template [youremail@yourcompany.com] with YOUR E-mail address."
echo "DB HEALTH CHECK result will be saved on disk..."
echo "##############################################################################################"
echo;;
esac
# #########################
# THRESHOLDS:
# #########################
# Send an E-mail for each THRESHOLD if been reached:
# ADJUST the following THRESHOLD VALUES as per your requirements:
FSTHRESHOLD=95 # THRESHOLD FOR FILESYSTEM %USED [OS]
CPUTHRESHOLD=95 # THRESHOLD FOR CPU %UTILIZATION [OS]
TBSTHRESHOLD=95 # THRESHOLD FOR TABLESPACE %USED [DB]
FRATHRESHOLD=95 # THRESHOLD FOR FLASH RECOVERY AREA %USED [DB]
ASMTHRESHOLD=95 # THRESHOLD FOR ASM DISK GROUPS [DB]
UNUSEINDXTHRESHOLD=1 # THRESHOLD FOR NUMBER OF UNUSABLE INDEXES [DB]
INVOBJECTTHRESHOLD=1 # THRESHOLD FOR NUMBER OF INVALID OBJECTS [DB]
FAILLOGINTHRESHOLD=1 # THRESHOLD FOR NUMBER OF FAILED LOGINS [DB]
AUDITRECOTHRESHOLD=1 # THRESHOLD FOR NUMBER OF AUDIT RECORDS [DB]
CORUPTBLKTHRESHOLD=1 # THRESHOLD FOR NUMBER OF CORRUPTED BLOCKS [DB]
FAILDJOBSTHRESHOLD=1 # THRESHOLD FOR NUMBER OF FAILED JOBS [DB]
# CHECK CLUSTERWARE HEALTH:
CLUSTER_CHECK=Y
# #######################################
# Excluded INSTANCES:
# #######################################
# Here you can mention the instances dbalarm will IGNORE and will NOT run against:
# Use pipe "|" as a separator between each instance name.
# e.g. Excluding: -MGMTDB, ASM instances:
EXL_DB="\-MGMTDB|ASM" #Excluding INSTANCES [Will not get reported offline].
# #########################
# Excluded ERRORS:
# #########################
# Here you can exclude the errors that you don't want to be alerted when they appear in the logs:
# Use pipe "|" between each error.
EXL_ALERT_ERR="ORA-2396|TNS-00507|TNS-12502|TNS-12560|TNS-12537|TNS-00505" #Excluded ALERTLOG ERRORS [Will not get reported].
EXL_LSNR_ERR="TNS-00507|TNS-12502|TNS-12560|TNS-12537|TNS-00505" #Excluded LISTENER ERRORS [Will not get reported].
# ################################
# Excluded FILESYSTEM/MOUNT POINTS:
# ################################
# Here you can exclude specific filesystems/mount points from being reported by dbalarm:
# e.g. Excluding: /dev/mapper, /dev/asm mount points:
EXL_FS="\/dev\/mapper\/|\/dev\/asm\/" #Excluded mount points [Will be skipped during the check].
# #########################
# Checking The FILESYSTEM:
# #########################
# Report Partitions that reach the threshold of Used Space:
FSLOG=/tmp/filesystem_DBA_BUNDLE.log
echo "Reported By Script: ${SCRIPT_NAME}" > ${FSLOG}
echo "" >> ${FSLOG}
df -h >> ${FSLOG}
df -h | grep -v "^Filesystem" |awk '{print substr($0, index($0, $2))}'| egrep -v "${EXL_FS}"|awk '{print $(NF-1)" "$NF}'| while read OUTPUT
do
PRCUSED=`echo ${OUTPUT}|awk '{print $1}'|cut -d'%' -f1`
FILESYS=`echo ${OUTPUT}|awk '{print $2}'`
if [ ${PRCUSED} -ge ${FSTHRESHOLD} ]
then
mail -s "ALARM: Filesystem [${FILESYS}] on Server [${SRV_NAME}] has reached ${PRCUSED}% of USED space" ${MAIL_LIST} < ${FSLOG}
fi
done
rm -f ${FSLOG}
# #############################
# Checking The CPU Utilization:
# #############################
# Report CPU Utilization if reach >= CPUTHRESHOLD:
OS_TYPE=`uname -s`
CPUUTLLOG=/tmp/CPULOG_DBA_BUNDLE.log
# Getting CPU utilization in last 5 seconds:
case `uname` in
Linux ) CPU_REPORT_SECTIONS=`iostat -c 1 5 | sed -e 's/,/./g' | tr -s ' ' ';' | sed '/^$/d' | tail -1 | grep ';' -o | wc -l`
CPU_COUNT=`cat /proc/cpuinfo|grep processor|wc -l`
if [ ${CPU_REPORT_SECTIONS} -ge 6 ]; then
CPU_IDLE=`iostat -c 1 5 | sed -e 's/,/./g' | tr -s ' ' ';' | sed '/^$/d' | tail -1| cut -d ";" -f 7`
else
CPU_IDLE=`iostat -c 1 5 | sed -e 's/,/./g' | tr -s ' ' ';' | sed '/^$/d' | tail -1| cut -d ";" -f 6`
fi
;;
AIX ) CPU_IDLE=`iostat -t $INTERVAL_SEC $NUM_REPORT | sed -e 's/,/./g'|tr -s ' ' ';' | tail -1 | cut -d ";" -f 6`
CPU_COUNT=`lsdev -C|grep Process|wc -l`
;;
SunOS ) CPU_IDLE=`iostat -c $INTERVAL_SEC $NUM_REPORT | tail -1 | awk '{ print $4 }'`
CPU_COUNT=`psrinfo -v|grep "Status of processor"|wc -l`
;;
HP-UX) SAR="/usr/bin/sar"
CPU_COUNT=`lsdev -C|grep Process|wc -l`
if [ ! -x $SAR ]; then
echo "sar command is not supported on your environment | CPU Check ignored"; CPU_IDLE=99
else
CPU_IDLE=`/usr/bin/sar 1 5 | grep Average | awk '{ print $5 }'`
fi
;;
*) echo "uname command is not supported on your environment | CPU Check ignored"; CPU_IDLE=99
;;
esac
# Getting Utilized CPU (100-%IDLE):
CPU_UTL_FLOAT=`echo "scale=2; 100-($CPU_IDLE)"|bc`
# Convert the average from float number to integer:
CPU_UTL=${CPU_UTL_FLOAT%.*}
if [ -z ${CPU_UTL} ]
then
CPU_UTL=1
fi
# Compare the current CPU utilization with the Threshold:
CPULOG=/tmp/top_processes_DBA_BUNDLE.log
if [ ${CPU_UTL} -ge ${CPUTHRESHOLD} ]
then
echo "CPU STATS:" > ${CPULOG}
echo "=========" >> ${CPULOG}
mpstat 1 5 >> ${CPULOG}
echo "" >> ${CPULOG}
echo "VMSTAT Output:" >> ${CPULOG}
echo "=============" >> ${CPULOG}
echo "[If the runqueue number in the (r) column exceeds the number of CPUs [${CPU_COUNT}] this indicates a CPU bottleneck on the system]." >> ${CPULOG}
echo "" >> ${CPULOG}
vmstat 2 5 >> ${CPULOG}
echo "" >> ${CPULOG}
echo "Top 10 Processes:" >> ${CPULOG}
echo "================" >> ${CPULOG}
echo "" >> ${CPULOG}
top -c -b -n 1|head -17 >> ${CPULOG}
#ps -eo pcpu,pid,user,args | sort -k 1 -r | head -11 >> ${CPULOG}
# Check ACTIVE SESSIONS on DB side:
for ORACLE_SID in $( ps -ef|grep pmon|grep -v grep|egrep -v ${EXL_DB}|awk '{print $NF}'|sed -e 's/ora_pmon_//g'|grep -v sed|grep -v "s///g" )
do
export ORACLE_SID
# Getting ORACLE_HOME:
# ###################
ORA_USER=`ps -ef|grep ${ORACLE_SID}|grep pmon|egrep -v ${EXL_DB}|awk '{print $1}'|tail -1`
USR_ORA_HOME=`grep ${ORA_USER} /etc/passwd| cut -f6 -d ':'|tail -1`
# SETTING ORATAB:
if [ -f /etc/oratab ]
then
ORATAB=/etc/oratab
export ORATAB
## If OS is Solaris:
elif [ -f /var/opt/oracle/oratab ]
then
ORATAB=/var/opt/oracle/oratab
export ORATAB
fi
# ATTEMPT1: Get ORACLE_HOME using pwdx command:
PMON_PID=`pgrep -lf _pmon_${ORACLE_SID}|awk '{print $1}'`
export PMON_PID
ORACLE_HOME=`pwdx ${PMON_PID}|awk '{print $NF}'|sed -e 's/\/dbs//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from PWDX is ${ORACLE_HOME}"
# ATTEMPT2: If ORACLE_HOME not found get it from oratab file:
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
## If OS is Linux:
if [ -f /etc/oratab ]
then
ORATAB=/etc/oratab
ORACLE_HOME=`grep -v '^\#' $ORATAB | grep -v '^$'| grep -i "^${ORACLE_SID}:" | perl -lpe'$_ = reverse' | cut -f3 | perl -lpe'$_ = reverse' |cut -f2 -d':'`
export ORACLE_HOME
## If OS is Solaris:
elif [ -f /var/opt/oracle/oratab ]
then
ORATAB=/var/opt/oracle/oratab
ORACLE_HOME=`grep -v '^\#' $ORATAB | grep -v '^$'| grep -i "^${ORACLE_SID}:" | perl -lpe'$_ = reverse' | cut -f3 | perl -lpe'$_ = reverse' |cut -f2 -d':'`
export ORACLE_HOME
fi
#echo "ORACLE_HOME from oratab is ${ORACLE_HOME}"
fi
# ATTEMPT3: If ORACLE_HOME is still not found, search for the environment variable: [Less accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`env|grep -i ORACLE_HOME|sed -e 's/ORACLE_HOME=//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from environment is ${ORACLE_HOME}"
fi
# ATTEMPT4: If ORACLE_HOME is not found in the environment search user's profile: [Less accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`grep -h 'ORACLE_HOME=\/' $USR_ORA_HOME/.bash_profile $USR_ORA_HOME/.*profile | perl -lpe'$_ = reverse' |cut -f1 -d'=' | perl -lpe'$_ = reverse'|tail -1`
export ORACLE_HOME
#echo "ORACLE_HOME from User Profile is ${ORACLE_HOME}"
fi
# ATTEMPT5: If ORACLE_HOME is still not found, search for orapipe: [Least accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`locate -i orapipe|head -1|sed -e 's/\/bin\/orapipe//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from orapipe search is ${ORACLE_HOME}"
fi
# Check Long Running Transactions if CPUDIGMORE=Y:
case ${CPUDIGMORE} in
y|Y|yes|YES|Yes)
${ORACLE_HOME}/bin/sqlplus -s '/ as sysdba' << EOF
set linesize 200
SPOOL ${CPULOG} APPEND
prompt
prompt ----------------------------------------------------------------
Prompt ACTIVE SESSIONS ON DATABASE $ORACLE_SID:
prompt ----------------------------------------------------------------
set feedback off linesize 200 pages 1000
col "OS_PID" for a8
col module for a30
col event for a27
col "USER|SID,SER# |MOD|MACHINE" for a60
col WAIT_STATE for a25
col "STATUS|WAIT_STATE|TIME_WAITED" for a31
col "CURR_SQLID" for a35
col "SQLID | FULL_SQL_TEXT" for a75
select p.spid "OS_PID",s.USERNAME||'|'||s.sid||','||s.serial#||' | '||substr(s.MODULE,1,27)||'|'||substr(s.MACHINE,1,20) "USER|SID,SER# |MOD|MACHINE",
substr(s.status||'|'||w.state||'|'||w.seconds_in_wait||'|'||LAST_CALL_ET||'|'||LOGON_TIME,1,50) "ST|WA_ST|WAITD|ACTIVE|LOGIN",
substr(s.status||'|'||w.state||'|'||w.seconds_in_wait||'sec',1,30) "STATUS|WAIT_STATE|TIME_WAITED",
--substr(w.event,1,30)"EVENT",s.SQL_ID ||' | '|| Q.SQL_FULLTEXT "SQLID | FULL_SQL_TEXT"
substr(w.event,1,30)"EVENT",s.SQL_ID
from v\$session s,v\$process p, v\$session_wait w, v\$SQL Q
where s.username is not null
and s.status='ACTIVE'
and p.addr = s.paddr
and s.sid=w.sid
and s.SQL_ID=Q.SQL_ID
order by s.USERNAME||' | '||s.sid||','||s.serial#,s.MODULE;
prompt
prompt ----------------------------------------------------------------
Prompt Long Running Operations On DATABASE $ORACLE_SID:
prompt ----------------------------------------------------------------
col "USER | SID,SERIAL#" for a40
col MESSAGE for a80
col "%COMPLETE" for 999.99
col "SID|SERIAL#" for a12
set linesize 200
select USERNAME||' | '||SID||','||SERIAL# "USER | SID,SERIAL#",SQL_ID,START_TIME,SOFAR/TOTALWORK*100 "%COMPLETE",
trunc(ELAPSED_SECONDS/60) MIN_ELAPSED, trunc(TIME_REMAINING/60) MIN_REMAINING,substr(MESSAGE,1,80)MESSAGE
from v\$session_longops where SOFAR/TOTALWORK*100 <>'100'
order by MIN_REMAINING;
SPOOL OFF
EOF
;;
esac
done
mail -s "ALERT: CPU Utilization on Server [${SRV_NAME}] has reached [${CPU_UTL}%]" ${MAIL_LIST} < ${CPULOG}
fi
rm -f ${CPUUTLLOG}
rm -f ${CPULOG}
# #########################
# Getting ORACLE_SID:
# #########################
# Exit with sending Alert mail if No DBs are running:
INS_COUNT=$( ps -ef|grep pmon|grep -v grep|egrep -v ${EXL_DB}|wc -l )
if [ $INS_COUNT -eq 0 ]
then
echo "Reported By Script: ${SCRIPT_NAME}:" > /tmp/oracle_processes_DBA_BUNDLE.log
echo " " >> /tmp/oracle_processes_DBA_BUNDLE.log
echo "Current running INSTANCES on server [${SRV_NAME}]:" >> /tmp/oracle_processes_DBA_BUNDLE.log
echo "***************************************************" >> /tmp/oracle_processes_DBA_BUNDLE.log
ps -ef|grep -v grep|grep pmon >> /tmp/oracle_processes_DBA_BUNDLE.log
echo " " >> /tmp/oracle_processes_DBA_BUNDLE.log
echo "Current running LISTENERS on server [${SRV_NAME}]:" >> /tmp/oracle_processes_DBA_BUNDLE.log
echo "***************************************************" >> /tmp/oracle_processes_DBA_BUNDLE.log
ps -ef|grep -v grep|grep tnslsnr >> /tmp/oracle_processes_DBA_BUNDLE.log
mail -s "ALARM: No Databases Are Running on Server: $SRV_NAME !!!" ${MAIL_LIST} < /tmp/oracle_processes_DBA_BUNDLE.log
rm -f /tmp/oracle_processes_DBA_BUNDLE.log
exit
fi
# #########################
# Setting ORACLE_SID:
# #########################
for ORACLE_SID in $( ps -ef|grep pmon|grep -v grep|egrep -v ${EXL_DB}|awk '{print $NF}'|sed -e 's/ora_pmon_//g'|grep -v sed|grep -v "s///g" )
do
export ORACLE_SID
# #########################
# Getting ORACLE_HOME
# #########################
ORA_USER=`ps -ef|grep ${ORACLE_SID}|grep pmon|grep -v grep|egrep -v ${EXL_DB}|awk '{print $1}'|tail -1`
USR_ORA_HOME=`grep ${ORA_USER} /etc/passwd| cut -f6 -d ':'|tail -1`
# SETTING ORATAB:
if [ -f /etc/oratab ]
then
ORATAB=/etc/oratab
export ORATAB
## If OS is Solaris:
elif [ -f /var/opt/oracle/oratab ]
then
ORATAB=/var/opt/oracle/oratab
export ORATAB
fi
# ATTEMPT1: Get ORACLE_HOME using pwdx command:
PMON_PID=`pgrep -lf _pmon_${ORACLE_SID}|awk '{print $1}'`
export PMON_PID
ORACLE_HOME=`pwdx ${PMON_PID}|awk '{print $NF}'|sed -e 's/\/dbs//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from PWDX is ${ORACLE_HOME}"
# ATTEMPT2: If ORACLE_HOME not found get it from oratab file:
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
## If OS is Linux:
if [ -f /etc/oratab ]
then
ORATAB=/etc/oratab
ORACLE_HOME=`grep -v '^\#' $ORATAB | grep -v '^$'| grep -i "^${ORACLE_SID}:" | perl -lpe'$_ = reverse' | cut -f3 | perl -lpe'$_ = reverse' |cut -f2 -d':'`
export ORACLE_HOME
## If OS is Solaris:
elif [ -f /var/opt/oracle/oratab ]
then
ORATAB=/var/opt/oracle/oratab
ORACLE_HOME=`grep -v '^\#' $ORATAB | grep -v '^$'| grep -i "^${ORACLE_SID}:" | perl -lpe'$_ = reverse' | cut -f3 | perl -lpe'$_ = reverse' |cut -f2 -d':'`
export ORACLE_HOME
fi
#echo "ORACLE_HOME from oratab is ${ORACLE_HOME}"
fi
# ATTEMPT3: If ORACLE_HOME is still not found, search for the environment variable: [Less accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`env|grep -i ORACLE_HOME|sed -e 's/ORACLE_HOME=//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from environment is ${ORACLE_HOME}"
fi
# ATTEMPT4: If ORACLE_HOME is not found in the environment search user's profile: [Less accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`grep -h 'ORACLE_HOME=\/' $USR_ORA_HOME/.bash_profile $USR_ORA_HOME/.*profile | perl -lpe'$_ = reverse' |cut -f1 -d'=' | perl -lpe'$_ = reverse'|tail -1`
export ORACLE_HOME
#echo "ORACLE_HOME from User Profile is ${ORACLE_HOME}"
fi
# ATTEMPT5: If ORACLE_HOME is still not found, search for orapipe: [Least accurate]
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
ORACLE_HOME=`locate -i orapipe|head -1|sed -e 's/\/bin\/orapipe//g'`
export ORACLE_HOME
#echo "ORACLE_HOME from orapipe search is ${ORACLE_HOME}"
fi
# TERMINATE: If all above attempts failed to get ORACLE_HOME location, EXIT the script:
if [ ! -f ${ORACLE_HOME}/bin/sqlplus ]
then
echo "Please export ORACLE_HOME variable in your .bash_profile file under oracle user home directory in order to get this script to run properly"
echo "e.g."
echo "export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1"
mail -s "dbalarm script on Server [${SRV_NAME}] failed to find ORACLE_HOME, Please export ORACLE_HOME variable in your .bash_profile file under oracle user home directory" ${MAIL_LIST} < /dev/null
exit
fi
# #########################
# Variables:
# #########################
export PATH=$PATH:${ORACLE_HOME}/bin
export LOG_DIR=${USR_ORA_HOME}/BUNDLE_Logs
mkdir -p ${LOG_DIR}
chown -R ${ORA_USER} ${LOG_DIR}
chmod -R go-rwx ${LOG_DIR}
if [ ! -d ${LOG_DIR} ]
then
mkdir -p /tmp/BUNDLE_Logs
export LOG_DIR=/tmp/BUNDLE_Logs
chown -R ${ORA_USER} ${LOG_DIR}
chmod -R go-rwx ${LOG_DIR}
fi
# ########################
# Getting ORACLE_BASE:
# ########################
# Get ORACLE_BASE from user's profile if it EMPTY:
if [ -z "${ORACLE_BASE}" ]
then
ORACLE_BASE=`grep -h 'ORACLE_BASE=\/' $USR_ORA_HOME/.bash* $USR_ORA_HOME/.*profile | perl -lpe'$_ = reverse' |cut -f1 -d'=' | perl -lpe'$_ = reverse'|tail -1`
fi
# #########################
# Getting DB_NAME:
# #########################
VAL1=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" <<EOF
set pages 0 feedback off;
prompt
SELECT name from v\$database
exit;
EOF
)
# Getting DB_NAME in Uppercase & Lowercase:
DB_NAME_UPPER=`echo $VAL1| perl -lpe'$_ = reverse' |awk '{print $1}'|perl -lpe'$_ = reverse'`
DB_NAME_LOWER=$( echo "$DB_NAME_UPPER" | tr -s '[:upper:]' '[:lower:]' )
export DB_NAME_UPPER
export DB_NAME_LOWER
# DB_NAME is Uppercase or Lowercase?:
if [ -d $ORACLE_HOME/diagnostics/${DB_NAME_LOWER} ]
then
DB_NAME=$DB_NAME_LOWER
else
DB_NAME=$DB_NAME_UPPER
fi
# #########################
# Getting DB_UNQ_NAME:
# #########################
VAL121=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" <<EOF
set pages 0 feedback off;
prompt
select value from v\$parameter where name='db_unique_name';
exit;
EOF
)
# Getting DB_NAME in Uppercase & Lowercase:
DB_UNQ_NAME=`echo $VAL121| perl -lpe'$_ = reverse' |awk '{print $1}'|perl -lpe'$_ = reverse'`
export DB_UNQ_NAME
# In case DB_UNQ_NAME variable is empty then use DB_NAME instead:
case ${DB_UNQ_NAME}
in '') DB_UNQ_NAME=${DB_NAME}; export DB_UNQ_NAME;;
esac
# ###################
# Checking DB Version:
# ###################
VAL311=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" <<EOF
set pages 0 feedback off;
prompt
select version from v\$instance;
exit;
EOF
)
DB_VER=`echo $VAL311|perl -lpe'$_ = reverse' |awk '{print $1}'|perl -lpe'$_ = reverse'|cut -f1 -d '.'`
# #####################
# Getting DB Block Size:
# #####################
VAL312=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" <<EOF
set pages 0 feedback off;
prompt
select value from v\$parameter where name='db_block_size';
exit;
EOF
)
blksize=`echo $VAL312|perl -lpe'$_ = reverse' |awk '{print $1}'|perl -lpe'$_ = reverse'|cut -f1 -d '.'`
# #####################
# Getting DB ROLE:
# #####################
VAL312=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" <<EOF
set pages 0 feedback off;
prompt
select DATABASE_ROLE from v\$database;
exit;
EOF
)
DB_ROLE=`echo $VAL312|perl -lpe'$_ = reverse' |awk '{print $1}'|perl -lpe'$_ = reverse'|cut -f1 -d '.'`
case ${DB_ROLE} in
PRIMARY) DB_ROLE_ID=0;;
*) DB_ROLE_ID=1;;
esac
# ############################################
# Checking FAILED JOBS ON THE DATABASE:
# ############################################
VAL40=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" << EOF
set pages 0 feedback off echo off;
--SELECT (SELECT COUNT(*) FROM dba_jobs where failures <> '0') + (SELECT COUNT(*) FROM dba_scheduler_jobs where FAILURE_COUNT <> '0') FAIL_COUNT FROM dual;
SELECT (SELECT COUNT(*) FROM dba_jobs where failures <> '0') + (SELECT COUNT(*) FROM DBA_SCHEDULER_JOB_RUN_DETAILS where LOG_DATE > sysdate-1 and STATUS<>'SUCCEEDED') FAIL_COUNT FROM dual;
exit;
EOF
)
VAL50=`echo $VAL40 | awk '{print $NF}'`
if [ ${VAL50} -ge ${FAILDJOBSTHRESHOLD} ]
then
VAL60=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" << EOF
set linesize 190 pages 100
spool ${LOG_DIR}/failed_jobs.log
PROMPT DBMS_JOBS:
PROMPT ^^^^^^^^^^
col LAST_RUN for a25
col NEXT_RUN for a25
set long 9999999
--select dbms_xmlgen.getxml('select job,schema_user,failures,LAST_DATE LAST_RUN,NEXT_DATE NEXT_RUN from dba_jobs where failures <> 0') xml from dual;
select job,schema_user,failures,to_char(LAST_DATE,'DD-Mon-YYYY hh24:mi:ss')LAST_RUN,to_char(NEXT_DATE,'DD-Mon-YYYY hh24:mi:ss')NEXT_RUN from dba_jobs where failures <> '0';
PROMPT
PROMPT DBMS_SCHEDULER:
PROMPT ^^^^^^^^^^^^^^^
col OWNER for a25
col JOB_NAME for a40
col STATE for a11
col STATUS for a11
col FAILURE_COUNT for 999 heading 'Fail'
col RUNTIME_IN_LAST24H for a25
col RUN_DURATION for a14
--HTML format Outputs:
--Set Markup Html On Entmap On Spool On Preformat Off
-- Get the whole failed runs in the last 24 hours:
select to_char(LOG_DATE,'DD-Mon-YYYY hh24:mi:ss')RUNTIME_IN_LAST24H,OWNER,JOB_NAME,STATUS,ERROR#,RUN_DURATION from DBA_SCHEDULER_JOB_RUN_DETAILS where LOG_DATE > sysdate-1 and STATUS<>'SUCCEEDED';
--XML Output
--select dbms_xmlgen.getxml('select to_char(LOG_DATE,''DD-Mon-YYYY hh24:mi:ss'')RUNTIME_IN_LAST24H,OWNER,JOB_NAME,STATUS,ERROR#,RUN_DURATION from DBA_SCHEDULER_JOB_RUN_DETAILS where LOG_DATE > sysdate-1 and STATUS<>''SUCCEEDED''') xml from dual;
spool off
exit;
EOF
)
mail -s "WARNING: FAILED JOBS detected on database [${DB_NAME_UPPER}] on Server [${SRV_NAME}]" ${MAIL_LIST} < ${LOG_DIR}/failed_jobs.log
rm -f ${LOG_DIR}/failed_jobs.log
fi
# ############################################
# LOGFILE SETTINGS:
# ############################################
# Logfile path variable:
DB_HEALTHCHK_RPT=${LOG_DIR}/${DB_NAME}_HEALTH_CHECK_REPORT.log
export DB_HEALTHCHK_RPT
# Flush the logfile:
echo "" > ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "REPORTED BY: ${SCRIPT_NAME}" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
# ############################################
# Checking RAC/ORACLE_RESTART Services:
# ############################################
case ${CLUSTER_CHECK} in
y|Y|yes|YES|Yes)
# Check for ocssd clusterware process:
CHECK_OCSSD=`ps -ef|grep 'ocssd.bin'|grep -v grep|wc -l`
CHECK_CRSD=`ps -ef|grep 'crsd.bin'|grep -v grep|wc -l`
if [ ${CHECK_CRSD} -gt 0 ]
then
CLS_STR=crs
export CLS_STR
CLUSTER_TYPE=CLUSTERWARE
export CLUSTER_TYPE
else
CLS_STR=has
export CLS_STR
CLUSTER_TYPE=ORACLE_RESTART
export CLUSTER_TYPE
fi
if [ ${CHECK_CRSD} -gt 0 ]
then
GRID_HOME=`ps -ef|grep 'ocssd.bin'|grep -v grep|awk '{print $NF}'|sed -e 's/\/bin\/ocssd.bin//g'|grep -v sed|grep -v "//g"`
export GRID_HOME
echo "^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "CLUSTERWARE CHECKS:" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "" >> ${DB_HEALTHCHK_RPT}
FILE_NAME=${GRID_HOME}/bin/ocrcheck
export FILE_NAME
if [ -f ${FILE_NAME} ]
then
echo "" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "OCR DISKS CHECKING:" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
${GRID_HOME}/bin/ocrcheck >> ${DB_HEALTHCHK_RPT}
echo "" >> ${DB_HEALTHCHK_RPT}
fi
FILE_NAME=${GRID_HOME}/bin/crsctl
export FILE_NAME
if [ -f ${FILE_NAME} ]
then
echo "" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "VOTE DISKS CHECKING:" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
${GRID_HOME}/bin/crsctl query css votedisk >> ${DB_HEALTHCHK_RPT}
echo "" >> ${DB_HEALTHCHK_RPT}
fi
fi
if [ ${CHECK_OCSSD} -gt 0 ]
then
GRID_HOME=`ps -ef|grep 'ocssd.bin'|grep -v grep|awk '{print $NF}'|sed -e 's/\/bin\/ocssd.bin//g'|grep -v sed|grep -v "//g"`
export GRID_HOME
FILE_NAME=${GRID_HOME}/bin/crsctl
export FILE_NAME
if [ -f ${FILE_NAME} ]
then
echo "" >> ${DB_HEALTHCHK_RPT}
echo "" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "${CLUSTER_TYPE} SERVICES:" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
AWK=/usr/bin/awk
$AWK \
'BEGIN {printf "%-55s %-24s %-18s\n", "HA Resource", "Target", "State";
printf "%-55s %-24s %-18s\n", "-----------", "------", "-----";}' >> ${DB_HEALTHCHK_RPT}
$GRID_HOME/bin/crsctl status resource | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$1'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-55s %-24s %-18s\n", appname, apptarget, appstate; state=0;}' >> ${DB_HEALTHCHK_RPT}
fi
FILE_NAME=${ORACLE_HOME}/bin/srvctl
export FILE_NAME
if [ -f ${FILE_NAME} ]
then
echo "" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
echo "DATABASE SERVICES STATUS:" >> ${DB_HEALTHCHK_RPT}
echo "^^^^^^^^^^^^^^^^^^^^^^^^" >> ${DB_HEALTHCHK_RPT}
${ORACLE_HOME}/bin/srvctl status service -d ${DB_UNQ_NAME} >> ${DB_HEALTHCHK_RPT}
echo "" >> ${DB_HEALTHCHK_RPT}
fi
fi
;;
esac
# ############################################
# Checking Advisors:
# ############################################
# If the database version is 10g onward collect the advisors recommendations:
if [ ${DB_VER} -gt 9 ]
then
VAL611=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" << EOF
set linesize 190 pages 100
spool ${DB_HEALTHCHK_RPT} app
PROMPT
PROMPT ^^^^^^^^^^^^^^^^
PROMPT Tablespaces Size: [Based on Datafiles MAXSIZE]
PROMPT ^^^^^^^^^^^^^^^^
set pages 200 linesize 200 tab off
col tablespace_name for A25
col Total_MB for 999999999999
col Used_MB for 999999999999
col '%Used' for 999.99
comp sum of Total_MB on report
comp sum of Used_MB on report
bre on report
select tablespace_name,
(tablespace_size*$blksize)/(1024*1024) Total_MB,
(used_space*$blksize)/(1024*1024) Used_MB,
used_percent "%Used"
from dba_tablespace_usage_metrics;
PROMPT ^^^^^^^^^^^^^^
PROMPT ASM STATISTICS:
PROMPT ^^^^^^^^^^^^^^
select name,state,OFFLINE_DISKS,total_mb,free_mb,ROUND((1-(free_mb / total_mb))*100, 2) "%FULL" from v\$asm_diskgroup;
PROMPT ^^^^^^^^^^^^^^
PROMPT FRA STATISTICS:
PROMPT ^^^^^^^^^^^^^^
PROMPT
PROMPT FRA_SIZE:
PROMPT ^^^^^^^^^
col name for a35
SELECT NAME,NUMBER_OF_FILES,SPACE_LIMIT/1024/1024/1024 AS TOTAL_SIZE_GB,SPACE_USED/1024/1024/1024 SPACE_USED_GB,
SPACE_RECLAIMABLE/1024/1024/1024 SPACE_RECLAIMABLE_GB,ROUND((SPACE_USED-SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1) AS "%FULL_AFTER_CLAIM",
ROUND((SPACE_USED)/SPACE_LIMIT * 100, 1) AS "%FULL_NOW" FROM V\$RECOVERY_FILE_DEST;
PROMPT FRA_COMPONENTS:
PROMPT ^^^^^^^^^^^^^^^^^
select * from v\$flash_recovery_area_usage;
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT DATABASE GROWTH: [In the Last ~8 days]
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
set serveroutput on
Declare
v_BaselineSize number(20);
v_CurrentSize number(20);
v_TotalGrowth number(20);
v_Space number(20);
cursor usageHist is
select a.snap_id,
SNAP_TIME,
sum(TOTAL_SPACE_ALLOCATED_DELTA) over ( order by a.SNAP_ID) ProgSum
from
(select SNAP_ID,
sum(SPACE_ALLOCATED_DELTA) TOTAL_SPACE_ALLOCATED_DELTA
from DBA_HIST_SEG_STAT
group by SNAP_ID
having sum(SPACE_ALLOCATED_TOTAL) <> 0
order by 1 ) a,
(select distinct SNAP_ID,
to_char(END_INTERVAL_TIME,'DD-Mon-YYYY HH24:Mi') SNAP_TIME
from DBA_HIST_SNAPSHOT) b
where a.snap_id=b.snap_id;
Begin
select sum(SPACE_ALLOCATED_DELTA) into v_TotalGrowth from DBA_HIST_SEG_STAT;
select sum(bytes) into v_CurrentSize from dba_segments;
v_BaselineSize := (v_CurrentSize - v_TotalGrowth) ;
dbms_output.put_line('SNAP_TIME Database Size(GB)');
for row in usageHist loop
v_Space := (v_BaselineSize + row.ProgSum)/(1024*1024*1024);
dbms_output.put_line(row.SNAP_TIME || ' ' || to_char(v_Space) );
end loop;
end;
/
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^
PROMPT Active Incidents:
PROMPT ^^^^^^^^^^^^^^^^^
set linesize 220
col RECENT_PROBLEMS_1_WEEK_BACK for a45
select PROBLEM_KEY RECENT_PROBLEMS_1_WEEK_BACK,to_char(FIRSTINC_TIME,'DD-MON-YY HH24:mi:ss') FIRST_OCCURENCE,to_char(LASTINC_TIME,'DD-MON-YY HH24:mi:ss')
LAST_OCCURENCE FROM V\$DIAG_PROBLEM WHERE LASTINC_TIME > SYSDATE -10;
PROMPT
PROMPT OUTSTANDING ALERTS:
PROMPT ^^^^^^^^^^^^^^^^^^^
select * from DBA_OUTSTANDING_ALERTS;
PROMPT
PROMPT CORRUPTED BLOCKS:
PROMPT ^^^^^^^^^^^^^^^^^
select * from V\$DATABASE_BLOCK_CORRUPTION;
PROMPT
PROMPT BLOCKING SESSIONS:
PROMPT ^^^^^^^^^^^^^^^^^^
set linesize 200 pages 0 echo on feedback on
col BLOCKING_STATUS for a90
select 'User: '||s1.username || '@' || s1.machine || '(SID=' || s1.sid ||' ) running SQL_ID:'||s1.sql_id||' is blocking
User: '|| s2.username || '@' || s2.machine || '(SID=' || s2.sid || ') running SQL_ID:'||s2.sql_id||' For '||s2.SECONDS_IN_WAIT||' sec
----------------------------------------------------------------
Warn user '||s1.username||' Or use the following statement to kill his session:
----------------------------------------------------------------
ALTER SYSTEM KILL SESSION '''||s1.sid||','||s1.serial#||''' immediate;' AS blocking_status
from gv\$LOCK l1, gv\$SESSION s1, gv\$LOCK l2, gv\$SESSION s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2
order by s2.SECONDS_IN_WAIT desc;
PROMPT
PROMPT UN-USABLE INDEXES:
PROMPT ^^^^^^^^^^^^^^^^^^
PROMPT
set echo on feedback on pages 100
select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD ONLINE;' from dba_indexes where status='UNUSABLE';
PROMPT
PROMPT INVALID OBJECTS:
PROMPT ^^^^^^^^^^^^^^^^
PROMPT
set pages 0
select 'alter package '||owner||'.'||object_name||' compile;' from dba_objects where status <> 'VALID' and object_type like '%PACKAGE%' union
select 'alter type '||owner||'.'||object_name||' compile specification;' from dba_objects where status <> 'VALID' and object_type like '%TYPE%' union
select 'alter '||object_type||' '||owner||'.'||object_name||' compile;' from dba_objects where status <> 'VALID' and object_type not in ('PACKAGE','PACKAGE BODY','SYNONYM','TYPE','TYPE BODY') union
select 'alter public synonym '||object_name||' compile;' from dba_objects where status <> 'VALID' and object_type ='SYNONYM';
set pages 100
PROMPT
PROMPT FAILED LOGIN ATTEMPTS: [Last 24H]
PROMPT ^^^^^^^^^^^^^^^^^^^^^
PROMPT
col OS_USERNAME for a20
col USERNAME for a25
col TERMINAL for a30
col ACTION_NAME for a20
col TIMESTAMP for a21
col USERHOST for a40
select /*+ parallel 2 */ to_char (EXTENDED_TIMESTAMP,'DD-MON-YYYY HH24:MI:SS') TIMESTAMP,OS_USERNAME,USERNAME,TERMINAL,USERHOST,ACTION_NAME
from DBA_AUDIT_SESSION
where returncode = 1017
and timestamp > (sysdate -1)
order by 1;
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^
PROMPT RMAN BACKUP OPERATIONS: [LAST 24H]
PROMPT ^^^^^^^^^^^^^^^^^^^^^^
col START_TIME for a15
col END_TIME for a15
col TIME_TAKEN_DISPLAY for a10
col INPUT_BYTES_DISPLAY heading "DATA SIZE" for a10
col OUTPUT_BYTES_DISPLAY heading "Backup Size" for a11
col OUTPUT_BYTES_PER_SEC_DISPLAY heading "Speed/s" for a10
col output_device_type heading "Device_TYPE" for a11
SELECT to_char (start_time,'DD-MON-YY HH24:MI') START_TIME, to_char(end_time,'DD-MON-YY HH24:MI') END_TIME, time_taken_display, status,
input_type, output_device_type,input_bytes_display, output_bytes_display, output_bytes_per_sec_display ,COMPRESSION_RATIO
FROM v\$rman_backup_job_details
WHERE end_time > sysdate -1;
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^
PROMPT SCHEDULED JOBS STATUS:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^
PROMPT
PROMPT DBMS_JOBS:
PROMPT ^^^^^^^^^^
set linesize 200
col LAST_RUN for a25
col NEXT_RUN for a25
select job,schema_user,failures,to_char(LAST_DATE,'DD-Mon-YYYY hh24:mi:ss')LAST_RUN,to_char(NEXT_DATE,'DD-Mon-YYYY hh24:mi:ss')NEXT_RUN from dba_jobs;
PROMPT
PROMPT DBMS_SCHEDULER:
PROMPT ^^^^^^^^^^^^^^^^
col OWNER for a15
col STATE for a15
col FAILURE_COUNT for 9999 heading 'Fail'
col "DURATION(d:hh:mm:ss)" for a22
col REPEAT_INTERVAL for a70
col "LAST_RUN || REPEAT_INTERVAL" for a65
col "DURATION(d:hh:mm:ss)" for a12
--col LAST_START_DATE for a40
select JOB_NAME,OWNER,ENABLED,STATE,FAILURE_COUNT,to_char(LAST_START_DATE,'DD-Mon-YYYY hh24:mi:ss')||' || '||REPEAT_INTERVAL "LAST_RUN || REPEAT_INTERVAL",
extract(day from last_run_duration) ||':'||
lpad(extract(hour from last_run_duration),2,'0')||':'||
lpad(extract(minute from last_run_duration),2,'0')||':'||
lpad(round(extract(second from last_run_duration)),2,'0') "DURATION(d:hh:mm:ss)"
from dba_scheduler_jobs order by ENABLED,STATE;
PROMPT
PROMPT AUTOTASK INTERNAL MAINTENANCE WINDOWS:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
col WINDOW_NAME for a17
col NEXT_RUN for a20
col ACTIVE for a6
col OPTIMIZER_STATS for a15
col SEGMENT_ADVISOR for a15
col SQL_TUNE_ADVISOR for a16
col HEALTH_MONITOR for a15
SELECT WINDOW_NAME,TO_CHAR(WINDOW_NEXT_TIME,'DD-MM-YYYY HH24:MI:SS') NEXT_RUN,AUTOTASK_STATUS STATUS,WINDOW_ACTIVE ACTIVE,OPTIMIZER_STATS,SEGMENT_ADVISOR,SQL_TUNE_ADVISOR,HEALTH_MONITOR FROM DBA_AUTOTASK_WINDOW_CLIENTS;
PROMPT
PROMPT FAILED DBMS_SCHEDULER JOBS IN THE LAST 24H:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
col LOG_DATE for a36
col OWNER for a15
col JOB_NAME for a35
col STATUS for a11
col RUN_DURATION for a20
col ID for 99
select INSTANCE_ID ID,JOB_NAME,OWNER,LOG_DATE,STATUS,ERROR#,RUN_DURATION from DBA_SCHEDULER_JOB_RUN_DETAILS where LOG_DATE > sysdate-1 and STATUS='FAILED' order by JOB_NAME,LOG_DATE;
PROMPT ^^^^^^^^^^^^^^^^
PROMPT ADVISORS STATUS:
PROMPT ^^^^^^^^^^^^^^^^
col CLIENT_NAME for a60
col window_group for a60
col STATUS for a15
SELECT client_name, status, consumer_group, window_group FROM dba_autotask_client ORDER BY client_name;
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT SQL TUNING ADVISOR:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT
PROMPT Last Execution of SQL TUNING ADVISOR:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
col TASK_NAME for a60
set long 2000000000
SELECT task_name, status, TO_CHAR(execution_end,'DD-MON-YY HH24:MI') Last_Execution FROM dba_advisor_executions where TASK_NAME='SYS_AUTO_SQL_TUNING_TASK' and execution_end>sysdate-1;
variable Findings_Report CLOB;
BEGIN
:Findings_Report :=DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK(
begin_exec => NULL,
end_exec => NULL,
type => 'TEXT',
level => 'TYPICAL',
section => 'ALL',
object_id => NULL,
result_limit => NULL);
END;
/
print :Findings_Report
PROMPT
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT MEMORY ADVISORS:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT
PROMPT SGA ADVISOR:
PROMPT ^^^^^^^^^^^^
col ESTD_DB_TIME for 99999999999999999
col ESTD_DB_TIME_FACTOR for 9999999999999999999999999999
select * from V\$SGA_TARGET_ADVICE where SGA_SIZE_FACTOR > .6 and SGA_SIZE_FACTOR < 1.6;
PROMPT
PROMPT Buffer Cache ADVISOR:
PROMPT ^^^^^^^^^^^^^^^^^^^^^
col ESTD_SIZE_MB for 9999999999999
col ESTD_PHYSICAL_READS for 99999999999999999999
col ESTD_PHYSICAL_READ_TIME for 99999999999999999999
select SIZE_FACTOR "%SIZE",SIZE_FOR_ESTIMATE ESTD_SIZE_MB,ESTD_PHYSICAL_READS,ESTD_PHYSICAL_READ_TIME,ESTD_PCT_OF_DB_TIME_FOR_READS
from V\$DB_CACHE_ADVICE where SIZE_FACTOR >.8 and SIZE_FACTOR<1.3;
PROMPT
PROMPT Shared Pool ADVISOR:
PROMPT ^^^^^^^^^^^^^^^^^^^^^
col SIZE_MB for 99999999999
col SIZE_FACTOR for 99999999
col ESTD_SIZE_MB for 99999999999999999999
col LIB_CACHE_SAVED_TIME for 99999999999999999999999999
select SHARED_POOL_SIZE_FOR_ESTIMATE SIZE_MB,SHARED_POOL_SIZE_FACTOR "%SIZE",SHARED_POOL_SIZE_FOR_ESTIMATE/1024/1024 ESTD_SIZE_MB,ESTD_LC_TIME_SAVED LIB_CACHE_SAVED_TIME,
ESTD_LC_LOAD_TIME PARSING_TIME from V\$SHARED_POOL_ADVICE
where SHARED_POOL_SIZE_FACTOR > .9 and SHARED_POOL_SIZE_FACTOR < 1.6;
PROMPT
PROMPT PGA ADVISOR:
PROMPT ^^^^^^^^^^^^
col SIZE_FACTOR for 999999999
col ESTD_SIZE_MB for 99999999999999999999
col MB_PROCESSED for 99999999999999999999
col ESTD_TIME for 99999999999999999999
select PGA_TARGET_FACTOR "%SIZE",PGA_TARGET_FOR_ESTIMATE/1024/1024 ESTD_SIZE_MB,BYTES_PROCESSED/1024/1024 MB_PROCESSED,
ESTD_TIME,ESTD_PGA_CACHE_HIT_PERCENTAGE PGA_HIT,ESTD_OVERALLOC_COUNT PGA_SHORTAGE
from V\$PGA_TARGET_ADVICE where PGA_TARGET_FACTOR > .7 and PGA_TARGET_FACTOR < 1.6;
PROMPT
PROMPT SEGMENT ADVISOR:
PROMPT ^^^^^^^^^^^^^^^^
select'Task Name : ' || f.task_name || chr(10) ||
'Start Run Time : ' || TO_CHAR(execution_start, 'dd-mon-yy hh24:mi') || chr (10) ||
'Segment Name : ' || o.attr2 || chr(10) ||
'Segment Type : ' || o.type || chr(10) ||
'Partition Name : ' || o.attr3 || chr(10) ||
'Message : ' || f.message || chr(10) ||
'More Info : ' || f.more_info || chr(10) ||
'-------------------------------------------' Advice
FROM dba_advisor_findings f
,dba_advisor_objects o
,dba_advisor_executions e
WHERE o.task_id = f.task_id
AND o.object_id = f.object_id
AND f.task_id = e.task_id
AND e. execution_start > sysdate - 1
AND e.advisor_name = 'Segment Advisor'
ORDER BY f.task_name;
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT CURRENT OS / HARDWARE STATISTICS:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
select stat_name,value from v\$osstat;
PROMPT
PROMPT ^^^^^^^^^^^^^^^
PROMPT RESOURCE LIMIT:
PROMPT ^^^^^^^^^^^^^^^
col INITIAL_ALLOCATION for a20
col LIMIT_VALUE for a20
select * from gv\$resource_limit order by RESOURCE_NAME;
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^
PROMPT RECYCLEBIN OBJECTS#:
PROMPT ^^^^^^^^^^^^^^^^^^^^
set feedback off
select count(*) "RECYCLED_OBJECTS#",sum(space)*$blksize/1024/1024 "TOTAL_SIZE_MB" from dba_recyclebin group by 1;
set feedback on
PROMPT
PROMPT [Note: Consider Purging DBA_RECYCLEBIN for better performance]
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT FLASHBACK RESTORE POINTS:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^
select * from V\$RESTORE_POINT;
PROMPT
PROMPT ^^^^^^^^^^^^^^^
PROMPT HEALTH MONITOR:
PROMPT ^^^^^^^^^^^^^^^
select name,type,status,description,repair_script from V\$HM_RECOMMENDATION where time_detected > sysdate -1;
PROMPT ^^^^^^^^^^^^^^^^^^
PROMPT Monitored INDEXES:
PROMPT ^^^^^^^^^^^^^^^^^^
set linesize 180 pages 200
col Index_NAME for a40
col TABLE_NAME for a40
select io.name Index_NAME, t.name TABLE_NAME,decode(bitand(i.flags, 65536),0,'NO','YES') Monitoring,
decode(bitand(ou.flags, 1),0,'NO','YES') USED,ou.start_monitoring,ou.end_monitoring
from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#;
--PROMPT
--PROMPT To stop monitoring USED indexes use this command:
--prompt select 'ALTER INDEX RA.'||io.name||' NOMONITORING USAGE;' from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
--prompt and decode(bitand(i.flags, 65536),0,'NO','YES')='YES' and decode(bitand(ou.flags, 1),0,'NO','YES')='YES' order by 1
--prompt /
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^
PROMPT REDO LOG SWITCHES:
PROMPT ^^^^^^^^^^^^^^^^^^
set linesize 199
col day for a11
SELECT to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23"
from v\$log_history where first_time > sysdate-1
GROUP by to_char(first_time,'YYYY-MON-DD') order by 1 asc;
PROMPT
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROMPT Modified Parameters Since Instance Startup:
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
col name for a45
col VALUE for a120
col DEPRECATED for a10
select NAME,VALUE,ISDEFAULT "DEFAULT",ISDEPRECATED "DEPRECATED" from v\$parameter where ISMODIFIED = 'SYSTEM_MOD' order by 1;
PROMPT
PROMPT ^^^^^^^^^^^^
PROMPT Cred Backup:
PROMPT ^^^^^^^^^^^^
col name for a35
col "CREATE_DATE||PASS_LAST_CHANGE" for a60
select name,PASSWORD HASH,CTIME ||' || '||PTIME "CREATE_DATE||PASS_LAST_CHANGE" from user\$ where PASSWORD is not null order by 1;
spool off
exit;
EOF
)
fi
# ###############################################
# Checking AUDIT RECORDS ON THE DATABASE:
# ###############################################
VAL70=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" << EOF
set pages 0 feedback off echo off;
SELECT (SELECT COUNT(*) FROM dba_audit_trail
where ACTION_NAME not like 'LOGO%' and ACTION_NAME not in ('SELECT','SET ROLE') and timestamp > SYSDATE-1)
+
(SELECT COUNT(*) FROM dba_fga_audit_trail WHERE timestamp > SYSDATE-1) AUD_REC_COUNT FROM dual;
exit;
EOF
)
VAL80=`echo $VAL70 | awk '{print $NF}'`
if [ ${VAL80} -ge ${AUDITRECOTHRESHOLD} ]
then
VAL90=$(${ORACLE_HOME}/bin/sqlplus -S "/ as sysdba" << EOF
set linesize 190 pages 100
spool ${LOG_DIR}/audit_records.log
col EXTENDED_TIMESTAMP for a36
col OWNER for a25
col OBJ_NAME for a25
col OS_USERNAME for a20
col USERNAME for a25
col USERHOST for a21
col ACTION_NAME for a25
col ACTION_OWNER_OBJECT for a55
prompt
prompt
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt Audit records in the last 24Hours AUD$...
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt
select extended_timestamp,OS_USERNAME,USERNAME,USERHOST,ACTION_NAME||' '||OWNER||' . '||OBJ_NAME ACTION_OWNER_OBJECT
from dba_audit_trail
where
ACTION_NAME not like 'LOGO%'
and ACTION_NAME not in ('SELECT','SET ROLE')
-- and USERNAME not in ('CRS_ADMIN','DBSNMP')
-- and OS_USERNAME not in ('workflow')
-- and OBJ_NAME not like '%TMP_%'
-- and OBJ_NAME not like 'WRKDETA%'
-- and OBJ_NAME not in ('PBCATTBL','SETUP','WRKIB','REMWORK')
and timestamp > SYSDATE-1 order by EXTENDED_TIMESTAMP;
prompt
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt Fine Grained Auditing Data ...
PROMPT ^^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt
col sql_text for a70
col time for a36
col USERHOST for a21
col db_user for a15
select to_char(timestamp,'DD-MM-YYYY HH24:MI:SS') as time,db_user,userhost,sql_text,SQL_BIND
from dba_fga_audit_trail
where
timestamp > SYSDATE-1
-- and policy_name='PAYROLL_TABLE'
order by EXTENDED_TIMESTAMP;
spool off
exit;
EOF
)
cat ${LOG_DIR}/audit_records.log >> ${DB_HEALTHCHK_RPT}
fi
mail -s "HEALTH CHECK REPORT: For Database [${DB_NAME_UPPER}] on Server: [${SRV_NAME}]" ${MAIL_LIST} < ${DB_HEALTHCHK_RPT}
echo "HEALTH CHECK REPORT FOR DATABASE [${DB_NAME_UPPER}] WAS SAVED TO: ${DB_HEALTHCHK_RPT}"
done
echo ""
# #############
# END OF SCRIPT
# #############
# REPORT BUGS to: mahmmoudadel@hotmail.com
# DOWNLOAD THE LATEST VERSION OF DATABASE ADMINISTRATION BUNDLE FROM:
# http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html
# DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".
########################################################################################################################################################
#########################################################################################################################################################
Reference Material
Linux Commands and Shell Scripting
Tutorial 1 http://bit.ly/17yLhx2 (OR) http://www.doc.ic.ac.uk/~wjk/UnixIntro/
Tutorial 2 http://bit.ly/11tSgUr (OR) http://www.ee.surrey.ac.uk/Teaching/Unix/
Crash Course for Commands
http://bit.ly/11OlYGo (Shortened URL) for the actual web page below.
http://www.oracle.com/technetwork/articles/linux/calish-file-commands-085228.html
An Introduction to Linux Shell Scripting for DBAs
http://bit.ly/17MK2NJ (Shortened link) for the actual web page below.
http://www.oracle.com/technetwork/articles/linux/saternos-scripting-088882.html
AWK: A powerful command/tool in Linux ( Very well explained)
http://bit.ly/118KjVS (Shortened link) for the actual web page below.
http://www.oracle.com/technetwork/articles/dulaney-awk-095922.html
SED: Edit files and content from the command prompt (Very well explained)
http://bit.ly/15t8kgp (Shortened link) for the actual web page below.
http://www.oracle.com/technetwork/articles/dulaney-sed-098420.html
AmazionTech’s Blog
http://blog.Amazion.org
Oracle DBA self-study Reading Material
http://bit.ly/ZyCWca (Shortened link) for the below actual web page
http://www.oracle.com/pls/db111/portal.all_books => Go to the top left-hand corner => "MASTER Book List" (the books can be downloaded as PDFs)
RMAN CATALOG
The recovery catalog is an RMAN schema that stores the metadata (backup records, scripts, etc.) for all backups taken with the RMAN tool.
1. Create a database via dbca called rmancat
3. RMAN can be used either with or without a recovery catalog. A recovery catalog is a schema stored in a database that tracks backups and stores scripts for use in RMAN backup and recovery situations. Generally, an experienced DBA would suggest that the Enterprise Manager schema and the RMAN catalog schema be placed in the same utility database on a server separate from the main servers. The RMAN catalog schema generally requires only about 15 megabytes per year per database backed up.
4. The RMAN schema owner is created in the catalog database using the following steps:
5. Start SQL*Plus and connect as a user with administrator privileges to the database containing the recovery catalog. For example, enter:
6. CONNECT SYS/oracle@catdb AS SYSDBA
7. Create a user and schema for the recovery catalog. For example, enter:
8. CREATE USER rman IDENTIFIED BY cat
   TEMPORARY TABLESPACE temp
   DEFAULT TABLESPACE tools
   QUOTA UNLIMITED ON tools;
9. Grant the recovery_catalog_owner role to the user. This role provides all of the privileges required to maintain and query the recovery catalog:
10. SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;
11. Once the owner user is created, the RMAN recovery catalog schema can be added:
12. Connect to the database that contains the catalog owner. For example, using the RMAN user from the above example, enter the following from the operating system command line. The use of the CATALOG keyword tells Oracle this database contains the repository:
13. % rman CATALOG rman/cat@catdb
14. It is also possible to connect from the RMAN utility prompt:
15. % rman
16. RMAN> CONNECT CATALOG rman/cat@catdb
17. Now, the CREATE CATALOG command can be run to create the catalog. The creation of the catalog may take several minutes. If the catalog tablespace is this user's default tablespace, the command would look like the following:
18. CREATE CATALOG;
19. While the RMAN catalog can be created and used from either a 9i or 10g database, the Enterprise Manager Grid Control database must be a 9i database. This is true at least for release 1, although this may change with future releases.
20. Each database that the catalog will track must be registered.
21. Registering a Database with RMAN
22. The following process can be used to register a database with RMAN:
23. Make sure the recovery catalog database is open.
24. Connect RMAN to both the target database and the recovery catalog database. For example, with a catalog database of RMANDB, user RMAN as the owner of the catalog schema, and the target database AULT1 (the database to be backed up), database user SYS would issue:
25. % rman TARGET sys/oracle@ault1 CATALOG rman/cat@rmandb
26. Once connected, if the target database is not mounted, it should be opened or mounted:
27. RMAN> STARTUP;
28. --or--
29. RMAN> STARTUP MOUNT;
30. If this target database has not been registered, it should be registered in the connected recovery catalog:
31. RMAN> REGISTER DATABASE;
32. The database can now be operated on using the RMAN utility.
33. Example RMAN Operations
34. The following is an example of the command line connection to a RAC environment, assuming the RAC instances are AULT1 and AULT2:
35. $ rman TARGET SYS/kr87m@ault2 CATALOG rman/cat@rmandb
36. The connection string, in this case AULT2, can only apply to a single instance, so the entry in the tnsnames.ora for the AULT2 connection would be:
37. ault2 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = OFF)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = aultlinux2)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = ault)
          (INSTANCE_NAME = ault2)
        )
      )
38. If the instances use archive logs, RAC requires that a channel connection be specified for each instance that will resolve to only one instance. For example, using the AULT1 and AULT2 instances from the previous example:
39. CONFIGURE DEFAULT DEVICE TYPE TO sbt;
    CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
    CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT = 'SYS/kr87m@ault1';
    CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT = 'SYS/kr87m@ault2';
40. This configuration only has to be specified once for a RAC environment. It should be changed only if nodes are added or removed from the RAC configuration; for this reason, it is known as a persistent configuration, and it need never be changed for the life of the RAC system. It does require that all of the specified instances be in the same state: either all open (database operational) or all closed (database shut down). If one specified instance is not in the same state as the others, the backup will fail.
41. RMAN is also aware of the node affinity of the various database files. The node with the greatest access will be used to backup those datafiles that the instance has greatest affinity for. Node affinity can, however, be overridden with manual commands, as follows:
42. BACKUP
      # Channel 1 gets datafiles 1,2,3
      (DATAFILE 1,2,3 CHANNEL ORA_SBT_TAPE_1)
      # Channel 2 gets datafiles 4,5,6,7
      (DATAFILE 4,5,6,7 CHANNEL ORA_SBT_TAPE_2);
43. The nodes chosen to backup an Oracle RAC cluster must have the ability to see all of the files that require backup. For example:
44. BACKUP DATABASE PLUS ARCHIVELOG;
45. The specified nodes must have access to all archive logs generated by all instances. This could entail some special considerations when configuring the Oracle RAC environment.
46. The essential steps for using RMAN in Oracle RAC are:
47. * Configure the snapshot control file location.
48. * Configure the control file autobackup feature.
49. * Configure the archiving scheme.
50. * Change the archive mode of the database, although this is optional.
51. * Monitor the archiver process.
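A minimal sketch of the first two of these configuration steps is shown below. These are one-time, persistent RMAN settings; the '+FRA' disk group name and the ault file name are assumptions carried over from the earlier example, so adjust them to your own environment:
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf_ault.f';   # shared location visible to all RAC nodes (assumed disk group)
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+FRA/%F';
RMAN> SHOW ALL;                                                      # verify the persistent configuration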
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ORACLE BACKGROUND PROCESSES
For single-instance databases, there are 5 mandatory background processes that must run each time an Oracle database is started: DBWn, LGWR, CKPT, SMON and PMON (RECO is also started by default to resolve in-doubt distributed transactions). All other processes are optional; an optional process is invoked only if its particular feature is activated.
To find out background processes from database:
SQL> set linesize 250 pagesize 2000
SQL> select sid,program from v$session where type='BACKGROUND';
SID PROGRAM
---------- ------------------------------------------------
2 oracle@www.kida1.com (PMON)
3 oracle@www.kida1.com (PSP0)
4 oracle@www.kida1.com (VKTM)
5 oracle@www.kida1.com (GEN0)
6 oracle@www.kida1.com (DIAG)
7 oracle@www.kida1.com (DBRM)
8 oracle@www.kida1.com (DIA0)
9 oracle@www.kida1.com (MMAN)
10 oracle@www.kida1.com (DBW0)
11 oracle@www.kida1.com (LGWR)
12 oracle@www.kida1.com (CKPT)
13 oracle@www.kida1.com (SMON)
14 oracle@www.kida1.com (RECO)
15 oracle@www.kida1.com (MMON)
16 oracle@www.kida1.com (MMNL)
18 oracle@www.kida1.com (ARC0)
20 oracle@www.kida1.com (ARC1)
21 oracle@www.kida1.com (ARC2)
22 oracle@www.kida1.com (ARC3)
25 oracle@www.kida1.com (CTWR)
27 oracle@www.kida1.com (QMNC)
28 oracle@www.kida1.com (W002)
30 oracle@www.kida1.com (VKRM)
32 oracle@www.kida1.com (CJQ0)
35 oracle@www.kida1.com (W001)
39 oracle@www.kida1.com (SMCO)
41 oracle@www.kida1.com (W000)
44 oracle@www.kida1.com (Q000)
45 oracle@www.kida1.com (Q001)
29 rows selected.
To find the Oracle background processes running at the O/S level: ps -ef | grep ora (or grep the instance name, as in the example below)
SQL> host
oracle@www.kida1.com:/u01/app/oracle/scripts$ps -ef|grep amadb
oracle 24380 1 0 22:07 ? 00:00:00 ora_pmon_amadb
oracle 24382 1 0 22:07 ? 00:00:00 ora_psp0_amadb
oracle 24384 1 1 22:07 ? 00:00:21 ora_vktm_amadb
oracle 24389 1 0 22:07 ? 00:00:00 ora_gen0_amadb
oracle 24391 1 0 22:07 ? 00:00:00 ora_diag_amadb
oracle 24393 1 0 22:07 ? 00:00:00 ora_dbrm_amadb
oracle 24395 1 0 22:07 ? 00:00:00 ora_dia0_amadb
oracle 24397 1 0 22:07 ? 00:00:00 ora_mman_amadb
oracle 24399 1 0 22:07 ? 00:00:00 ora_dbw0_amadb
oracle 24401 1 0 22:07 ? 00:00:00 ora_lgwr_amadb
oracle 24403 1 0 22:07 ? 00:00:00 ora_ckpt_amadb
oracle 24405 1 0 22:07 ? 00:00:00 ora_smon_amadb
oracle 24407 1 0 22:07 ? 00:00:00 ora_reco_amadb
oracle 24409 1 0 22:07 ? 00:00:00 ora_mmon_amadb
oracle 24411 1 0 22:07 ? 00:00:00 ora_mmnl_amadb
oracle 24413 1 0 22:07 ? 00:00:00 ora_d000_amadb
oracle 24415 1 0 22:07 ? 00:00:00 ora_s000_amadb
oracle 24426 1 0 22:07 ? 00:00:00 ora_arc0_amadb
oracle 24429 1 0 22:07 ? 00:00:00 ora_arc1_amadb
oracle 24431 1 0 22:07 ? 00:00:00 ora_arc2_amadb
oracle 24433 1 0 22:07 ? 00:00:00 ora_arc3_amadb
oracle 24436 1 0 22:07 ? 00:00:00 ora_ctwr_amadb
oracle 24438 1 0 22:07 ? 00:00:00 ora_qmnc_amadb
oracle 24452 1 0 22:07 ? 00:00:00 ora_cjq0_amadb
oracle 24457 1 0 22:07 ? 00:00:02 ora_vkrm_amadb
oracle 24481 1 0 22:07 ? 00:00:00 ora_smco_amadb
oracle 24492 1 0 22:08 ? 00:00:00 ora_q000_amadb
oracle 24494 1 0 22:08 ? 00:00:00 ora_q001_amadb
oracle 24831 24830 0 22:12 ? 00:00:00 oracleamadb (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 24848 1 0 22:12 ? 00:00:00 ora_w001_amadb
oracle 25903 25831 0 22:26 pts/4 00:00:00 grep --color=auto amadb
SQL> desc v$process
Name Null? Type
----------------------------------------- -------- ----------------------------
ADDR RAW(8)
PID NUMBER
SPID VARCHAR2(24)
PNAME VARCHAR2(5)
USERNAME VARCHAR2(15)
SERIAL# NUMBER
TERMINAL VARCHAR2(30)
PROGRAM VARCHAR2(48)
TRACEID VARCHAR2(255)
TRACEFILE VARCHAR2(513)
BACKGROUND VARCHAR2(1)
LATCHWAIT VARCHAR2(16)
LATCHSPIN VARCHAR2(16)
PGA_USED_MEM NUMBER
PGA_ALLOC_MEM NUMBER
PGA_FREEABLE_MEM NUMBER
PGA_MAX_MEM NUMBER
SQL> select pname,background from v$process;
PNAME B
----- -
PMON 1
PSP0 1
VKTM 1
GEN0 1
DIAG 1
DBRM 1
DIA0 1
MMAN 1
DBW0 1
LGWR 1
PNAME B
----- -
CKPT 1
SMON 1
RECO 1
MMON 1
MMNL 1
D000
S000
ARC0 1
ARC1 1
ARC2 1
PNAME B
----- -
ARC3 1
CTWR 1
QMNC 1
VKRM 1
J000
CJQ0 1
J001
W001 1
W002 1
SMCO 1
W000 1
PNAME B
----- -
Q000 1
Q001 1
35 rows selected.
SQL>
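As a cross-check, the v$bgprocess view also lists the background processes together with a short description of each. The query below is a small sketch of how it can be used; only rows with a non-null process address (PADDR) represent processes that are actually running:
SQL> col name for a6
SQL> col description for a60
SQL> select name, description
     from v$bgprocess
     where paddr <> hextoraw('00')
     order by name;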
GRANTING privilege using ROLE [dba_role_privs,dba_sys_privs]
By default, an ordinary user (e.g. HR or SCOTT) cannot grant a privilege to others because the privilege was not granted to them WITH ADMIN OPTION; they also cannot change another user's password or unlock an account. Only SYS, or a user with the DBA role, can:
oracle@www.kida1.com:/u01/app/oracle/scripts$ll *.sql
-rw-r--r--. 1 oracle oinstall 437 Dec 6 21:05 Grant_IT_ENGINEER_ROLE.sql
-rw-r--r--. 1 oracle oinstall 202 Dec 1 17:52 sh_users.sql
oracle@www.kida1.com:/u01/app/oracle/scripts$sql
SQL> startup
ORACLE instance started.
Total System Global Area 1553305600 bytes
Fixed Size 2253544 bytes
Variable Size 956304664 bytes
Database Buffers 587202560 bytes
Redo Buffers 7544832 bytes
Database mounted.
Database opened.
SQL> @Grant_IT_ENGINEER_ROLE.sql
Grant succeeded.
Grant succeeded.
Grant succeeded.
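The content of Grant_IT_ENGINEER_ROLE.sql is not shown above; the three "Grant succeeded." messages suggest it issues three grants. A hypothetical sketch of such a script is shown below - the IT_ENGINEER role name and the KCHANDO grantee are assumptions for illustration only, and the script is run as SYS against an existing role:
-- Hypothetical content of Grant_IT_ENGINEER_ROLE.sql (assumes the role already exists):
GRANT CREATE SESSION, ALTER USER TO it_engineer;      -- lets role members log in and change passwords/unlock accounts
GRANT SELECT ANY DICTIONARY TO it_engineer;           -- read access to the data dictionary views
GRANT it_engineer TO kchando;                         -- give the role to a named user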
TROUBLESHOOTING [See page 270+]
1. tnsnames.ora => client-side net service names that map to a database service (view it with vi $TNS_ADMIN/tnsnames.ora)
2. listener.ora => listener configuration on the database server (view it with vi $TNS_ADMIN/listener.ora)
3. Why are bind variables (:bv) important? Can you force literals to be converted into bind variables? => YES, with CURSOR_SHARING=FORCE. Replacing literals with bind variables saves both MEMORY and CPU (the statement is hard-parsed once and the cursor is shared), making OLTP applications faster and more scalable. See the example below.
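A small sketch of both points, assuming the classic demo EMP table (table and column names are assumptions for illustration):
SQL> variable deptno number
SQL> exec :deptno := 10;
SQL> select ename, sal from emp where deptno = :deptno;   -- bind variable: one shared cursor serves any deptno value
SQL> alter system set cursor_sharing = FORCE;             -- forces literals to be replaced by system-generated binds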
MySQL
MySQL Performance Tuning
Notes
INSTALL: MySQL 5.6 on your Windows PC; its configuration file is e.g. C:\ProgramData\MySQL\MySQL Server 5.6\my.ini
Buffer size: mostly used for LOG writes => log records are staged temporarily in a buffer (memory zone) before being flushed to disk, so each write does not hit the disk individually or compete directly with the rest of the allocated RAM. This increases the performance of the database.
PERFORMANCE TUNING: covers both HARDWARE (the platform the database runs on) and SOFTWARE (database memory, buffers, caches, indexes, tables, reads - e.g. sequential reads, serial reads, scattered reads, uncommitted/committed reads - queues (enqueue => waiting in line), transactions, etc.)
***NOTE: To relieve DISK I/O bottlenecks, RAID striping is often recommended. RAID (Redundant Array of Independent Disks) uses a RAID controller card in the motherboard to treat multiple hard disks as one unit, which in turn makes data input/output between the server storage and the database faster, hence improving PERFORMANCE. (Keep in mind that RAID 0 is pure striping with no redundancy; RAID 10 is usually preferred for databases because it adds mirroring.)
CACHE (InnoDB buffer pool): Allocated Memory/RAM = 268.00M for the entire database, Buffer Pool instances# = 8, Free Memory = 260.36M
CACHE hit RATIO = 95.15% (high) => when SELECT statements retrieved data from the database, 95% of the reads were served from the cache, which is a very good value. You DON'T want your cache hit ratio to be low: if it is too low, you probably do not have enough memory (RAM) to keep the working set in the cache, so pages keep getting evicted, hence the low hit ratio. Consider allocating more memory to the database so the cache can grow (see the check below).
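A minimal sketch of how to check the buffer pool allocation and the figures behind the hit ratio from the mysql prompt (the 268M above would correspond to innodb_buffer_pool_size; the 384M resize value is just an example):
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool%';           -- allocated size and number of buffer pool instances
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';  -- logical reads vs. reads that had to go to disk
-- Hit ratio = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
mysql> SET GLOBAL innodb_buffer_pool_size = 402653184;      -- 384M; online resize needs MySQL 5.7+, in 5.6 edit my.ini and restart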
Index Usage
PARTITIONING (improving PERFORMANCE of LARGE databases)
Partitioning is used for VERY LARGE tables with millions and millions of records where not all of the records are accessed frequently. We divide (partition) the table so that the rarely accessed records are placed in one partition while the most frequently accessed records are placed in another. This improves data access time because a query scans only the selected partitions instead of the entire table. We can then apply different strategies to each partition (particularly BACKUPs): if the older records are no longer changing, we can back them up less frequently than the current, actively accessed records.
You end up with MULTIPLE partitions. See below:
How can you tell that a table has been partitioned? Check whether it has a COMPOUND/composite primary key of two columns (e.g. employee_id, store_id): MySQL requires every unique key, including the primary key, to include the partitioning column.
Normally, a non-partitioned table mostly has just one primary key column (i.e. id).
How to Create Tables and Partitions in MYSQL database
Applying Partitioning on an EXISTING table (e.g. EMPLOYEES) based on DATE
REMOVING a PARTITION from a table
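A minimal sketch of the three operations named above, assuming an EMPLOYEES table with a hire_date column (table, column and partition names are assumptions for illustration; the composite primary key is needed because MySQL requires the partitioning column to be part of every unique key):
-- Create a new range-partitioned table:
CREATE TABLE employees (
  emp_id    INT NOT NULL,
  name      VARCHAR(50),
  hire_date DATE NOT NULL,
  PRIMARY KEY (emp_id, hire_date)          -- composite PK must include the partition column
)
PARTITION BY RANGE (YEAR(hire_date)) (
  PARTITION p_old     VALUES LESS THAN (2010),
  PARTITION p_recent  VALUES LESS THAN (2017),
  PARTITION p_max     VALUES LESS THAN MAXVALUE
);

-- Apply partitioning to an EXISTING table (the table is rebuilt):
ALTER TABLE employees PARTITION BY RANGE (YEAR(hire_date)) (
  PARTITION p_old     VALUES LESS THAN (2010),
  PARTITION p_max     VALUES LESS THAN MAXVALUE
);

-- Remove a partition (this also DELETES the rows stored in it):
ALTER TABLE employees DROP PARTITION p_old;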
UNDERSTANDING REPLICATION[ORACLE=DATAGUARD]
The whole idea of REPLICATION is that you (the DBA) keep MORE than one copy of the database, maintained from one SERVER to ANOTHER.
As far as applications are concerned, they can READ and WRITE on the MASTER server but only READ from the SLAVE.
SYNCHRONIZATION => both copies of the database (on the MASTER and on the SLAVE servers) stay in sync with each other, rather than one being updated while the other is not.
Types of synchronization: Asynchronous REPLICATION (one-way, the default) => data is FIRST written and committed on the MASTER server and then shipped to and applied on the SLAVE server shortly afterwards.
Synchronous REPLICATION => whatever is written to the MASTER server is applied on the SLAVE at the same time (useful when the SLAVE also has clients connecting to it that need live, always-consistent data).
REPLICATION CONFIGURATION
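A minimal sketch of an asynchronous master/slave setup on MySQL 5.6, assuming host names master1/slave1, a replication account named repl and example binary log coordinates - adjust all of these to your own environment:
# my.ini / my.cnf on the MASTER:
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.ini / my.cnf on the SLAVE:
[mysqld]
server-id = 2

-- On the MASTER: create the replication account and note the current log file/position:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'StrongPassword';
mysql> SHOW MASTER STATUS;

-- On the SLAVE: point it at the master and start replicating:
mysql> CHANGE MASTER TO MASTER_HOST='master1', MASTER_USER='repl',
       MASTER_PASSWORD='StrongPassword', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=120;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G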
DATA EXPORT (Mysql)
To migrate a FULL database into a new (empty) database shell, a full export is fine, but the two export options differ: with the highlighted option (Export to Dump Project Folder) you can later do a SELECTIVE restore (e.g. if 2 objects/records were invalid, you can restore only those 2 out of the entire exported database), but the export is slower. With the "Export to Self-Contained File" option the export is FASTER, but you cannot do a selective restore (e.g. of just those 2 invalid objects/records).
Importing and RESTORING data are essentially the same thing (make sure the database you are importing the previously exported dump file into already exists).
MySQL can also be installed so as to run multiple instances on the SAME machine (host)/server. The reason: "I don't want another PHYSICAL server because the one I'm using has plenty of MEMORY, DISK storage and CPU and can accommodate more than one instance (i.e. multiple instances)."
Backup and RECOVERY strategies(MySql)
Running Backup using sql script (mysql)
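A minimal sketch of a script-based backup and restore using mysqldump, assuming the root account and a database named shop (names are assumptions for illustration):
# Full, consistent logical backup (InnoDB) of one database:
mysqldump -u root -p --single-transaction --routines --triggers shop > shop_backup.sql

# All databases in one dump file:
mysqldump -u root -p --single-transaction --all-databases > full_backup.sql

# Restore (for a single-database dump, the target database must already exist):
mysql -u root -p shop < shop_backup.sql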
MySQL QUESTIONS and ANSWERS
Quiz
DISASTER RECOVERY
SCENARIOs
PostgreSQL
Download the PostgreSQL zip file > extract the .tar file > install PostgreSQL
MSSQL has a maximum row size of about 8KB per row/record, but in PostgreSQL it is 1.6TB
The PostgreSQL port# by default is 5432
Admin Console: pgAdmin III
Currently, the above database has no user objects (no tables, schemas, views, etc.). In order to load a USER-defined database, we have to CREATE one and then RESTORE a database into it.
Now, go get the sample database from the location where you stored it in Windows (C:\DATA) > Restore
PostgreSQL architecture: the database is where ALL the ACTUAL data is stored > PROPERTIES of postgres
The primary architectural components of ALL PostgreSQL databases include the catalogs, the schemas, etc. This means that every time you create a NEW database, you get the SAME kind of structure (catalogs, schemas, etc.) underneath the expanded name of the PostgreSQL database.
PostgreSQL Properties
Collations=>Sort orders in terms of ALPHABETIC order, etc
You can access the BELOW views to find out STATISTICS about your INDEXES, TABLES, etc
CATALOGS are SYSTEM supplied OBJECTS =>you(DBA) do not have to create anything under the SYSTEM CATALOGS (in MSSQL=>Dynamic Views, etc). They're used for Administrative purposes against other objects
When you're dealing with your actual data, you will find those under the SCHEMA
***NOTE: The public schema means that objects (tables, views, indexes, etc.) under it can be accessed by anyone who logs into the DATABASE. However, you can create a private schema (e.g. TEST) and move objects that you don't want public users to view into TEST; that way they are secured from unauthorized users. (Think of a schema as a user plus his objects, e.g. Kchando with his Documents, Desktop, Pictures, etc. is the Windows 10 equivalent of a schema - just a demonstration.)
We'll deal with objects such as TABLES,TRIGGERS,VIEWS,FUNCTIONS, etc
The above are some of the PRIMARY ARCHITECTURAL objects in postgreSQL database
PostgreSQL security
HOW to create a Postgres database: e.g. CREATE DATABASE Test_DB1; - it automatically adds ALL the default database properties
**NOTE: The user who initially creates the database OWNS all of its OBJECTS (e.g. the postgres user owns all the tables, etc.)
SCHEMA creation (Public Schema is created by default =>Default schema)
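A minimal sketch of creating a database, a private schema and a table inside it, re-using the Test_DB1 and TEST names mentioned above (the employee table and its columns are assumptions for illustration):
CREATE DATABASE test_db1;            -- default properties (owner, encoding, collation) are added automatically

-- Connect to test_db1, then create a private schema owned by a specific user:
CREATE SCHEMA test AUTHORIZATION postgres;

-- Objects created inside the schema are separate from the public schema:
CREATE TABLE test.employee (
  emp_id     serial PRIMARY KEY,
  last_name  varchar(50) NOT NULL
);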
DATA Types and PRIMARY key Constraint
Adding Records into a table
SQL language for PostgreSQL: TRUNCATE removes the records in a table but not the definition (design) of the table itself. ALTER makes changes to the definition. DROP TABLE drops the table ENTIRELY.
Insert data into a TABLE
Update Statement (DML)
Contraints
A primary key will not accept NULLs, but a unique constraint will:
NULL values and unique constraints: you can delete (blank out) a value in a column that has only a unique constraint, but you cannot delete the value of a primary key column. If you try, the database tells you that the PK (primary key) column cannot be left NULL.
Check Constraint
Foreign Key Constraints (Referenced Key)
Customer table: Customer ID is used to Uniquely identify each and every customer
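A minimal sketch of the constraint types discussed above, on a hypothetical customer/orders pair of tables (table and column names are assumptions for illustration):
CREATE TABLE customer (
  customer_id  int PRIMARY KEY,                 -- no NULLs, no duplicates: uniquely identifies each customer
  email        varchar(100) UNIQUE,             -- duplicates rejected, but NULL is allowed
  age          int CHECK (age >= 18)            -- check constraint
);

CREATE TABLE orders (
  order_id     int PRIMARY KEY,
  customer_id  int REFERENCES customer (customer_id),   -- foreign (referenced) key
  order_date   date NOT NULL
);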
The SELECT query is used to retrieve data from a table:
Group by:
How to create and use Views: a view is nothing more than a SAVED query. This means that to create a view, you FIRST write a query and then SAVE it. The query should be RE-USABLE:
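A minimal sketch tying the SELECT, GROUP BY and view ideas together, on the hypothetical customer/orders tables from the constraint example above:
-- Retrieve data:
SELECT customer_id, order_date FROM orders WHERE order_date >= DATE '2017-01-01';

-- Aggregate with GROUP BY:
SELECT customer_id, count(*) AS order_count
FROM orders
GROUP BY customer_id;

-- Save the query as a re-usable view:
CREATE VIEW v_order_counts AS
  SELECT customer_id, count(*) AS order_count
  FROM orders
  GROUP BY customer_id;

SELECT * FROM v_order_counts;        -- the view runs the saved query each time it is referenced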
Indexes: ask whether the table is more frequently READ from or more frequently WRITTEN to => that decides whether an index helps or hurts performance. Indexes speed up searches of tables/objects, etc.
Index creation: ix_lastname (that is, an index that lets the customer table be searched/sorted by the customer's last name)
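A minimal sketch of the ix_lastname example, assuming the customer table has a last_name column (the column name is an assumption):
CREATE INDEX ix_lastname ON customer (last_name);

-- Verify that the planner uses it (an index scan appears once the table is large enough and statistics are current):
EXPLAIN SELECT * FROM customer WHERE last_name = 'Chando';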
Roles and Users:
Role WITH ADMIN OPTION => allows a member of the role (e.g. kenCDBA in a DBA role) to grant other users access to that role
Schema Privileges
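A minimal sketch of a role granted WITH ADMIN OPTION and of schema-level privileges, re-using the kenCDBA user and TEST schema names mentioned above (the dba_team role name and password are assumptions for illustration):
CREATE ROLE dba_team NOLOGIN;                        -- group role
CREATE ROLE kencdba LOGIN PASSWORD 'StrongPassword'; -- login role (user)

GRANT dba_team TO kencdba WITH ADMIN OPTION;         -- kencdba may now grant dba_team to others

-- Schema privileges: allow the group role to use and read the TEST schema
GRANT USAGE ON SCHEMA test TO dba_team;
GRANT SELECT ON ALL TABLES IN SCHEMA test TO dba_team;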
PostgreSQL command prompt
OR:
DATA FILES
BACKUP
Backup only TABLES? (yes!)
RESTORE (backed-up database)
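A minimal sketch of the commands behind the pgAdmin backup/restore dialogs, assuming a database named test_db1 and the test.employee table from earlier (the custom -F c dump format is what allows a selective, table-level restore):
# Where the data files live:
psql -U postgres -c "SHOW data_directory;"

# Back up the whole database in custom format:
pg_dump -U postgres -F c -f test_db1.backup test_db1

# Back up only one table:
pg_dump -U postgres -F c -t test.employee -f employee.backup test_db1

# Restore into an (empty) database; -t restores a single table selectively:
pg_restore -U postgres -d test_db1 test_db1.backup
pg_restore -U postgres -d test_db1 -t employee employee.backup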
Dependencies and Dependents: a dependency means "this object (e.g. the employee table) depends on the Scott schema".
REPORTS
FINAL REVIEW
QUESTIONS and ANSWERS
Testing
Exercises:
Answers: