Repost: How to clean up the listener log, configure listener security, and check the alert log

 


1. First stop the listener process (tnslsnr) from writing to its log: lsnrctl set log_status off
2. Copy the listener log file (listener.log), naming the copy listener.log.yyyymmdd: cp listener.log listener.log.20150622
3. Empty the listener log file (listener.log). There are many ways to empty a file, for example: cat /dev/null > listener.log
4. Re-enable listener (tnslsnr) logging: lsnrctl set log_status on

On a system whose listener.log grows very quickly, you can turn listener logging off altogether (lsnrctl set log_status off) so the listener never writes to the file, or you can schedule a job to clean the log up periodically.

 

 

# Day of month, used to stamp the backup copy.
rq=`date +"%d"`
# Back up the current listener log, then truncate it while logging is paused.
cp $ORACLE_HOME/network/log/listener.log $ORACLE_BACKUP/network/log/listener_$rq.log
su - oracle -c "lsnrctl set log_status off"
cp /dev/null $ORACLE_HOME/network/log/listener.log
su - oracle -c "lsnrctl set log_status on"
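
The snippet above can be scheduled as the periodic job mentioned earlier. A minimal sketch, assuming it is saved as /home/oracle/clean_listener_log.sh (a hypothetical path) and scheduled from root's crontab because of the su calls:

# crontab -e (as root): run at 02:00 on the 1st of every month.
0 2 1 * * /home/oracle/clean_listener_log.sh >> /tmp/clean_listener_log.log 2>&1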

 

[oracle@test ~]$ lsnrctl                                                 
                                                              
LSNRCTL for Linux: Version 9.2.0.8.0 - Production on 26-JUN-2011 08:24:09                         
Copyright (c) 1991, 2006, Oracle Corporation. All rights reserved.                            
                                                              
Welcome to LSNRCTL, type "help" for information.                                        
LSNRCTL> set current_listener listener_demo92 --> set the current listener
Current Listener is listener_demo92                                            


DBesd3[/opt/oracle11gr1/product/11.1/bin][epprod] >./lsnrctl status dbr

Listener Parameter File   /etc/listener.ora
Listener Log File         /opt/oracle11gr1/product/11.1/log/diag/tnslsnr/DBesd3/dbr/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.36.112)(PORT=15021)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=REGISTER_dbr)))
Services Summary...
Service "dbr" has 1 instance(s).
  Instance "dbr", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully



Listener security settings:

SECURE_REGISTER_dtaruat = (IPC)
SECURE_CONTROL_listener = (TCPS,IPC)
ADMIN_RESTRICTIONS_listener = ON
DIAG_ADR_ENABLED_listener = OFF
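
For context, these directives go in listener.ora (here located in /etc, per the Listener Parameter File shown above). A minimal sketch with explanatory comments; the listener names dtaruat and listener come from the lines above, everything else is illustrative:

# SECURE_REGISTER_<name>: transports through which instances may register services.
SECURE_REGISTER_dtaruat = (IPC)
# SECURE_CONTROL_<name>: transports through which control commands are accepted.
SECURE_CONTROL_listener = (TCPS,IPC)
# With ADMIN_RESTRICTIONS on, runtime "lsnrctl set" changes (including set log_status)
# are refused; edit listener.ora and run "lsnrctl reload" instead.
ADMIN_RESTRICTIONS_listener = ON
# Disable ADR for this listener, reverting to the plain listener.log style of logging.
DIAG_ADR_ENABLED_listener = OFF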

 

 

Option 2:

 

2. Check the size of the listener log

On both Windows and Linux, you can find the listener log file and check its size using the log locations shown above, or by searching for the log file name.
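
For instance, a couple of hedged one-liners (the first path is the pre-11g default under the Oracle Home, the second is the typical 11g ADR layout; adjust both to your environment):

ls -lh $ORACLE_HOME/network/log/listener.log
du -sh $ORACLE_BASE/diag/tnslsnr/`hostname`/listener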

3. Backing up and renaming the log file

While it is running, the Oracle listener does not allow its log file to be deleted or renamed directly; logging can be enabled or disabled by setting the log status to ON or OFF.

When the log file has grown too large, this lets you rename the file as a backup without stopping the listener.

a) On Windows

C:\> cd D:\oracle\product\10.2.0\db_1\NETWORK\log -- change to the directory containing the listener log

D:\oracle\product\10.2.0\db_1\NETWORK\log> lsnrctl set log_status off -- pause (take offline) listener logging

D:\oracle\product\10.2.0\db_1\NETWORK\log> rename listener.log listener.old150424 -- rename the log file, usually appending a date

D:\oracle\product\10.2.0\db_1\NETWORK\log> lsnrctl set log_status on -- resume listener logging; a new log file is created automatically

b) On Unix/Linux

LSNRCTL>
LSNRCTL>set current_listener db

LSNRCTL>show  log_status
LSNRCTL>set log_status off

 

#mv listener.log listener.old150424

$lsnrctl set log_status on
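
The same rotation can be scripted non-interactively with here-documents, which is the pattern the longer samples below use. A minimal sketch, assuming the listener is named db (as in the session above) and the command is run from the directory containing listener.log:

# Pause logging, rename the log, then resume logging; the listener keeps running.
lsnrctl <<EOF
set current_listener db
set log_status off
EOF
mv listener.log listener.log.`date +%Y%m%d`
lsnrctl <<EOF
set current_listener db
set log_status on
EOF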

 


 

Manually cleaning up .aud files (when a large number of audit files has accumulated):

cd      # change into the audit file destination (adump) directory first

 find . -name '*.aud'  -print  | xargs rm -f
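
The command above removes every .aud file under the current directory. To keep recent audit files, a hedged variant with an age filter (30 days here, matching the retention used in the cron examples later in this note; -print0/-0 require a find/xargs that support them):

find . -name '*.aud' -mtime +30 -print0 | xargs -0 rm -f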

 

2.sample

 

#!/bin/bash
#
# Script used to clean up listener logs in any Oracle environment.
#
# Rotates:      Listener Logs
#
# Scheduling:  0 19 5 * *  /home/oracle/_cron/cls_oracle/cls_network.sh -d 31 > /home/oracle/_cron/cls_oracle/cls_oracle.log 2>&1
#
# History:
#
# Cron fields:
# *   *    *   *     *
# min hour day month weekday
#
## a full /var filesystem will cause problems if the listener log is never cleaned




RM="rm -f"
RMDIR="rm -rf"
LS="ls -l"
MV="mv"
TOUCH="touch"
TESTTOUCH="echo touch"
TESTMV="echo mv"
TESTRM=$LS
TESTRMDIR=$LS
GZIP="gzip"
 

SUCCESS=0
FAILURE=1
TEST=0
HOSTNAME=`hostname`
ORAENV="oraenv"
TODAY=`date +%Y%m%d`
ORIGPATH=/usr/local/bin:$PATH
ORIGLD=$LD_LIBRARY_PATH
export PATH=$ORIGPATH

 

# Usage function.

f_usage(){

  echo "Usage: `basename $0` -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]"
  echo "       -d = Mandatory default number of days to keep log files that are not explicitly passed as parameters."
  echo "       -n = Optional number of days to keep network log files."
  echo "       -h = Optional help mode."
  echo "       -t = Optional test mode. Does not delete any files."
}

 

if [ $# -lt 1 ]; then
  f_usage
  exit $FAILURE
fi



# Function used to check the validity of days.
f_checkdays(){
  if [ $1 -lt 1 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
  if [ $? -ne 0 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
} 

# Function used to cut log files.

f_cutlog(){

  # Set name of log file.
  LOG_FILE=$1
  CUT_FILE=${LOG_FILE}.${TODAY}
  FILESIZE=`ls -l $LOG_FILE | awk '{print $5}'`

 

  # Cut the log file if it has not been cut today.
  if [ -f $CUT_FILE ]; then
    echo "Log Already Cut Today: $CUT_FILE"
  elif [ ! -f $LOG_FILE ]; then
    echo "Log File Does Not Exist: $LOG_FILE"
  elif [ $FILESIZE -eq 0 ]; then
    echo "Log File Has Zero Size: $LOG_FILE"
  else

    # Cut file.
    echo "Cutting Log File: $LOG_FILE"
    $MV $LOG_FILE $CUT_FILE
	$GZIP  $CUT_FILE
    $TOUCH $LOG_FILE
  fi
}


# Function used to delete log files.
f_deletelog(){

  # Set name of log file.
  CLEAN_LOG=$1

  # Set time limit and confirm it is valid.
  CLEAN_DAYS=$2
  f_checkdays $CLEAN_DAYS
 
  # Delete old log files if they exist.
  find $CLEAN_LOG.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] -type f -mtime +$CLEAN_DAYS -exec $RM {} \; 2>/dev/null
}

  

 

# Parse the command line options.

while getopts d:n:th OPT; do
  case $OPT in
    d) DDAYS=$OPTARG
       ;; 
    n) NDAYS=$OPTARG
       ;;
    t) TEST=1
       ;;
    h) f_usage
       exit 0
       ;;
    *) f_usage
       exit 2
       ;;
  esac
done
shift $(($OPTIND - 1))




# Ensure the default number of days is passed.

if [ -z "$DDAYS" ]; then
  echo "ERROR: The default days parameter is mandatory."
  f_usage
  exit $FAILURE
fi
f_checkdays $DDAYS

 

echo "`basename $0` Started `date`."


# Use test mode if specified.

if [ $TEST -eq 1 ]
then
  RM=$TESTRM
  RMDIR=$TESTRMDIR
  MV=$TESTMV
  TOUCH=$TESTTOUCH
  echo "Running in TEST mode."
fi



NDAYS=${NDAYS:-$DDAYS}; echo "Keeping network logs for $NDAYS days."; f_checkdays $NDAYS
  




# Clean Listener Log Files.

# Get the list of running listeners. It is assumed that if the listener is not running, the log file does not need to be cut.

ps -ef| grep tnslsnr | grep -v grep |awk '{print $9" "$10}'| while read LSNR; do    ##for unix

 

  # Derive the lsnrctl path from the tnslsnr process path.

  TNSLSNR=`echo $LSNR | awk '{print $1}'`
  ORACLE_PATH=`dirname $TNSLSNR`
  ORACLE_HOME=`dirname $ORACLE_PATH`
  PATH=$ORACLE_PATH:$ORIGPATH
  LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORIGLD
  LSNRCTL=$ORACLE_PATH/lsnrctl
  echo "Listener Control Command: $LSNRCTL"

  

  # Derive the listener name from the running process.

  LSNRNAME=`echo $LSNR | awk '{print $2}' | tr "[:upper:]" "[:lower:]"`
  echo "Listener Name: $LSNRNAME"

  

  # Get the listener version.

  LSNRVER=`$LSNRCTL version | grep "LSNRCTL" | grep "Version" | awk '{print $5}' | awk -F. '{print $1}'`
  echo "Listener Version: $LSNRVER"

 

  # Get the TNS_ADMIN variable.

  echo "Initial TNS_ADMIN: $TNS_ADMIN"
  unset TNS_ADMIN
  TNS_ADMIN=`$LSNRCTL status $LSNRNAME | grep "Listener Parameter File" | awk '{print $4}'`

  if [ ! -z $TNS_ADMIN ]; then
    export TNS_ADMIN=`dirname $TNS_ADMIN`
  else
    export TNS_ADMIN=$ORACLE_HOME/network/admin
  fi

  echo "Network Admin Directory: $TNS_ADMIN"

 

  # If the listener is 11g, get the diagnostic dest, etc...

  if [ $LSNRVER -ge 11 ]; then
    # Get the listener log file directory. 

    LSNRDIAG=`$LSNRCTL<<EOF | grep log_directory | awk '{print $6}'
set current_listener $LSNRNAME
show log_directory
EOF`

    echo "Listener Diagnostic Directory: $LSNRDIAG"

 

    # Get the listener trace file name.

    LSNRLOG=`lsnrctl<<EOF | grep trc_directory | awk '{print $6"/"$1".log"}'
set current_listener $LSNRNAME
show trc_directory
EOF`
    echo "Listener Log File: $LSNRLOG"

 

  # If 10g or lower, do not use the diagnostic dest.
  else
    # Get the listener log file location.
    LSNRLOG=`$LSNRCTL status $LSNRNAME | grep "Listener Log File" | awk '{print $4}'`
  fi

 

 

  # See if the listener is logging.
  if [ -z "$LSNRLOG" ]; then
    echo "Listener Logging is OFF. Not rotating the listener log."
  # See if the listener log exists.
  elif  [ ! -r "$LSNRLOG" ]; then
    echo "Listener Log Does Not Exist: $LSNRLOG"
  # See if the listener log has been cut today.
  elif [ -f $LSNRLOG.$TODAY ]; then
    echo "Listener Log Already Cut Today: $LSNRLOG.$TODAY"
  # Cut the listener log if the previous two conditions were not met.
  else

 

    # Remove old 11g+ listener log XML files.

    if [ ! -z "$LSNRDIAG" ] && [ -d "$LSNRDIAG" ]; then
      echo "Cleaning Listener Diagnostic Dest: $LSNRDIAG"
      find $LSNRDIAG -type f -name "log\_[0-9]*.xml" -mtime +$NDAYS -exec $RM {} \; 2>/dev/null
    fi
    

    # Disable logging.

    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status off
EOF
 
    # Cut the listener log file.
    f_cutlog $LSNRLOG
     # Enable logging.
    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status on
EOF
 
    # Delete old listener logs.
    f_deletelog $LSNRLOG $NDAYS
  fi
done
 

echo "`basename $0` Finished `date`."

 

exit

 

 

sample2:

#!/bin/bash
#
# Script used to cleanup any Oracle environment.
#
# Cleans:      audit_file_dest
#              background_dump_dest  parameter -d
#              core_dump_dest  parameter -d
#              user_dump_dest  parameter -d
#
# Rotates:     Alert Logs  parameter -d
#              Listener Logs -d
#
# clean  :     archivelog  parameter -l
# Scheduling:  00 00 * * * /home/oracle/_cron/cls_oracle/cls_oracle.sh -d 31 -l 50 > /home/oracle/_cron/cls_oracle/cls_oracle.log 2>&1
#
# Created By:  Tommy Wang  2012-09-10
#
# History: 
#

. /etc/profile
 
RM="rm -f"
RMDIR="rm -rf"
LS="ls -l"
MV="mv"
TOUCH="touch"
TESTTOUCH="echo touch"
TESTMV="echo mv"
TESTRM=$LS
TESTRMDIR=$LS
 
SUCCESS=0
FAILURE=1
TEST=0
HOSTNAME=`hostname`
ORAENV="oraenv"
TODAY=`date +%Y%m%d`
ORIGPATH=/usr/local/bin:$PATH
ORIGLD=$LD_LIBRARY_PATH
export PATH=$ORIGPATH


export ORACLE_BASE=/oracle11g
ERASE=^H
export HOME=/home/oracle
export ORACLE_HOME=/opt/oracle11g/product/11.1.0
 
# Usage function.
f_usage(){
  echo "Usage: `basename $0` -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]"
  echo "       -d = Mandatory default number of days to keep log files that are not explicitly passed as parameters."
  echo "       -a = Optional number of days to keep audit logs."
  echo "       -b = Optional number of days to keep background dumps."
  echo "       -c = Optional number of days to keep core dumps."
  echo "       -n = Optional number of days to keep network log files."
  echo "       -r = Optional number of days to keep clusterware log files."
  echo "       -u = Optional number of days to keep user dumps."
  echo "       -l = Optional number of days to keep archive logs."
  echo "       -h = Optional help mode."
  echo "       -t = Optional test mode. Does not delete any files."
}
 
if [ $# -lt 1 ]; then
  f_usage
  exit $FAILURE
fi
 
# Function used to check the validity of days.
f_checkdays(){
  if [ $1 -lt 1 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
  if [ $? -ne 0 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
} 
 
# Function used to cut log files.
f_cutlog(){
 
  # Set name of log file.
  LOG_FILE=$1
  CUT_FILE=${LOG_FILE}.${TODAY}
  FILESIZE=`ls -l $LOG_FILE | awk '{print $5}'`
 
  # Cut the log file if it has not been cut today.
  if [ -f $CUT_FILE ]; then
    echo "Log Already Cut Today: $CUT_FILE"
  elif [ ! -f $LOG_FILE ]; then
    echo "Log File Does Not Exist: $LOG_FILE"
  elif [ $FILESIZE -eq 0 ]; then
    echo "Log File Has Zero Size: $LOG_FILE"
  else
    # Cut file.
    echo "Cutting Log File: $LOG_FILE"
    $MV $LOG_FILE $CUT_FILE
    $TOUCH $LOG_FILE
  fi
}
 
# Function used to delete log files.
f_deletelog(){
 
  # Set name of log file.
  CLEAN_LOG=$1
 
  # Set time limit and confirm it is valid.
  CLEAN_DAYS=$2
  f_checkdays $CLEAN_DAYS
  
  # Delete old log files if they exist.
  find $CLEAN_LOG.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] -type f -mtime +$CLEAN_DAYS -exec $RM {} \; 2>/dev/null
}
  
# Function used to get database parameter values.
# (-z "STRING" is true when the string has zero length.)
f_getparameter(){
  if [ -z "$1" ]; then
    return
  fi
  PARAMETER=$1
  sqlplus -s /nolog <<EOF | awk -F= "/^a=/ {print \$2}"
set head off pagesize 0 feedback off linesize 200
whenever sqlerror exit 1
conn / as sysdba
select 'a='||value from v\$parameter where name = '$PARAMETER';
EOF
}

# Function used to delete archive logs older than the given number of days (via RMAN).
f_delete_arch(){
  if [ -z "$1" ]; then
    return
  fi
  arch=$1
$ORACLE_HOME/bin/rman target / <<EOF
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt archivelog until time 'sysdate-$arch';
exit;
EOF
} 
 
# Function to get a unique (de-duplicated) list of directories.
f_getuniq(){
 
  if [ -z "$1" ]; then
    return
  fi
  ARRCNT=0
  for e in `echo $1`; do
    MATCH=N
    x=0
    # See if the element is already in the array.
    # ${#ARRAY[*]} is the number of elements; ${ARRAY[$x]} is the element at index x.
    while [ $x -lt ${#ARRAY[*]} ]; do
      if [ "$e" = "${ARRAY[$x]}" ]; then
        MATCH=Y
      fi
      x=`expr $x + 1`
    done
    # Only append the element if it was not already present.
    if [ "$MATCH" = "N" ]; then
      ARRAY[$ARRCNT]=$e
      ARRCNT=`expr $ARRCNT + 1`
    fi
  done
### print the de-duplicated list
  echo ${ARRAY[*]}
}
 
# Parse the command line options.
while getopts a:b:c:d:n:r:u:l:th OPT; do
  case $OPT in
    a) ADAYS=$OPTARG
       ;;
    b) BDAYS=$OPTARG
       ;;
    c) CDAYS=$OPTARG
       ;;
    d) DDAYS=$OPTARG
       ;;
    n) NDAYS=$OPTARG
       ;;
    r) RDAYS=$OPTARG
       ;;
    u) UDAYS=$OPTARG
       ;;
    l) LDAYS=$OPTARG
       ;;
    t) TEST=1
       ;;
    h) f_usage
       exit 0
       ;;
    *) f_usage
       exit 2
       ;;
  esac
done
shift $(($OPTIND - 1))
 
# Ensure the default number of days is passed.
if [ -z "$DDAYS" ]; then
  echo "ERROR: The default days parameter is mandatory."
  f_usage
  exit $FAILURE
fi
f_checkdays $DDAYS


#############RUNNING MODE 


echo "`basename $0` Started `date`." 
 
# Use test mode if specified.
if [ $TEST -eq 1 ]
then
  RM=$TESTRM
  RMDIR=$TESTRMDIR
  MV=$TESTMV
  TOUCH=$TESTTOUCH
  echo "Running in TEST mode."
fi
 
# Set the number of days to the default if not explicitly set.
ADAYS=${ADAYS:-$DDAYS}; echo "Keeping audit logs for $ADAYS days."; f_checkdays $ADAYS
BDAYS=${BDAYS:-$DDAYS}; echo "Keeping background logs for $BDAYS days."; f_checkdays $BDAYS
CDAYS=${CDAYS:-$DDAYS}; echo "Keeping core dumps for $CDAYS days."; f_checkdays $CDAYS
NDAYS=${NDAYS:-$DDAYS}; echo "Keeping network logs for $NDAYS days."; f_checkdays $NDAYS
RDAYS=${RDAYS:-$DDAYS}; echo "Keeping clusterware logs for $RDAYS days."; f_checkdays $RDAYS
UDAYS=${UDAYS:-$DDAYS}; echo "Keeping user logs for $UDAYS days."; f_checkdays $UDAYS
 
# Check for the oratab file.
if [ -f /var/opt/oracle/oratab ]; then
  ORATAB=/var/opt/oracle/oratab
elif [ -f /etc/oratab ]; then
  ORATAB=/etc/oratab
else
  echo "ERROR: Could not find oratab file."
  exit $FAILURE
fi
 
# Build list of distinct Oracle Home directories.
OH=`egrep -i ":Y|:N" $ORATAB | grep -v "^#" | grep -v "\*" | cut -d":" -f2 | sort | uniq`
 
# Exit if there are not Oracle Home directories.
if [ -z "$OH" ]; then
  echo "No Oracle Home directories to clean."
  exit $SUCCESS
fi
 
# Get the list of running databases.
SIDS=`ps -e -o args | grep pmon | grep -v grep | awk -F_ '{print $3}' | sort`
 
# Gather information for each running database.
for ORACLE_SID in `echo $SIDS`
do
 
  # Set the Oracle environment.
  ORAENV_ASK=NO
  export ORACLE_SID
  . $ORAENV
 
  if [ $? -ne 0 ]; then
    echo "Could not set Oracle environment for $ORACLE_SID."
  else
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORIGLD
 
    ORAENV_ASK=YES
 
    echo "ORACLE_SID: $ORACLE_SID"
 
    # Get the audit file destination and append it to the ADUMPDIRS list.
    ADUMPDEST=`f_getparameter audit_file_dest`
    if [ ! -z "$ADUMPDEST" ] && [ -d "$ADUMPDEST" 2>/dev/null ]; then
      echo "  Audit Dump Dest: $ADUMPDEST"
      ADUMPDIRS="$ADUMPDIRS $ADUMPDEST"
    fi
 
    # Get the background_dump_dest and append it to the BDUMPDIRS list.
    BDUMPDEST=`f_getparameter background_dump_dest`
    echo "  Background Dump Dest: $BDUMPDEST"
    if [ ! -z "$BDUMPDEST" ] && [ -d "$BDUMPDEST" ]; then
      BDUMPDIRS="$BDUMPDIRS $BDUMPDEST"
    fi
 
    # Get the core_dump_dest.
    CDUMPDEST=`f_getparameter core_dump_dest`
    echo "  Core Dump Dest: $CDUMPDEST"
    if [ ! -z "$CDUMPDEST" ] && [ -d "$CDUMPDEST" ]; then
      CDUMPDIRS="$CDUMPDIRS $CDUMPDEST"
    fi
 
    # Get the user_dump_dest.
    UDUMPDEST=`f_getparameter user_dump_dest`
    echo "  User Dump Dest: $UDUMPDEST"
    if [ ! -z "$UDUMPDEST" ] && [ -d "$UDUMPDEST" ]; then
      UDUMPDIRS="$UDUMPDIRS $UDUMPDEST"
    fi
    
    # Delete old archive logs for this database (via RMAN).
    f_delete_arch $LDAYS
  fi
done
 
# Do cleanup for each Oracle Home.
for ORAHOME in `f_getuniq "$OH"`
do
 
  # Get the standard audit directory if present.
  if [ -d $ORAHOME/rdbms/audit ]; then
     ADUMPDIRS="$ADUMPDIRS $ORAHOME/rdbms/audit"
  fi
 
  # Get the Cluster Ready Services Daemon (crsd) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/crsd ]; then
    CRSLOGDIRS="$CRSLOGDIRS $ORAHOME/log/$HOSTNAME/crsd"
  fi
 
  # Get the  Oracle Cluster Registry (OCR) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/client ]; then
    OCRLOGDIRS="$OCRLOGDIRS $ORAHOME/log/$HOSTNAME/client"
  fi
 
  # Get the Cluster Synchronization Services (CSS) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/cssd ]; then
    CSSLOGDIRS="$CSSLOGDIRS $ORAHOME/log/$HOSTNAME/cssd"
  fi
 
  # Get the Event Manager (EVM) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/evmd ]; then
    EVMLOGDIRS="$EVMLOGDIRS $ORAHOME/log/$HOSTNAME/evmd"
  fi
 
  # Get the RACG log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/racg ]; then
    RACGLOGDIRS="$RACGLOGDIRS $ORAHOME/log/$HOSTNAME/racg"
  fi
 
done
 
# Clean the audit_dump_dest directories.
if [ ! -z "$ADUMPDIRS" ]; then
  for DIR in `f_getuniq "$ADUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Audit Dump Directory: $DIR"
      find $DIR -type f -name "*.aud" -mtime +$ADAYS -exec $RM {} \; 2>/dev/null
    fi
  done
fi
 
# Clean the background_dump_dest directories.
if [ ! -z "$BDUMPDIRS" ]; then
  for DIR in `f_getuniq "$BDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Background Dump Destination Directory: $DIR"
      # Clean up old trace files.
      find $DIR -type f -name "*.tr[c,m]" -mtime +$BDAYS -exec $RM {} \; 2>/dev/null
      find $DIR -type d -name "cdmp*" -mtime +$BDAYS -exec $RMDIR {} \; 2>/dev/null
    fi
  
    if [ -d $DIR ]; then
      # Cut the alert log and clean old ones.
      for f in `find $DIR -type f -name "alert\_*.log" ! -name "alert_[0-9A-Z]*.[0-9]*.log" 2>/dev/null`; do
        echo "Alert Log: $f"
        f_cutlog $f
        f_deletelog $f $BDAYS
      done
    fi
  done
fi
 
# Clean the core_dump_dest directories.
if [ ! -z "$CDUMPDIRS" ]; then
  for DIR in `f_getuniq "$CDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Core Dump Destination: $DIR"
      find $DIR -type d -name "core*" -mtime +$CDAYS -exec $RMDIR {} \; 2>/dev/null
    fi
  done
fi
 
# Clean the user_dump_dest directories.
if [ ! -z "$UDUMPDIRS" ]; then
  for DIR in `f_getuniq "$UDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning User Dump Destination: $DIR"
      find $DIR -type f -name "*.trc" -mtime +$UDAYS -exec $RM {} \; 2>/dev/null
    fi
  done
fi
 
# Cluster Ready Services Daemon (crsd) Log Files
for DIR in `f_getuniq "$CRSLOGDIRS $OCRLOGDIRS $CSSLOGDIRS $EVMLOGDIRS $RACGLOGDIRS"`; do
  if [ -d $DIR ]; then
    echo "Cleaning Clusterware Directory: $DIR"
    find $DIR -type f -name "*.log" -mtime +$RDAYS -exec $RM {} \; 2>/dev/null
  fi
done
 
# Clean Listener Log Files.
# Get the list of running listeners. It is assumed that if the listener is not running, the log file does not need to be cut.
##ps -e -o args | grep tnslsnr | grep -v grep | while read LSNR; do
## for linux 
ps -ef| grep tnslsnr | grep -v grep |awk '{print $8" "$9}'| while read LSNR; do
## for unix 
###ps -ef| grep tnslsnr | grep -v grep |awk '{print $9" "$10}'| while read LSNR; do 
 
  # Derive the lsnrctl path from the tnslsnr process path.
  TNSLSNR=`echo $LSNR | awk '{print $1}'`
  ORACLE_PATH=`dirname $TNSLSNR`
  ORACLE_HOME=`dirname $ORACLE_PATH`
  PATH=$ORACLE_PATH:$ORIGPATH
  LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORIGLD
  LSNRCTL=$ORACLE_PATH/lsnrctl
  echo "Listener Control Command: $LSNRCTL"
  
  # Derive the listener name from the running process.
  LSNRNAME=`echo $LSNR | awk '{print $2}' | tr "[:upper:]" "[:lower:]"`
  echo "Listener Name: $LSNRNAME"
  
  # Get the listener version.
  LSNRVER=`$LSNRCTL version | grep "LSNRCTL" | grep "Version" | awk '{print $5}' | awk -F. '{print $1}'`
  echo "Listener Version: $LSNRVER"
 
  # Get the TNS_ADMIN variable (the directory containing listener.ora).
  echo "Initial TNS_ADMIN: $TNS_ADMIN"
  unset TNS_ADMIN
  TNS_ADMIN=`$LSNRCTL status $LSNRNAME | grep "Listener Parameter File" | awk '{print $4}'`
  if [ ! -z $TNS_ADMIN ]; then
    export TNS_ADMIN=`dirname $TNS_ADMIN`
  else
    export TNS_ADMIN=$ORACLE_HOME/network/admin
  fi
  echo "Network Admin Directory: $TNS_ADMIN"
 
  # If the listener is 11g, get the diagnostic dest, etc...
  if [ $LSNRVER -ge 11 ]; then
    
    # Get the listener log (XML) directory.
    LSNRDIAG=`$LSNRCTL<<EOF | grep log_directory | awk '{print $6}'
set current_listener $LSNRNAME
show log_directory
EOF`
    echo "Listener Diagnostic Directory: $LSNRDIAG"
 
    # Derive the listener log file path from the trace directory.
    LSNRLOG=`lsnrctl<<EOF | grep trc_directory | awk '{print $6"/"$1".log"}'
set current_listener $LSNRNAME
show trc_directory
EOF`
    echo "Listener Log File: $LSNRLOG"
 
  # If 10g or lower, do not use diagnostic dest. 
  else
    # Get the listener log file location.
    LSNRLOG=`$LSNRCTL status $LSNRNAME | grep "Listener Log File" | awk '{print $4}'`
  fi
 
 
  # See if the listener is logging.
  if [ -z "$LSNRLOG" ]; then
    echo "Listener Logging is OFF. Not rotating the listener log."
  # See if the listener log exists.
  elif  [ ! -r "$LSNRLOG" ]; then
    echo "Listener Log Does Not Exist: $LSNRLOG"
  # See if the listener log has been cut today.
  elif [ -f $LSNRLOG.$TODAY ]; then
    echo "Listener Log Already Cut Today: $LSNRLOG.$TODAY"
  # Cut the listener log if the previous two conditions were not met.
  else
 
    # Remove old 11g+ listener log XML files.
    if [ ! -z "$LSNRDIAG" ] && [ -d "$LSNRDIAG" ]; then
      echo "Cleaning Listener Diagnostic Dest: $LSNRDIAG"
      find $LSNRDIAG -type f -name "log\_[0-9]*.xml" -mtime +$NDAYS -exec $RM {} \; 2>/dev/null
    fi
    
    # Disable logging.
    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status off
EOF
 
    # Cut the listener log file.
    f_cutlog $LSNRLOG
 
    # Enable logging.
    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status on
EOF
 
    # Delete old listener logs.
    f_deletelog $LSNRLOG $NDAYS
 
  fi
done



 
echo "`basename $0` Finished `date`."
 
exit

 

 

 

 

3.scripts

http://blog.itpub.net/29812751/viewspace-1821607/

#!/bin/bash
#
# Script used to cleanup any Oracle environment.
#
# Cleans:      audit_file_dest
#              background_dump_dest
#              core_dump_dest
#              user_dump_dest
#
# Rotates:     Alert Logs
#              Listener Logs
#
# Scheduling:  00 00 * * * /home/oracle/_cron/cls_oracle/cls_oracle.sh -d 31 > /home/oracle/_cron/cls_oracle/cls_oracle.log 2>&1
#
# Created By:  Tommy Wang  2012-09-10
#
# History: 
#
 
RM="rm -f"
RMDIR="rm -rf"
LS="ls -l"
MV="mv"
TOUCH="touch"
TESTTOUCH="echo touch"
TESTMV="echo mv"
TESTRM=$LS
TESTRMDIR=$LS
 
SUCCESS=0
FAILURE=1
TEST=0
HOSTNAME=`hostname`
ORAENV="oraenv"
TODAY=`date +%Y%m%d`
ORIGPATH=/usr/local/bin:$PATH
ORIGLD=$LD_LIBRARY_PATH
export PATH=$ORIGPATH
 
# Usage function.
f_usage(){
  echo "Usage: `basename $0` -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]"
  echo "       -d = Mandatory default number of days to keep log files that are not explicitly passed as parameters."
  echo "       -a = Optional number of days to keep audit logs."
  echo "       -b = Optional number of days to keep background dumps."
  echo "       -c = Optional number of days to keep core dumps."
  echo "       -n = Optional number of days to keep network log files."
  echo "       -r = Optional number of days to keep clusterware log files."
  echo "       -u = Optional number of days to keep user dumps."
  echo "       -h = Optional help mode."
  echo "       -t = Optional test mode. Does not delete any files."
}
 
if [ $# -lt 1 ]; then
  f_usage
  exit $FAILURE
fi
 
# Function used to check the validity of days.
f_checkdays(){
  if [ $1 -lt 1 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
  if [ $? -ne 0 ]; then
    echo "ERROR: Number of days is invalid."
    exit $FAILURE
  fi
} 
 
# Function used to cut log files.
f_cutlog(){
 
  # Set name of log file.
  LOG_FILE=$1
  CUT_FILE=${LOG_FILE}.${TODAY}
  FILESIZE=`ls -l $LOG_FILE | awk '{print $5}'`
 
  # Cut the log file if it has not been cut today.
  if [ -f $CUT_FILE ]; then
    echo "Log Already Cut Today: $CUT_FILE"
  elif [ ! -f $LOG_FILE ]; then
    echo "Log File Does Not Exist: $LOG_FILE"
  elif [ $FILESIZE -eq 0 ]; then
    echo "Log File Has Zero Size: $LOG_FILE"
  else
    # Cut file.
    echo "Cutting Log File: $LOG_FILE"
    $MV $LOG_FILE $CUT_FILE
    $TOUCH $LOG_FILE
  fi
}
 
# Function used to delete log files.
f_deletelog(){
 
  # Set name of log file.
  CLEAN_LOG=$1
 
  # Set time limit and confirm it is valid.
  CLEAN_DAYS=$2
  f_checkdays $CLEAN_DAYS
  
  # Delete old log files if they exist.
  find $CLEAN_LOG.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] -type f -mtime +$CLEAN_DAYS -exec $RM {} \; 2>/dev/null
}
  
# Function used to get database parameter values.
f_getparameter(){
  if [ -z "$1" ]; then
    return
  fi
  PARAMETER=$1
  sqlplus -s /nolog <<EOF | awk -F= "/^a=/ {print \$2}"
set head off pagesize 0 feedback off linesize 200
whenever sqlerror exit 1
conn / as sysdba
select 'a='||value from v\$parameter where name = '$PARAMETER';
EOF
}
 
# Function to get unique list of directories.
f_getuniq(){
 
  if [ -z "$1" ]; then
    return
  fi
 
  ARRCNT=0
 
  for e in `echo $1`; do
    MATCH=N
    x=0
 
    # See if the array element is a duplicate.
    while [ $x -lt ${#ARRAY[*]} ]; do
      if [ "$e" = "${ARRAY[$x]}" ]; then
        MATCH=Y
      fi
      x=`expr $x + 1`
    done
    if [ "$MATCH" = "N" ]; then
      ARRAY[$ARRCNT]=$e
      ARRCNT=`expr $ARRCNT + 1`
    fi
  done
  echo ${ARRAY[*]}
}
 
# Parse the command line options.
while getopts a:b:c:d:n:r:u:th OPT; do
  case $OPT in
    a) ADAYS=$OPTARG
       ;;
    b) BDAYS=$OPTARG
       ;;
    c) CDAYS=$OPTARG
       ;;
    d) DDAYS=$OPTARG
       ;;
    n) NDAYS=$OPTARG
       ;;
    r) RDAYS=$OPTARG
       ;;
    u) UDAYS=$OPTARG
       ;;
    t) TEST=1
       ;;
    h) f_usage
       exit 0
       ;;
    *) f_usage
       exit 2
       ;;
  esac
done
shift $(($OPTIND - 1))
 
# Ensure the default number of days is passed.
if [ -z "$DDAYS" ]; then
  echo "ERROR: The default days parameter is mandatory."
  f_usage
  exit $FAILURE
fi
f_checkdays $DDAYS
 
echo "`basename $0` Started `date`."
 
# Use test mode if specified.
if [ $TEST -eq 1 ]
then
  RM=$TESTRM
  RMDIR=$TESTRMDIR
  MV=$TESTMV
  TOUCH=$TESTTOUCH
  echo "Running in TEST mode."
fi
 
# Set the number of days to the default if not explicitly set.
ADAYS=${ADAYS:-$DDAYS}; echo "Keeping audit logs for $ADAYS days."; f_checkdays $ADAYS
BDAYS=${BDAYS:-$DDAYS}; echo "Keeping background logs for $BDAYS days."; f_checkdays $BDAYS
CDAYS=${CDAYS:-$DDAYS}; echo "Keeping core dumps for $CDAYS days."; f_checkdays $CDAYS
NDAYS=${NDAYS:-$DDAYS}; echo "Keeping network logs for $NDAYS days."; f_checkdays $NDAYS
RDAYS=${RDAYS:-$DDAYS}; echo "Keeping clusterware logs for $RDAYS days."; f_checkdays $RDAYS
UDAYS=${UDAYS:-$DDAYS}; echo "Keeping user logs for $UDAYS days."; f_checkdays $UDAYS
 
# Check for the oratab file.
if [ -f /var/opt/oracle/oratab ]; then
  ORATAB=/var/opt/oracle/oratab
elif [ -f /etc/oratab ]; then
  ORATAB=/etc/oratab
else
  echo "ERROR: Could not find oratab file."
  exit $FAILURE
fi
 
# Build list of distinct Oracle Home directories.
OH=`egrep -i ":Y|:N" $ORATAB | grep -v "^#" | grep -v "\*" | cut -d":" -f2 | sort | uniq`
 
# Exit if there are not Oracle Home directories.
if [ -z "$OH" ]; then
  echo "No Oracle Home directories to clean."
  exit $SUCCESS
fi
 
# Get the list of running databases.
SIDS=`ps -e -o args | grep pmon | grep -v grep | awk -F_ '{print $3}' | sort`
 
# Gather information for each running database.
for ORACLE_SID in `echo $SIDS`
do
 
  # Set the Oracle environment.
  ORAENV_ASK=NO
  export ORACLE_SID
  . $ORAENV
 
  if [ $? -ne 0 ]; then
    echo "Could not set Oracle environment for $ORACLE_SID."
  else
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORIGLD
 
    ORAENV_ASK=YES
 
    echo "ORACLE_SID: $ORACLE_SID"
 
    # Get the audit file destination.
    ADUMPDEST=`f_getparameter audit_file_dest`
    if [ ! -z "$ADUMPDEST" ] && [ -d "$ADUMPDEST" 2>/dev/null ]; then
      echo "  Audit Dump Dest: $ADUMPDEST"
      ADUMPDIRS="$ADUMPDIRS $ADUMPDEST"
    fi
 
    # Get the background_dump_dest.
    BDUMPDEST=`f_getparameter background_dump_dest`
    echo "  Background Dump Dest: $BDUMPDEST"
    if [ ! -z "$BDUMPDEST" ] && [ -d "$BDUMPDEST" ]; then
      BDUMPDIRS="$BDUMPDIRS $BDUMPDEST"
    fi
 
    # Get the core_dump_dest.
    CDUMPDEST=`f_getparameter core_dump_dest`
    echo "  Core Dump Dest: $CDUMPDEST"
    if [ ! -z "$CDUMPDEST" ] && [ -d "$CDUMPDEST" ]; then
      CDUMPDIRS="$CDUMPDIRS $CDUMPDEST"
    fi
 
    # Get the user_dump_dest.
    UDUMPDEST=`f_getparameter user_dump_dest`
    echo "  User Dump Dest: $UDUMPDEST"
    if [ ! -z "$UDUMPDEST" ] && [ -d "$UDUMPDEST" ]; then
      UDUMPDIRS="$UDUMPDIRS $UDUMPDEST"
    fi
  fi
done
 
# Do cleanup for each Oracle Home.
for ORAHOME in `f_getuniq "$OH"`
do
 
  # Get the standard audit directory if present.
  if [ -d $ORAHOME/rdbms/audit ]; then
     ADUMPDIRS="$ADUMPDIRS $ORAHOME/rdbms/audit"
  fi
 
  # Get the Cluster Ready Services Daemon (crsd) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/crsd ]; then
    CRSLOGDIRS="$CRSLOGDIRS $ORAHOME/log/$HOSTNAME/crsd"
  fi
 
  # Get the  Oracle Cluster Registry (OCR) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/client ]; then
    OCRLOGDIRS="$OCRLOGDIRS $ORAHOME/log/$HOSTNAME/client"
  fi
 
  # Get the Cluster Synchronization Services (CSS) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/cssd ]; then
    CSSLOGDIRS="$CSSLOGDIRS $ORAHOME/log/$HOSTNAME/cssd"
  fi
 
  # Get the Event Manager (EVM) log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/evmd ]; then
    EVMLOGDIRS="$EVMLOGDIRS $ORAHOME/log/$HOSTNAME/evmd"
  fi
 
  # Get the RACG log directory if present.
  if [ -d $ORAHOME/log/$HOSTNAME/racg ]; then
    RACGLOGDIRS="$RACGLOGDIRS $ORAHOME/log/$HOSTNAME/racg"
  fi
 
done
 
# Clean the audit_dump_dest directories.
if [ ! -z "$ADUMPDIRS" ]; then
  for DIR in `f_getuniq "$ADUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Audit Dump Directory: $DIR"
      find $DIR -type f -name "*.aud" -mtime +$ADAYS -exec $RM {} \; 2>/dev/null
    fi
  done
fi
 
# Clean the background_dump_dest directories.
if [ ! -z "$BDUMPDIRS" ]; then
  for DIR in `f_getuniq "$BDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Background Dump Destination Directory: $DIR"
      # Clean up old trace files.
      find $DIR -type f -name "*.tr[c,m]" -mtime +$BDAYS -exec $RM {} \; 2>/dev/null
      find $DIR -type d -name "cdmp*" -mtime +$BDAYS -exec $RMDIR {} \; 2>/dev/null
    fi
  
    if [ -d $DIR ]; then
      # Cut the alert log and clean old ones.
      for f in `find $DIR -type f -name "alert\_*.log" ! -name "alert_[0-9A-Z]*.[0-9]*.log" 2>/dev/null`; do
        echo "Alert Log: $f"
        f_cutlog $f
        f_deletelog $f $BDAYS
      done
    fi
  done
fi
 
# Clean the core_dump_dest directories.
if [ ! -z "$CDUMPDIRS" ]; then
  for DIR in `f_getuniq "$CDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning Core Dump Destination: $DIR"
      find $DIR -type d -name "core*" -mtime +$CDAYS -exec $RMDIR {} \; 2>/dev/null
    fi
  done
fi
 
# Clean the user_dump_dest directories.
if [ ! -z "$UDUMPDIRS" ]; then
  for DIR in `f_getuniq "$UDUMPDIRS"`; do
    if [ -d $DIR ]; then
      echo "Cleaning User Dump Destination: $DIR"
      find $DIR -type f -name "*.trc" -mtime +$UDAYS -exec $RM {} \; 2>/dev/null
    fi
  done
fi
 
# Cluster Ready Services Daemon (crsd) Log Files
for DIR in `f_getuniq "$CRSLOGDIRS $OCRLOGDIRS $CSSLOGDIRS $EVMLOGDIRS $RACGLOGDIRS"`; do
  if [ -d $DIR ]; then
    echo "Cleaning Clusterware Directory: $DIR"
    find $DIR -type f -name "*.log" -mtime +$RDAYS -exec $RM {} \; 2>/dev/null
  fi
done
 
# Clean Listener Log Files.
# Get the list of running listeners. It is assumed that if the listener is not running, the log file does not need to be cut.
ps -e -o args | grep tnslsnr | grep -v grep | while read LSNR; do
 
  # Derive the lsnrctl path from the tnslsnr process path.
  TNSLSNR=`echo $LSNR | awk '{print $1}'`
  ORACLE_PATH=`dirname $TNSLSNR`
  ORACLE_HOME=`dirname $ORACLE_PATH`
  PATH=$ORACLE_PATH:$ORIGPATH
  LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORIGLD
  LSNRCTL=$ORACLE_PATH/lsnrctl
  echo "Listener Control Command: $LSNRCTL"
  
  # Derive the listener name from the running process.
  LSNRNAME=`echo $LSNR | awk '{print $2}' | tr "[:upper:]" "[:lower:]"`
  echo "Listener Name: $LSNRNAME"
  
  # Get the listener version.
  LSNRVER=`$LSNRCTL version | grep "LSNRCTL" | grep "Version" | awk '{print $5}' | awk -F. '{print $1}'`
  echo "Listener Version: $LSNRVER"
 
  # Get the TNS_ADMIN variable.
  echo "Initial TNS_ADMIN: $TNS_ADMIN"
  unset TNS_ADMIN
  TNS_ADMIN=`$LSNRCTL status $LSNRNAME | grep "Listener Parameter File" | awk '{print $4}'`
  if [ ! -z $TNS_ADMIN ]; then
    export TNS_ADMIN=`dirname $TNS_ADMIN`
  else
    export TNS_ADMIN=$ORACLE_HOME/network/admin
  fi
  echo "Network Admin Directory: $TNS_ADMIN"
 
  # If the listener is 11g, get the diagnostic dest, etc...
  if [ $LSNRVER -ge 11 ]; then
    
    # Get the listener log file directory. 
    LSNRDIAG=`$LSNRCTL<<EOF | grep log_directory | awk '{print $6}'
set current_listener $LSNRNAME
show log_directory
EOF`
    echo "Listener Diagnostic Directory: $LSNRDIAG"
 
    # Get the listener trace file name.
    LSNRLOG=`lsnrctl<<EOF | grep trc_directory | awk '{print $6"/"$1".log"}'
set current_listener $LSNRNAME
show trc_directory
EOF`
    echo "Listener Log File: $LSNRLOG"
 
  # If 10g or lower, do not use diagnostic dest.
  else
    # Get the listener log file location.
    LSNRLOG=`$LSNRCTL status $LSNRNAME | grep "Listener Log File" | awk '{print $4}'`
  fi
 
 
  # See if the listener is logging.
  if [ -z "$LSNRLOG" ]; then
    echo "Listener Logging is OFF. Not rotating the listener log."
  # See if the listener log exists.
  elif  [ ! -r "$LSNRLOG" ]; then
    echo "Listener Log Does Not Exist: $LSNRLOG"
  # See if the listener log has been cut today.
  elif [ -f $LSNRLOG.$TODAY ]; then
    echo "Listener Log Already Cut Today: $LSNRLOG.$TODAY"
  # Cut the listener log if the previous two conditions were not met.
  else
 
    # Remove old 11g+ listener log XML files.
    if [ ! -z "$LSNRDIAG" ] && [ -d "$LSNRDIAG" ]; then
      echo "Cleaning Listener Diagnostic Dest: $LSNRDIAG"
      find $LSNRDIAG -type f -name "log\_[0-9]*.xml" -mtime +$NDAYS -exec $RM {} \; 2>/dev/null
    fi
    
    # Disable logging.
    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status off
EOF
 
    # Cut the listener log file.
    f_cutlog $LSNRLOG
 
    # Enable logging.
    $LSNRCTL <<EOF
set current_listener $LSNRNAME
set log_status on
EOF
 
    # Delete old listener logs.
    f_deletelog $LSNRLOG $NDAYS
 
  fi
done
 
echo "`basename $0` Finished `date`."
 
exit




####sample 5;

#!/bin/bash
#sample cron entries; make sure the script has 755 permissions
##linux
##29 11 * * * su - oracle1 -c /tmp/1.sh > /tmp/2.log
##29 11 * * * su - grid -c /tmp/1.sh > /tmp/1.log
##aix: run on the 26th day of each month
## 09 15 26 * * su - opoid -c sh /db/oid/oracleapp/shell/trace_clean.sh > /tmp/trace_clean.log
## 09 15 26 * * su - grid -c sh /db/oid/oracleapp/shell/trace_clean.sh > /tmp/trace_clean_1.log
###define profile
. /etc/profile

f_profile(){
cd $HOME
OSNAME=`uname`
case $OSNAME in
SunOS) OSNAME=1 ;;
HP-UX) OSNAME=2 ;;
AIX) OSNAME=3 ;;
Linux) OSNAME=4 ;;
esac
##linux
if [ $OSNAME -eq 4 ]
then
. ./.bash_profile
##aix
elif [ $OSNAME -eq 3 ]
then
. ./.profile

fi
}

##define listener_name: the listener name must be given in lower case here even if the actual listener name is upper case

 

##LSNRNAME=sccms
f_lsnrname(){
cd $HOME
OSNAME=`uname`
case $OSNAME in
SunOS) OSNAME=1 ;;
HP-UX) OSNAME=2 ;;
AIX) OSNAME=3 ;;
Linux) OSNAME=4 ;;
esac
##linux
if [ $OSNAME -eq 4 ]
then
. ./.bash_profile
LSNRNAME=`ps -ef | grep tnslsnr | grep -v grep | grep -v -i scan | awk '{print $9}'`
LSNROWNER=`ps -ef | grep tnslsnr | grep -v grep | grep -v -i scan|head -n 1|awk '{print $1}'`
if [ $LSNROWNER = 'grid' ]
then
echo "*****running the listener clenn in grid user *******"

fi

##aix
elif [ $OSNAME -eq 3 ]
then
. ./.profile
LSNRNAME=`ps -ef | grep tnslsnr | grep -v grep | grep -v -i scan | awk '$3==1 {print $10}'`
LSNROWNER=`ps -ef | grep tnslsnr | grep -v grep | grep -v -i scan|head -n 1|awk '{print $1}'`
if [ $LSNROWNER = 'grid' ]
then
echo "*****running the listener clenn in grid user *******"
fi
fi
}

###define how many days of listener log history to keep
NDAYS=90

####define how many days of trace files to keep
CLEAN_DAYS=5
###define the date stamp used when rotating the listener log
TODAY=`date +%Y%m%d`

##define ORACLE_HOME
#ORACLE_HOME=/db/ccms/app/product/database/11g
#PATH=$ORACLE_HOME/bin:$PATH


f_getparameter(){
if [ -z "$1" ]; then
return
fi
PARAMETER=$1
sqlplus -s /nolog <<EOF | awk -F= "/^a=/ {print \$2}"
set head off pagesize 0 feedback off linesize 200
whenever sqlerror exit 1
conn / as sysdba
select 'a='||value from v\$parameter where name = '$PARAMETER';
EOF
}

 

f_cutlog(){
# Set name of log file.
LOG_FILE_0=$1
##convert the name to lower case
LOG_FILE=`echo ${LOG_FILE_0}| tr 'A-Z' 'a-z'`
CUT_FILE=${LOG_FILE}.${TODAY}
##FILESIZE=`du -sk $LOG_FILE`
# Cut the log file if it has not been cut today.
if [ -f $CUT_FILE ]; then
echo "Log Already Cut Today: $CUT_FILE"
elif [ ! -f $LOG_FILE ]; then
echo "Log File Does Not Exist: $LOG_FILE"
else
# Cut file.
echo "Cutting Log File: $LOG_FILE"
mv $LOG_FILE $CUT_FILE
gzip $CUT_FILE
touch $LOG_FILE
fi
}

### Function used to delete log files.
f_deletelog(){

# Set name of log file.
CLEAN_LOG=$1

# Set time limit and confirm it is valid.
CLEAN_DAYS=$2

LOG_DIR=`dirname $CLEAN_LOG`
cd $LOG_DIR
# Delete old log files if they exist.
find . -name $LSNRNAME.log.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].gz -mtime +$CLEAN_DAYS -exec rm {} \; 2>/dev/null
}

 

f_listener(){

# Get the listener version (approximated from the sqlplus -v client version).
LSNRVER=`sqlplus -v| awk -F. '{print $1}' |awk '{print $3}'`
echo "Listener Version: $LSNRVER"


if [ $LSNRVER -ge 11 ]; then

# Get the listener log (XML) directory.
LSNRDIAG=`lsnrctl<<EOF | grep log_directory | awk '{print $6}'
set current_listener $LSNRNAME
show log_directory
EOF`
echo "Listener Diagnostic Directory: $LSNRDIAG"

# Derive the listener log file path from the trace directory.
LSNRLOG=`lsnrctl<<EOF | grep trc_directory | awk '{print $6"/"$1".log"}'
set current_listener $LSNRNAME
show trc_directory
EOF`
echo "Listener Log file : $LSNRLOG"

else
# Get the listener log file location.
LSNRLOG=`lsnrctl status $LSNRNAME | grep "Listener Log File" | awk '{print $4}'`
echo "Listener Log file : $LSNRLOG"
fi

###begin cleaning the listener XML (log.xml) files
if [ $LSNRVER -ge 11 ]; then
cd $LSNRDIAG
pwd=`pwd`
echo $pwd
if [ $LSNRDIAG = $pwd ]
then
find $LSNRDIAG -type f -name "log\_[0-9]*.xml" -exec rm {} \; 2>/dev/null
echo '-------------------clean------'$LSNRDIAG 'path'
fi
else
echo "next"
fi

####begin rotating listener.log

# Disable logging.
lsnrctl <<EOF
set current_listener $LSNRNAME
set log_status off
EOF

# Cut the listener log file.
f_cutlog $LSNRLOG
echo '-------------------cut ------'$LSNRLOG''


# Enable logging.
lsnrctl <<EOF
set current_listener $LSNRNAME
set log_status on
EOF

# Delete old listener logs.
f_deletelog $LSNRLOG $NDAYS
echo '-------------------clean old listener logs of '$NDAYS' ago ------'$LSNRLOG''
}

 

 

audit_log()
{ #--- clean up audit (.aud) files
#audit_log=$(strings $ORACLE_HOME/dbs/spfile$ORACLE_SID.ora|grep -i audit_file_dest|awk -F'=' '{print $NF}'|sed "s/'//g")
audit_log=`f_getparameter audit_file_dest`
cd $audit_log
pwd=`pwd`
if [ $audit_log = $pwd ]
then
### (alternative: delete in batches of 10 files at a time)
##ls | xargs -n 10 rm -rf
find . -type f -name "*.aud" -exec rm {} \; 2>/dev/null
echo '-------------------clean------'$audit_log 'path'
fi
}

 


clean_db_trace_log()
{ #---- clean trace and dump files in the alert.log related directories
cd $alert_log
pwd=`pwd`
if [ $alert_log = $pwd ]
then
#echo `ls |grep -v alert `| xargs -n 10 rm -rf
find . -type f -name "*.tr[c,m]" -mtime +$CLEAN_DAYS -exec rm {} \; 2>/dev/null
find . -type d -name "cdmp*" -exec rmdir {} \; 2>/dev/null
find . -type d -name "core*" -exec rmdir {} \; 2>/dev/null
echo '-------------------clean------'$alert_log' path'
fi
}


alert_log()
{ #---- clean the alert.log related dump directories
alert_log=`f_getparameter background_dump_dest`
clean_db_trace_log
alert_log=`f_getparameter user_dump_dest`
clean_db_trace_log
alert_log=`f_getparameter core_dump_dest`
clean_db_trace_log
}

main()
{
if [ `ps -ef|grep -i smon|grep -v grep|wc -l` -ge 1 ]
then
echo '----------------'`date`'------------------ start ---------------------------'
# (optional: only run on the 27th of the month by uncommenting the date check below)
#date_=`date +%d`
# if [ $date_ -eq 27 ]
# then
f_profile
alert_log
audit_log
f_lsnrname
f_listener

# fi
echo '----------------'`date`'------------------ end ---------------------------'
fi
}

main

 

 






###ref:

1. bash is the default shell on Linux; ksh is common on Unix.


2. In test expressions, bash accepts both == and =;
ksh uses =.
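
A quick illustration of the difference (the == form is a bash extension; portable ksh/POSIX scripts should use the single =):

a=hello
[ "$a" = "hello" ]  && echo "= matches"       # portable
[ "$a" == "hello" ] && echo "== matches"      # bash; may fail under ksh/POSIX sh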


3.function:

bash:

There are three ways to write a bash function:

function func1 () { 
  echo "This is an example of bash function 1" 
}

function func2 { 
  echo "This is an example of bash function 2" 
}

func3 () { 
  echo "This is an example of bash function 3" 
}

However, the first form raises an error under some versions of bash.

 

 

ksh:
http://blog.csdn.net/shangboerds/article/details/48711561

#!/bin/ksh 
################### Functions must be defined before they are used

# Function definition style 1: ksh syntax
function fun_test1 { 
  print "This is function fun_test1."; 
}

# Function definition style 2: POSIX syntax
fun_test2 (){ 
  print "This is function fun_test2."; 
}

# Parameters
function fun_test3 { 
  print "Enter function $0";                  # $0 is the function name
  print "The first parameter is $1";          # $1 references the first parameter
  print "The second parameter is $2";         # $2 references the second parameter
  print "This function has $# parameters";    # $# is the number of parameters
  print "You are calling $0 $*";              # $* holds all of the parameters
  print "You are calling $0 $@";              # $@ holds all of the parameters
}

 

 


4. In bash/ksh, the environment must be defined at the very top of the script; otherwise functions cannot find the sqlplus executable:

ORACLE_HOME=/datalv03/10205/10g
PATH=$ORACLE_HOME/bin:$PATH
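
A minimal sketch of the ordering, reusing the example ORACLE_HOME above; the function name is illustrative only:

#!/bin/bash
# Environment first, so every function defined below can resolve sqlplus via PATH.
ORACLE_HOME=/datalv03/10205/10g
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME PATH

show_version(){
  sqlplus -v     # found through the PATH exported above
}

show_version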

 

5. In the test environment, create 0_ksh.sh and test the steps from top to bottom, taking care to be able to roll back (## to reproduce a problem) as well as roll forward, which roughly doubles the amount of code needed.
0_ksh.sh

 

 

6. test syntax:

Strictly speaking, -eq and = apply to different kinds of operands:
both can be used in a conditional test to check whether two operands are equal, but with the following difference:
-eq applies to integers only and cannot be used for string comparisons;
= works for both numbers and strings.
A quick test:

$ [ 1 -eq 1 ] && echo "ok"
ok
$ [ 1 = 1 ] && echo "ok"
ok
$ [ "a" -eq "a" ] && echo "ok"
sh: [: a: integer expression expected
$ [ "a" = "a" ] && echo "ok"
ok
To dig deeper, search for "shell conditional tests".

 

Operators for testing file attributes
The following operators return true if the file exists and has the corresponding attribute:

-b FILE
FILE exists and is block special
-c FILE
FILE exists and is character special
-d FILE
FILE exists and is a directory
-e FILE
FILE exists
-f FILE
FILE exists and is a regular file
-g FILE
FILE exists and is set-group-ID
-G FILE
FILE exists and is owned by the effective group ID
-h FILE
FILE exists and is a symbolic link (same as -L)
-k FILE
FILE exists and has its sticky bit set
-L FILE
FILE exists and is a symbolic link (same as -h)
-O FILE
FILE exists and is owned by the effective user ID
-p FILE
FILE exists and is a named pipe
-r FILE
FILE exists and read permission is granted
-s FILE
FILE exists and has a size greater than zero
-S FILE
FILE exists and is a socket
-t FD
file descriptor FD is opened on a terminal
-u FILE
FILE exists and its set-user-ID bit is set
-w FILE
FILE exists and write permission is granted
-x FILE
FILE exists and execute (or search) permission is granted
The list above is copied from man test.

 

 

 

#########sample (thanks to yang)

Analyze errors reported in the alert log over the last few days

 

#!/bin/bash
export LANG=en_US

 


DB_VERSION=`$ORACLE_HOME/bin/sqlplus -v | grep 'SQL\*Plus: Release' | awk '{print $3}' | awk -F. '{print $1}'`
case "$DB_VERSION" in
19)
echo ""
ALERT_FORMAT="NEW"
;;
11)
echo ""
ALERT_FORMAT="OLD"
;;
*)
echo ""
echo "unsupport version"
echo ""
exit 1
;;
esac


ORAINST_COUNT=`ps -ef | grep pmon | grep -v grep | grep -v '+ASM' | wc -l`
case "$ORAINST_COUNT" in
0)
echo ""
echo "Oracle instance not running"
exit 1
;;
1)
echo ""
ORACLE_SID=`ps -ef | grep pmon | grep -v grep | grep -v '+ASM' | awk -F_ '{print $3}'`
;;
*)
echo ""
echo "ORACLE SID list: "
ps -ef | grep pmon | grep -v grep | grep -v '+ASM' | awk -F_ '{print $3}'
read -p "Please input ORACLE_SID you want to check [$ORACLE_SID]: " READ_TEMP2
if [ -n "$READ_TEMP2" ]; then
ORACLE_SID="$READ_TEMP2"
fi
echo ""
;;
esac


DAYS_COUNT=1
read -p "How many day(s) you want to check? [1]: " READ_TEMP2
if [ -n "$READ_TEMP2" ]; then
DAYS_COUNT="$READ_TEMP2"
fi

expr $DAYS_COUNT + 0 >> /dev/null 2>&1
if [ $? -eq 0 ]; then
echo ""
else
echo ""
echo "input error"
echo ""
exit 1
fi

ALERTPATH=`$ORACLE_HOME/bin/adrci exec = "show incdir" | grep "ADR Home" | grep rdbms | grep -v dummy | awk '{print $4}' | sed -e 's/:/\/trace/' | grep -i $ORACLE_SID`
ALERTLOG=`ls $ALERTPATH/alert_$ORACLE_SID.log`


if [ "$ALERT_FORMAT" = "NEW" ]; then
i=0
while [ $i -lt $DAYS_COUNT ]; do
let i++
DATE_TEMP=$(date '+%Y-%m-%d' -d "-`expr $DAYS_COUNT - $i` days")
ALERT_LINE[$i]=`grep -n "$DATE_TEMP"T $ALERTPATH/alert_$ORACLE_SID.log | head -1 | awk -F: '{print $1}'`
done
let i++
ALERT_LINE[$i]=`wc -l $ALERTPATH/alert_$ORACLE_SID.log | awk '{print $1}'`

elif [ "$ALERT_FORMAT" = "OLD" ]; then
i=0
while [ $i -lt $DAYS_COUNT ]; do
let i++
DATE_TEMP1=$(date '+%a %b %d' -d "-`expr $DAYS_COUNT - $i` days")
DATE_TEMP2=$(date '+%Y' -d "-`expr $DAYS_COUNT - $i` days")
ALERT_LINE[$i]=`grep -n "$DATE_TEMP1" $ALERTPATH/alert_$ORACLE_SID.log| grep $DATE_TEMP2 | head -1 | awk -F: '{print $1}'`
done
let i++
ALERT_LINE[$i]=`wc -l $ALERTPATH/alert_$ORACLE_SID.log | awk '{print $1}'`
fi


echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo " REPORT DATE: `date '+%Y-%m-%d %H:%M:%S'`"
echo " ORACLE SID: $ORACLE_SID"
echo "Alert log PATH: $ALERTLOG"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"

i=0
while [ $i -lt $DAYS_COUNT ]; do
let i++
j=`expr $i + 1`
DATE_TEMP=$(date '+%Y-%m-%d' -d "-`expr $DAYS_COUNT - $i` days")
START_LINE=${ALERT_LINE[$i]}
END_LINE=${ALERT_LINE[$j]}
echo $DATE_TEMP
echo "--------------------"
sed -n "$START_LINE,$END_LINE"p $ALERTPATH/alert_$ORACLE_SID.log | egrep -i 'ORA-|ERROR|WARNING|FAIL' | grep -v 'Patch Description'
echo "--------------------"
echo ""
echo ""
echo ""
echo ""
done

 

 

 

###sample: oracle grid 11.2.0.4 adump directory cleanup

 On Linux, the manual deletion method is as follows:

cd /oracle/data1/srebdb_new_svc/creb/adump

List the file names with the command ls -U | less; -U (unsorted output) prevents ls from hanging or returning nothing when there are too many small files.

ls -U |less

du -sh
535M .

find /oracle/data1/srebdb_new_svc/creb/adump -maxdepth 1 -name '*.aud' -mtime +30 -delete

du -sh
256M

 

find /u01/app/11.2.0/grid/rdbms/audit /u01/app/11.2.0/grid/rdbms/audit /u01/app/oracle/admin/+ASM1/adump -maxdepth 1 -name '*.aud' -mtime +30 -delete

 

AIX

find /u01/app/oracle/admin/+ASM1/adump  -name  "*.aud"  -mtime +30   -exec rm -f {} \;

The automated deletion method is as follows:
DETAILS
Step 1 - Identify the ASM audit directories
There are three directories that may contain audit files. All three must be managed to control excessive growth.

Two default locations are based on environment variable settings when the ASM instance is started. To determine the default locations for your system, login as the Grid Infrastructure software owner (typically either oracle or grid), set your environment so that you can connect to the ASM instance, then run the 'echo' commands provided below. In this example, the two default audit directories are /u01/app/11.2.0/grid/rdbms/audit and /u01/app/oracle/admin/+ASM1/adump.


$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

$ echo $ORACLE_HOME/rdbms/audit
/u01/app/11.2.0/grid/rdbms/audit

$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/oracle/admin/+ASM1/adump


The third ASM audit directory can be found by logging into the ASM instance with SQL*Plus and running this statement:


$ sqlplus '/ as sysasm'

SQL> select value from v$parameter where name = 'audit_file_dest';

VALUE
--------------------------------------------------------------------------------
/u01/app/11.2.0/grid/rdbms/audit

All three ASM audit directories will be managed with cron(8).

Step 2 - Give Grid Infrastructure software owner permission to use cron
Audit files are owned by the Grid Infrastructure software owner, which is typically either oracle or grid. Commands to move or remove audit files must be run as the Grid Infrastructure software owner. As root, add the Grid Infrastructure software owner to /etc/cron.allow file. The examples below use the user oracle.


# echo oracle >> /etc/cron.allow

 

Step 3 - Add command to crontab to manage audit files weekly
As the Grid Infrastructure software owner, add an entry to the crontab file. The following command will start a vi(P) command edit session to edit the existing crontab file or create a new crontab file if one does not already exist.


$ crontab -e


Add the following to this file as a single line:


0 2 * * sun /usr/bin/find /u01/app/11.2.0/grid/rdbms/audit /u01/app/11.2.0/grid/rdbms/audit /u01/app/oracle/admin/+ASM1/adump -maxdepth 1 -name '*.aud' -mtime +30 -delete

This crontab entry executes the find(1) command at 2AM every Sunday. The find(1) command deletes all audit files in the three ASM audit directories that are older than 30 days.

If you wish to retain audit files for a longer period of time, instead of deleting the audit files with the find(1) command, you can archive audit files to a different directory or storage device using a crontab entry like the following:


0 2 * * sun /usr/bin/find /u01/app/11.2.0/grid/rdbms/audit /u01/app/11.2.0/grid/rdbms/audit /u01/app/oracle/admin/+ASM1/adump -maxdepth 1 -name '*.aud' -mtime +30 -execdir /bin/mv {} /archived_audit_dir \;

This crontab entry executes the find(1) command at 2AM every Sunday. The find(1) command moves all audit files in the three ASM audit directories that are older than 30 days to /archived_audit_dir.


Save and exit the crontab file using vi commands (<ESC> :wq), then verify crontab contents.

$ crontab -l
0 2 * * sun /usr/bin/find /u01/app/11.2.0/grid/rdbms/audit /u01/app/11.2.0/grid/rdbms/audit /u01/app/oracle/admin/+ASM1/adump -maxdepth 1 -name '*.aud' -mtime +30 -delete

 

Troubleshooting
If old audit files are not being removed, perform the following steps:

To monitor that cron(8) is executing the /usr/bin/find command on schedule and as the correct Grid Infrastructure software owner, review the /var/log/cron file as the root user for an entry like the following:
Feb 20 02:00:01 ####dbnn crond[6936]: (oracle) CMD (/usr/bin/find /u01/app/11.2.0/grid/rdbms/audit /u01/app/11.2.0/grid/rdbms/audit /u01/app/oracle/admin/+ASM1/adump -maxdepth 1 -name '*.aud' -mtime +60 -delete)

 

posted @ 2016-11-10 16:51  feiyun8616