Shell Programming
Continued from the previous post: Linux Extensions (Cloning a Virtual Machine)
Shell Overview
The shell is a command-line interpreter: it receives commands from applications and users, then calls into the operating-system kernel.
The shell is also a powerful programming language that is easy to write, easy to debug, and highly flexible.
By convention a shell script uses the .sh suffix. The suffix is optional: as long as the file is executable and its contents follow shell syntax, it will still run.
Writing a First Script (hello world)
A script begins with #!/bin/bash (this line specifies the interpreter).
-- Create a directory under the current directory
[root@hadoop100 ~]# mkdir scripts
-- Change into that directory
[root@hadoop100 ~]# cd scripts/
-- Create the script file
[root@hadoop100 scripts]# touch hello.sh
-- Edit the file
[root@hadoop100 scripts]# vim hello.sh
The script body is also parsed line by line.
-- This line is a comment telling the system to use /bin/bash to interpret the script
#!/bin/bash
-- Print "hello world"
echo "hello world"
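To make the line-by-line execution visible, here is a minimal sketch of a script with several commands (the file name multi.sh is my own example, not from the original post); each line runs in order, so the output appears in the same order:

#!/bin/bash
# Each command below is executed in order, from top to bottom
echo "step 1"
date
echo "step 3"

Running bash multi.sh prints "step 1", then the current date, then "step 3".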
Common Ways to Run a Script
1. Use the bash or sh command followed by the script's absolute or relative path. The script does not need the execute permission (+x); sh is a symbolic link to bash.
This opens a new child shell process and passes the path to it as an argument; the bash interpreter executes the script, so the script itself needs no execute permission.
2. Run the script directly by its absolute or relative path. The script must have the execute permission (+x).
This also opens a new child shell process to run the script at the given path, but here the script executes itself, so it must be executable.
[root@hadoop100 scripts]# bash hello.sh
hello world
[root@hadoop100 scripts]# cd ~
[root@hadoop100 ~]# bash scripts/hello.sh
hello world
[root@hadoop100 ~]# bash /root/scripts/hello.sh
hello world
[root@hadoop100 ~]# sh scripts/hello.sh
hello world
[root@hadoop100 ~]# sh /root/scripts/hello.sh
hello world
[root@hadoop100 ~]# ll scripts/
总用量 4
-rw-r--r--. 1 root root 31 10月  9 14:13 hello.sh
[root@hadoop100 ~]# /root/scripts/hello.sh
-bash: /root/scripts/hello.sh: 权限不够
[root@hadoop100 ~]# scripts/hello.sh
-bash: scripts/hello.sh: 权限不够
-- Add the execute permission (x) to hello.sh
[root@hadoop100 ~]# chmod +x scripts/hello.sh
[root@hadoop100 ~]# ll scripts/
总用量 4
-rwxr-xr-x. 1 root root 31 10月  9 14:13 hello.sh
[root@hadoop100 ~]# /root/scripts/hello.sh
hello world
[root@hadoop100 ~]# scripts/hello.sh
hello world
3. (For reference) Use the source command or the . (dot) command followed by the script's absolute or relative path. No execute permission is needed (source is a bash built-in command).
The script is executed by the current shell process rather than a child shell; because source is a shell built-in, the script needs no execute permission.
[root@hadoop100 ~]# . /root/scripts/hello.sh
hello world
[root@hadoop100 ~]# . scripts/hello.sh
hello world
[root@hadoop100 ~]# source /root/scripts/hello.sh
hello world
[root@hadoop100 ~]# source scripts/hello.sh
hello world
[root@hadoop100 ~]# type source
source 是 shell 内嵌
[root@hadoop100 ~]# type .
. 是 shell 内嵌
-- ps: show the current processes; -f: full-format listing
[root@hadoop100 ~]# ps -f
UID         PID   PPID  C STIME TTY          TIME CMD
root       2723   2707  0 16:32 pts/0    00:00:00 -bash
root       2779   2723  0 16:33 pts/0    00:00:00 ps -f
-- Run the bash command
[root@hadoop100 ~]# bash
-- A new bash process (a child shell) has been opened
[root@hadoop100 ~]# ps -f
UID         PID   PPID  C STIME TTY          TIME CMD
root       2723   2707  0 16:32 pts/0    00:00:00 -bash
root       2780   2723  1 16:33 pts/0    00:00:00 bash
root       2808   2780  0 16:33 pts/0    00:00:00 ps -f
[root@hadoop100 ~]# exit
exit
[root@hadoop100 ~]#
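The practical consequence of "child shell vs. current shell" is easiest to see with a variable. Below is a minimal sketch (the script name var.sh and the variable MY_VAR are my own illustration, not from the original post): a variable assigned by a script run with bash disappears when the child shell exits, while the same script run with source leaves the variable defined in the current shell.

#!/bin/bash
# var.sh - assigns a variable; whether it survives depends on how the script is run
MY_VAR="hello from var.sh"

bash var.sh          # child shell: MY_VAR is set there and lost when the child exits
echo $MY_VAR         # prints an empty line
source var.sh        # current shell: MY_VAR stays defined after the script finishes
echo $MY_VAR         # prints: hello from var.sh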
System Predefined Variables
Commonly used predefined variables include $HOME, $PWD, $USER, $SHELL, and so on.
The env and printenv commands list all global environment variables.
[root@hadoop100 ~]# echo $HOME
/root
[root@hadoop100 ~]# echo $PWD
/root
[root@hadoop100 ~]# echo $USER
root
[root@hadoop100 ~]# echo $SHELL
/bin/bash
[root@hadoop100 ~]# env
XDG_SESSION_ID=10
HOSTNAME=hadoop100
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.181.1 58563 22
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13: ...(long value abridged)
MAIL=/var/spool/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
PWD=/root
LANG=zh_CN.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
LOGNAME=root
XDG_DATA_DIRS=/root/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
SSH_CONNECTION=192.168.181.1 58563 192.168.181.100 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/0
_=/usr/bin/env
-- printenv prints global environment variables; giving it a name such as USER selects that one variable, so no $ is needed
[root@hadoop100 ~]# printenv USER
root
[root@hadoop100 ~]# printenv SHELL
/bin/bash
[root@hadoop100 ~]# printenv
...(same listing as env above, except that the last line is _=/usr/bin/printenv)
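A small sketch of how these predefined variables are typically used inside a script (the file name sysvar.sh is my own example, not from the original post):

#!/bin/bash
# sysvar.sh - print a few predefined system variables
echo "current user   : $USER"
echo "home directory : $HOME"
echo "working dir    : $PWD"
echo "login shell    : $SHELL"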
The system's predefined variables are not only global environment variables; there are also local shell variables.
The set command shows all variables of the current shell process (global plus local), and the listing also includes the shell functions that have been defined.
[root@hadoop100 ~]# set
ABRT_DEBUG_LOG=/dev/null
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_COMPLETION_COMPAT_DIR=/etc/bash_completion.d
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="2" [2]="46" [3]="2" [4]="release" [5]="x86_64-redhat-linux-gnu")
BASH_VERSION='4.2.46(2)-release'
COLUMNS=120
DIRSTACK=()
EUID=0
-- (output abridged: the listing continues for a very long time with global variables such as
-- HOME, PATH, PWD, SHELL and USER, local variables such as HISTFILE, IFS, PPID, PS1 and UID,
-- very long values such as GLUSTER_COMMAND_TREE and LS_COLORS, and finally the source of every
-- defined shell function, e.g. _filedir (), _init_completion (), _known_hosts_real ())
1>&2; return 2; fi; while (( $# )); do case $1 in -a*) [[ -n ${1#-a} ]] || { echo "bash: ${FUNCNAME[0]}: \`$1': missing" "number specifier" 1>&2; return 1 }; printf %d "${1#-a}" &>/dev/null || { echo "bash:" "${FUNCNAME[0]}: \`$1': invalid number specifier" 1>&2; return 1 }; [[ -n "$2" ]] && unset -v "$2" && eval $2=\(\"\${@:3:${1#-a}}\"\) && shift $((${1#-a} + 2)) || { echo "bash: ${FUNCNAME[0]}:" "\`$1${2+ }$2': missing argument(s)" 1>&2; return 1 } ;; -v) [[ -n "$2" ]] && unset -v "$2" && eval $2=\"\$3\" && shift 3 || { echo "bash: ${FUNCNAME[0]}: $1: missing" "argument(s)" 1>&2; return 1 } ;; *) echo "bash: ${FUNCNAME[0]}: $1: invalid option" 1>&2; return 1 ;; esac; done } _usb_ids () { COMPREPLY+=($( compgen -W "$( PATH="$PATH:/sbin" lsusb | awk '{print $6}' )" -- "$cur" )) } _user_at_host () { local cur prev words cword; _init_completion -n : || return; if [[ $cur == *@* ]]; then _known_hosts_real "$cur"; else COMPREPLY=($( compgen -u -- "$cur" )); fi; return 0 } _usergroup () { if [[ $cur = *\\\\* || $cur = *:*:* ]]; then return; else if [[ $cur = *\\:* ]]; then local prefix; prefix=${cur%%*([^:])}; prefix=${prefix//\\}; local mycur="${cur#*[:]}"; if [[ $1 == -u ]]; then _allowed_groups "$mycur"; else local IFS=' '; COMPREPLY=($( compgen -g -- "$mycur" )); fi; COMPREPLY=($( compgen -P "$prefix" -W "${COMPREPLY[@]}" )); else if [[ $cur = *:* ]]; then local mycur="${cur#*:}"; if [[ $1 == -u ]]; then _allowed_groups "$mycur"; else local IFS=' '; COMPREPLY=($( compgen -g -- "$mycur" )); fi; else if [[ $1 == -u ]]; then _allowed_users "$cur"; else local IFS=' '; COMPREPLY=($( compgen -u -- "$cur" )); fi; fi; fi; fi } _userland () { local userland=$( uname -s ); [[ $userland == @(Linux|GNU/*) ]] && userland=GNU; [[ $userland == $1 ]] } _variables () { if [[ $cur =~ ^(\$\{?)([A-Za-z0-9_]*)$ ]]; then [[ $cur == *{* ]] && local suffix=} || local suffix=; COMPREPLY+=($( compgen -P ${BASH_REMATCH[1]} -S "$suffix" -v -- "${BASH_REMATCH[2]}" )); return 0; fi; return 1 } _xfunc () { set -- "$@"; local srcfile=$1; shift; declare -F $1 &>/dev/null || { local compdir=./completions; [[ $BASH_SOURCE == */* ]] && compdir="${BASH_SOURCE%/*}/completions"; . 
"$compdir/$srcfile" }; "$@" } _xinetd_services () { local xinetddir=/etc/xinetd.d; if [[ -d $xinetddir ]]; then local restore_nullglob=$(shopt -p nullglob); shopt -s nullglob; local -a svcs=($( printf '%s\n' $xinetddir/!($_backup_glob) )); $restore_nullglob; COMPREPLY+=($( compgen -W '${svcs[@]#$xinetddir/}' -- "$cur" )); fi } _yu_builddep () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; _yum_complete_baseopts "$cur" "$prev" && return 0; case $prev in --target) declare -F _rpm_buildarchs &>/dev/null && _rpm_buildarchs; return 0 ;; esac; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '$( _yum_baseopts 2>/dev/null )' -- "$cur" )); return 0; fi; COMPREPLY=($( compgen -f -o plusdirs -X "!*.spec" -- "$cur" )); [[ $cur != */* && $cur != ~* ]] && _yum_list all "$cur" 2> /dev/null } _yu_debug_dump () { COMPREPLY=(); case $3 in -h | --help) return 0 ;; esac; if [[ $2 == -* ]]; then COMPREPLY=($( compgen -W '--help --norepos' -- "$2" )); return 0; fi; COMPREPLY=($( compgen -f -o plusdirs -- "$cur" )) } _yu_debuginfo_install () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; _yum_complete_baseopts "$cur" "$prev" && return 0; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '$( _yum_baseopts 2>/dev/null ) --no-debuginfo-plugin' -- "$cur" )); return 0; fi; _yum_list all "$cur" } _yu_init_completion () { if declare -F _get_comp_words_by_ref &>/dev/null; then _get_comp_words_by_ref -n = cur prev words; else cur=$1 prev=$2 words=("${COMP_WORDS[@]}"); fi; declare -F _split_longopt &>/dev/null && _split_longopt && split=true } _yu_package_cleanup () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; _yum_complete_baseopts "$cur" "$prev" 2> /dev/null && return 0; case $prev in --leaf-regex | --qf | --queryformat) return 0 ;; --count) COMPREPLY=($( compgen -W '1 2 3 4 5 6 7 8 9' -- "$cur" )); return 0 ;; esac; $split && return 0; COMPREPLY=($( compgen -W '$( _yum_baseopts 2>/dev/null ) --problems --queryformat --orphans --dupes --cleandupes --noscripts --leaves --all --leaf-regex --exclude-devel --exclude-bin --oldkernels --count --keepdevel' -- "$cur" )) } _yu_repo_graph () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help) return 0 ;; --repoid) _yum_helper repolist all "$cur" 2> /dev/null; return 0 ;; -c) COMPREPLY=($( compgen -f -o plusdirs -X '!*.conf' -- "$cur" )); return 0 ;; esac; $split && return 0; COMPREPLY=($( compgen -W '--help --repoid -c' -- "$cur" )) } _yu_repo_rss () { COMPREPLY=(); case $3 in -h | --help | -l | -t | -d | -r | -a) return 0 ;; -f) COMPREPLY=($( compgen -f -o plusdirs -X '!*.xml' -- "$cur" )); return 0 ;; -c) COMPREPLY=($( compgen -f -o plusdirs -X '!*.conf' -- "$cur" )); return 0 ;; esac; COMPREPLY=($( compgen -W '--help -f -l -t -d -r --tempcache -g -a -c' -- "$2" )); [[ $2 == -* ]] || _yum_helper repolist all "$2" 2> /dev/null || return 0 } _yu_repoclosure () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help | -a | --arch | --basearch | --repofrompath) return 0 ;; -c | --config) COMPREPLY=($( compgen -f -o plusdirs -X '!*.conf' -- "$cur" )); return 0 ;; -l | --lookaside | -r | --repoid) _yum_helper repolist all "$cur" 2> /dev/null; return 0 ;; -p | --pkg) _yum_list all "$cur" 2> /dev/null; return 0 ;; -g | --group) _yum_helper groups list all "$cur" 2> /dev/null; return 0 ;; esac; $split && return 0; COMPREPLY=($( compgen -W '--help --config --arch --basearch --builddeps 
--lookaside --repoid --tempcache --quiet --newest --repofrompath --pkg --group' -- "$cur" )) } _yu_repodiff () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help | --version | -n | --new | -o | --old | -a | --archlist) return 0 ;; esac; $split && return 0; COMPREPLY=($( compgen -W '--version --help --new --old --quiet --archlist --compare-arch --size --downgrade --simple' -- "$cur" )) } _yu_repomanage () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help) return 0 ;; -k | --keep) COMPREPLY=($( compgen -W '1 2 3 4 5 6 7 8 9' -- "$cur" )); return 0 ;; esac; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '--old --new --space --keep --nocheck --help' -- "$cur" )); return 0; fi; COMPREPLY=($( compgen -d -- "$cur" )) } _yu_repoquery () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; local word groupmode=false; for word in "${words[@]}"; do case $word in -g | --group) groupmode=true; break ;; esac; done; case $prev in -h | --help | --version | --qf | --queryformat | --archlist | --repofrompath | --setopt) return 0 ;; -f | --file) COMPREPLY=($( compgen -f -o plusdirs -- "$cur" )); return 0 ;; -l | --list | -i | --info | -R | --requires) if $groupmode; then _yum_helper groups list all "$cur" 2> /dev/null; else declare -F _yum_atgroups &>/dev/null && _yum_atgroups "$cur" || _yum_list all "$cur" 2> /dev/null; fi; return 0 ;; --grouppkgs) COMPREPLY=($( compgen -W 'all default optional mandatory' -- "$cur" )); return 0 ;; --pkgnarrow) COMPREPLY=($( compgen -W 'all available updates installed extras obsoletes recent repos' -- "$cur" )); return 0 ;; --repoid) _yum_helper repolist all "$cur" 2> /dev/null; return 0 ;; --enablerepo) _yum_helper repolist disabled "$cur" 2> /dev/null; return 0 ;; --disablerepo) _yum_helper repolist enabled "$cur" 2> /dev/null; return 0 ;; -c | --config) COMPREPLY=($( compgen -f -o plusdirs -X '!*.conf' -- "$cur" )); return 0 ;; --level) COMPREPLY=($( compgen -W '{1..9} all' -- "$cur" )); return 0 ;; --output) COMPREPLY=($( compgen -W 'text ascii-tree dot-tree' -- "$cur" )); return 0 ;; --search-fields) COMPREPLY=($( compgen -W 'name summary description' -- "$cur" )); return 0 ;; --installroot) COMPREPLY=($( compgen -d -- "$cur" )); return 0 ;; esac; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '--version --help --list --info --file --queryformat --groupmember --all --requires --provides --obsoletes --conflicts --changelog --location --nevra --envra --nvr --source --srpm --resolve --exactdeps --recursive --whatprovides --whatrequires --whatobsoletes --whatconflicts --group --grouppkgs --archlist --pkgnarrow --installed --show-duplicates --repoid --enablerepo --disablerepo --repofrompath --plugins --quiet --verbose --cache --tempcache --querytags --config --level --output --search --search-fields --setopt --installroot' -- "$cur" )); return 0; fi; declare -F _yum_atgroups &>/dev/null && _yum_atgroups "$cur" || _yum_list all "$cur" 2> /dev/null } _yu_verifytree () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help | -t | --testopia) return 0 ;; esac; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '--help --checkall --testopia --treeinfo' -- "$cur" )); return 0; fi; COMPREPLY=($( compgen -d -- "$cur" )) } _yu_yumdb () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; case $prev in -h | --help | -version) return 0 ;; -c | --config) 
COMPREPLY=($( compgen -f -o plusdirs -X '!*.conf' -- "$cur" )); return 0 ;; shell) COMPREPLY=($( compgen -f -o plusdirs -- "$cur" )); return 0 ;; esac; $split && return 0; if [ $COMP_CWORD -le 1 ]; then COMPREPLY=($( compgen -W 'get set del rename rename-force copy search exist unset info sync undeleted shell --version --help --noplugins --config' -- "$cur" )); fi } _yu_yumdownloader () { local cur prev words=() split=false; _yu_init_completion "$2" "$3"; _yum_complete_baseopts "$cur" "$prev" 2> /dev/null && return 0; case $prev in --destdir) COMPREPLY=($( compgen -d -- "$cur" )); return 0 ;; --archlist) return 0 ;; esac; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '$( _yum_baseopts 2>/dev/null ) --destdir --urls --resolve --source --archlist' -- "$cur" )); return 0; fi; _yum_list all "$cur" } _yum () { COMPREPLY=(); local yum=$1 cur=$2 prev=$3 words=("${COMP_WORDS[@]}"); declare -F _get_comp_words_by_ref &>/dev/null && _get_comp_words_by_ref -n = cur prev words; local cmds=(check check-update clean deplist distro-sync downgrade groups help history info install list load-transaction makecache provides reinstall remove repolist search shell update upgrade version); local i c cmd subcmd; for ((i=1; i < ${#words[@]}-1; i++ )) do [[ -n $cmd ]] && subcmd=${words[i]} && break; for c in ${cmds[@]} check-rpmdb distribution-synchronization erase group groupinfo groupinstall grouplist groupremove groupupdate grouperase install-na load-ts localinstall localupdate whatprovides; do [[ ${words[i]} == $c ]] && cmd=$c && break; done; done; case $cmd in check | check-rpmdb) COMPREPLY=($( compgen -W 'dependencies duplicates all' -- "$cur" )); return 0 ;; check-update | makecache | resolvedep) return 0 ;; clean) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'expire-cache packages headers metadata cache dbcache all' -- "$cur" )); return 0 ;; deplist) COMPREPLY=($( compgen -f -o plusdirs -X '!*.[rs]pm' -- "$cur" )); _yum_list all "$cur"; return 0 ;; distro-sync | distribution-synchronization) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'full different' -- "$cur" )); _yum_list installed "$cur"; return 0 ;; downgrade | reinstall) if ! 
_yum_atgroups "$cur"; then _yum_binrpmfiles "$cur"; _yum_list installed "$cur"; fi; return 0 ;; erase | remove) _yum_atgroups "$cur" || _yum_list installed "$cur"; return 0 ;; group*) if [[ ( $cmd == groups || $cmd == group ) && $prev == $cmd ]]; then COMPREPLY=($( compgen -W 'info install list remove summary' -- "$cur" )); else _yum_helper groups list all "$cur"; fi; return 0 ;; help) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W '${cmds[@]}' -- "$cur" )); return 0 ;; history) if [[ $prev == $cmd ]]; then COMPREPLY=($( compgen -W 'info list packages-list packages-info summary addon-info redo undo rollback new sync stats' -- "$cur" )); return 0; fi; case $subcmd in undo | repeat | addon | addon-info | rollback) if [[ $prev == $subcmd ]]; then COMPREPLY=($( compgen -W "last" -- "$cur" )); _yum_transactions; fi ;; redo) case $prev in redo) COMPREPLY=($( compgen -W "force-reinstall force-remove last" -- "$cur" )); _yum_transactions ;; reinstall | force-reinstall | remove | force-remove) COMPREPLY=($( compgen -W "last" -- "$cur" )); _yum_transactions ;; esac ;; package-list | pkg | pkgs | pkg-list | pkgs-list | package | packages | packages-list | pkg-info | pkgs-info | package-info | packages-info) _yum_list available "$cur" ;; info | list | summary) if [[ $subcmd != info ]]; then COMPREPLY=($( compgen -W "all" -- "$cur" )); [[ $cur != all ]] && _yum_list available "$cur"; else _yum_list available "$cur"; fi; _yum_transactions ;; sync | synchronize) _yum_list installed "$cur" ;; esac; return 0 ;; info) _yum_list all "$cur"; return 0 ;; install) if ! _yum_atgroups "$cur"; then _yum_binrpmfiles "$cur"; _yum_list available "$cur"; fi; return 0 ;; install-na) _yum_list available "$cur"; return 0 ;; list) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'all available updates installed extras obsoletes recent' -- "$cur" )); return 0 ;; load-transaction | load-ts) COMPREPLY=($( compgen -f -o plusdirs -X '!*.yumtx' -- "$cur" )); return 0 ;; localinstall | localupdate) _yum_binrpmfiles "$cur"; return 0 ;; provides | whatprovides) COMPREPLY=($( compgen -f -o plusdirs -- "$cur" )); return 0 ;; repolist) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'all enabled disabled' -- "$cur" )); return 0 ;; search) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'all' -- "$cur" )); return 0 ;; shell) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -f -o plusdirs -- "$cur" )); return 0 ;; update | upgrade) if ! 
_yum_atgroups "$cur"; then _yum_binrpmfiles "$cur"; _yum_list updates "$cur"; fi; return 0 ;; version) [[ $prev == $cmd ]] && COMPREPLY=($( compgen -W 'all installed available nogroups grouplist groupinfo' -- "$cur" )); return 0 ;; esac; local split=false; declare -F _split_longopt &>/dev/null && _split_longopt && split=true; _yum_complete_baseopts "$cur" "$prev" && return 0; $split && return 0; if [[ $cur == -* ]]; then COMPREPLY=($( compgen -W '$( _yum_baseopts )' -- "$cur" )); return 0; fi; COMPREPLY=($( compgen -W '${cmds[@]}' -- "$cur" )) } _yum_atgroups () { if [[ $1 == \@* ]]; then _yum_helper groups list all "${1:1}"; COMPREPLY=("${COMPREPLY[@]/#/@}"); return 0; fi; return 1 } _yum_baseopts () { local opts='--help --tolerant --cacheonly --config --randomwait --debuglevel --showduplicates --errorlevel --rpmverbosity --quiet --verbose --assumeyes --assumeno --version --installroot --enablerepo --disablerepo --exclude --disableexcludes --obsoletes --noplugins --nogpgcheck --skip-broken --color --releasever --setopt --downloadonly --downloaddir --disableincludes'; [[ $COMP_LINE == *--noplugins* ]] || opts+=" --disableplugin --enableplugin"; printf %s "$opts" } _yum_binrpmfiles () { COMPREPLY+=($( compgen -f -o plusdirs -X '!*.rpm' -- "$1" )); COMPREPLY=($( compgen -W '"${COMPREPLY[@]}"' -X '*.src.rpm' )); COMPREPLY=($( compgen -W '"${COMPREPLY[@]}"' -X '*.nosrc.rpm' )) } _yum_complete_baseopts () { case $2 in -d | --debuglevel | -e | --errorlevel) COMPREPLY=($( compgen -W '0 1 2 3 4 5 6 7 8 9 10' -- "$1" )); return 0 ;; --rpmverbosity) COMPREPLY=($( compgen -W 'info critical emergency error warn debug' -- "$1" )); return 0 ;; -c | --config) COMPREPLY=($( compgen -f -o plusdirs -X "!*.conf" -- "$1" )); return 0 ;; --installroot | --downloaddir) COMPREPLY=($( compgen -d -- "$1" )); return 0 ;; --enablerepo) _yum_helper repolist disabled "$1"; return 0 ;; --disablerepo) _yum_helper repolist enabled "$1"; return 0 ;; --disableexcludes | --disableincludes) _yum_helper repolist all "$1"; local main=; [[ $2 == *excludes ]] && main=main; COMPREPLY=($( compgen -W '${COMPREPLY[@]} all $main' -- "$1" )); return 0 ;; --enableplugin) _yum_plugins 0 "$1"; return 0 ;; --disableplugin) _yum_plugins 1 "$1"; return 0 ;; --color) COMPREPLY=($( compgen -W 'always auto never' -- "$1" )); return 0 ;; -R | --randomwait | -x | --exclude | -h | --help | --version | --releasever | --cve | --bz | --advisory | --tmprepo | --verify-filenames | --setopt) return 0 ;; --download-order) COMPREPLY=($( compgen -W 'default smallestfirst largestfirst' -- "$1" )); return 0 ;; --override-protection) _yum_list installed "$1"; return 0 ;; --verify-configuration-files) COMPREPLY=($( compgen -W '1 0' -- "$1" )); return 0 ;; esac; return 1 } _yum_helper () { local IFS=' '; if [[ -n "$YUM_CACHEDIR" && "$1 $2" == "list available" ]]; then for db in $(find "$YUM_CACHEDIR" -name primary_db.sqlite); do COMPREPLY+=($( sqlite3 "$db" "SELECT name||'.'||arch FROM packages WHERE name LIKE '$3%'" )); done; return; fi; COMPREPLY+=($( /usr/share/yum-cli/completion-helper.py -d 0 -C "$@" 2>/dev/null )) } _yum_list () { [[ $2 == */* || $2 == [.~-]* ]] && return; [[ $1 != "installed" && ${#2} -lt 1 ]] && return; _yum_helper list "$@" } _yum_plugins () { local val; [[ $1 -eq 1 ]] && val='\(1\|yes\|true\|on\)' || val='\(0\|no\|false\|off\)'; COMPREPLY+=($( compgen -W '$( command grep -il "^\s*enabled\s*=\s*$val" \ /etc/yum/pluginconf.d/*.conf 2>/dev/null \ | sed -ne "s|^.*/\([^/]\{1,\}\)\.conf$|\1|p" )' -- "$2" )) } _yum_transactions () { 
COMPREPLY+=($( compgen -W "$( $yum -d 0 -C history 2>/dev/null | sed -ne 's/^[[:space:]]*\([0-9]\{1,\}\).*/\1/p' )" -- "$cur" )) } command_not_found_handle () { local runcnf=1; local retval=127; [[ $- =~ i ]] || runcnf=0; [[ ! -S /run/dbus/system_bus_socket ]] && runcnf=0; [[ ! -x '/usr/libexec/packagekitd' ]] && runcnf=0; [[ -n ${COMP_CWORD-} ]] && runcnf=0; if [ $runcnf -eq 1 ]; then '/usr/libexec/pk-command-not-found' "$@"; retval=$?; else if [[ -n "${BASH_VERSION-}" ]]; then printf 'bash: %scommand not found\n' "${1:+$1: }" 1>&2; fi; fi; return $retval } dequote () { eval printf %s "$1" 2> /dev/null } quote () { local quoted=${1//\'/\'\\\'\'}; printf "'%s'" "$quoted" } quote_readline () { local quoted; _quote_readline_by_ref "$1" ret; printf %s "$ret" }
自定义变量
基本语法
1、定义变量:变量名=变量值(=号前后不能有空格,如果值有空格,值应该使用单引号或者双引号包起来)
2、撤销变量:unset 变量名
3、声明静态变量:readonly 变量名,不能unset
4、将局部变量导出为全局变量:export 变量名
相关规则
变量名称可以由字母、数字、下划线组成,但是不能以数字开头,环境变量名建议大写;
赋值时,=号两侧不能有空格,若值有空格,则必须用单引号或者双引号把值括起来;
在bash中,变量默认类型是字符串类型,无法直接进行数值运算,使用$((运算表达式))、$[运算表达式],才可以进行数值运算(综合示例见下面的小脚本);
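把上面几条规则串成一个可以直接运行的小脚本作为示意(变量名都是随意取的),方便对照:
#!/bin/bash
# =号前后不能有空格;值带空格时用引号括起来
name="hello world"
echo $name
# 变量默认是字符串,数值运算要用 $((...)) 或 $[...]
a=3
b=$((a+2))
echo $b
# export 把局部变量导出为全局变量,子shell里才能读到
export name
# readonly 声明只读变量,之后不能修改、也不能unset
readonly b
# unset 撤销变量(只读变量除外)
unset a
echo $a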
-- 自定义局部变量:a=2 [root@hadoop100 scripts]# a=2 -- 将其输出 [root@hadoop100 scripts]# echo $a 2 -- 未被定义的局部变量 my_var [root@hadoop100 scripts]# echo $my_var [root@hadoop100 scripts]# my_var=hello [root@hadoop100 scripts]# echo $my_var hello -- 直接对局部变量进行更改 [root@hadoop100 scripts]# my_var=HELLO [root@hadoop100 scripts]# echo $my_var HELLO -- 定义局部变量时,=号之间不能有空格 [root@hadoop100 scripts]# my_var = HELLO bash: my_var: 未找到命令... -- 定义局部变量时,=号之间不能有空格,若变量值有空格,需要使用单引号、双引号 [root@hadoop100 scripts]# my_var=hello world bash: world: 未找到命令... -- 单引号、双引号都可以 [root@hadoop100 scripts]# my_var="hello world" [root@hadoop100 scripts]# echo $my_var hello world -- env命令查询所有全局变量,没有定义的my_var,因为是一个局部变量 [root@hadoop100 scripts]# env | grep my_var -- set命令查询所有变量,包含局部变量、全局变量 [root@hadoop100 scripts]# set | grep my_var my_var='hello world' -- 将局部变量my_var导出,变成全局变量 [root@hadoop100 scripts]# export my_var [root@hadoop100 scripts]# env | grep my_var my_var=hello world [root@hadoop100 scripts]# set | grep my_var my_var='hello world' [root@hadoop100 scripts]# echo $my_var hello world -- 进入到一个子shell进程中 [root@hadoop100 scripts]# bash [root@hadoop100 scripts]# ps -f UID PID PPID C STIME TTY TIME CMD root 3160 3152 0 16:48 pts/0 00:00:00 -bash root 3505 3160 1 17:07 pts/0 00:00:00 bash root 3533 3505 0 17:07 pts/0 00:00:00 ps -f -- 打印当前全局变量 my_var [root@hadoop100 scripts]# echo $my_var hello world -- 在子shell进程中定义局部变量 [root@hadoop100 scripts]# my_var="HELLO WORLD" [root@hadoop100 scripts]# echo $my_var HELLO WORLD [root@hadoop100 scripts]# exit exit -- 子shell进程中定义的局部变量不会影响到全局变量 [root@hadoop100 scripts]# echo $my_var hello world [root@hadoop100 scripts]# ps -f UID PID PPID C STIME TTY TIME CMD root 3160 3152 0 16:48 pts/0 00:00:00 -bash root 3542 3160 0 17:08 pts/0 00:00:00 ps -f [root@hadoop100 scripts]# bash [root@hadoop100 scripts]# ps -f UID PID PPID C STIME TTY TIME CMD root 3160 3152 0 16:48 pts/0 00:00:00 -bash root 3543 3160 0 17:08 pts/0 00:00:00 bash root 3571 3543 0 17:08 pts/0 00:00:00 ps -f [root@hadoop100 scripts]# echo $my_var hello world [root@hadoop100 scripts]# my_var="HELLO WORLD" [root@hadoop100 scripts]# echo $my_var HELLO WORLD -- 子shell进程中定义局部变量,并导出成全局变量 [root@hadoop100 scripts]# export my_var [root@hadoop100 scripts]# echo $my_var HELLO WORLD [root@hadoop100 scripts]# exit exit -- 依旧影响不到全局变量 [root@hadoop100 scripts]# echo $my_var hello world
编辑hello.sh脚本
#!/bin/bash echo "hello world" echo $my_var echo $new_var
-- my_var是一个全局变量 [root@hadoop100 scripts]# env | grep my_var my_var=hello world -- 创建一个局部变量 new_var [root@hadoop100 scripts]# new_var="hello linux" -- 使用bash命令执行脚本,使用子shell进程去执行,获取不到当前shell下的new_var变量 [root@hadoop100 scripts]# bash hello.sh hello world hello world -- 之间写路径也是一样,创建了一个子shell进程去执行,获取不到局部变量 new_var [root@hadoop100 scripts]# ./hello.sh -bash: ./hello.sh: 权限不够 [root@hadoop100 scripts]# chmod +x hello.sh [root@hadoop100 scripts]# ./hello.sh hello world hello world [root@hadoop100 scripts]# chmod -x hello.sh -- 使用source命令去执行脚本,在当前的shell进程下执行,可以获得当前局部变量new_var [root@hadoop100 scripts]# source hello.sh hello world hello world hello linux -- 将局部变量new_var导出为全局变量 [root@hadoop100 scripts]# export new_var -- 可以访问 [root@hadoop100 scripts]# bash hello.sh hello world hello world hello linux -- 可以访问 [root@hadoop100 scripts]# ./hello.sh -bash: ./hello.sh: 权限不够 [root@hadoop100 scripts]# chmod +x hello.sh [root@hadoop100 scripts]# ./hello.sh hello world hello world hello linux [root@hadoop100 scripts]# source hello.sh hello world hello world hello linux
[root@hadoop100 scripts]# var=1 [root@hadoop100 scripts]# echo $var 1 [root@hadoop100 scripts]# var=1+2 [root@hadoop100 scripts]# echo $var 1+2 [root@hadoop100 scripts]# var=$((1+2)) [root@hadoop100 scripts]# echo $var 3 [root@hadoop100 scripts]# var=$[2+3] [root@hadoop100 scripts]# echo $var 5 [root@hadoop100 scripts]# unset var [root@hadoop100 scripts]# echo $var [root@hadoop100 scripts]# var=$[2+3] [root@hadoop100 scripts]# echo $var 5 [root@hadoop100 scripts]# readonly var [root@hadoop100 scripts]# echo $var 5 [root@hadoop100 scripts]# var=3 -bash: var: 只读变量 [root@hadoop100 scripts]# unset var -bash: unset: var: 无法反设定: 只读 variable
特殊变量
$n
n为数字,$0代表脚本名称,$1-$9代表第一到第九个参数,十以上的参数需要用大括号包含,例如:${10}
编辑hello.sh脚本如下
#!/bin/bash echo "hello world" echo "hello $1"
传递第一个参数$1(xiaoming、xiaoliang)进行验证
[root@hadoop100 scripts]# ll 总用量 4 -rw-r--r--. 1 root root 47 10月 12 13:59 hello.sh [root@hadoop100 scripts]# chmod +x hello.sh [root@hadoop100 scripts]# ./hello.sh hello world hello [root@hadoop100 scripts]# ./hello.sh xiaoming hello world hello xiaoming [root@hadoop100 scripts]# ./hello.sh xiaoliang hello world hello xiaoliang
编辑hello.sh脚本如下
#!/bin/bash -- 使用单引号,$n不会被识别成一个参数,而是会原样输出 echo '===========$n===========' echo script name:$0 echo lst parameter:$1 echo 2nd parameter:$2
[root@hadoop100 scripts]# ll 总用量 4 -rwxr-xr-x. 1 root root 108 10月 12 15:39 hello.sh [root@hadoop100 scripts]# ./hello.sh abc def -- 脚本第一行原样输出 ===========$n=========== -- 脚本第二行获取到了脚本名称,输出当前脚本名称(带路径) script name:./hello.sh -- 脚本第三行获取到了第一个输入参数 lst parameter:abc -- 脚本第四行获取到了第二个输入参数 2nd parameter:def
$#
获取所有输入参数的个数,通常用于循环、判断当前脚本输入参数个数是否正确等
编辑hello.sh脚本
#!/bin/bash echo '===========$n===========' echo script name:$0 echo lst parameter:$1 echo 2nd parameter:$2 echo '===========$#===========' echo parameter numbers:$#
[root@hadoop100 scripts]# ./hello.sh ===========$n=========== script name:./hello.sh lst parameter: 2nd parameter: ===========$#=========== -- 不输入参数时,获取的参数个数是0 parameter numbers:0 [root@hadoop100 scripts]# ./hello.sh abc def ===========$n=========== script name:./hello.sh lst parameter:abc 2nd parameter:def ===========$#=========== -- 输入2个参数时,获取的参数个数是2 parameter numbers:2
$*、$@
$*是获取所有参数,将参数看成一个整体
$@是获取所有参数,将每个参数区别对待
编辑hello.sh
#!/bin/bash echo '===========$n===========' echo script name:$0 echo lst parameter:$1 echo 2nd parameter:$2 echo '===========$#===========' echo parameter numbers:$# echo '===========$*===========' echo $* echo '===========$@===========' echo $@
-- 此时没有进行循环,还看不出效果 [root@hadoop100 scripts]# ./hello.sh abc def ===========$n=========== script name:./hello.sh lst parameter:abc 2nd parameter:def ===========$#=========== parameter numbers:2 ===========$*=========== abc def ===========$@=========== abc def
$*、$@的区别、使用for循环测试
新建 for_test.sh脚本如下所示
#!/bin/bash echo '=========$*=========' for param in $* do echo $param done echo '=========$@=========' for param in $@ do echo $param done
[root@hadoop100 scripts]# ./for_test.sh a b c d e =========$*========= a b c d e =========$@========= a b c d e
当前输出$*与$@没有任何区别,但是将两个变量使用双引号包含起来时,$*会把所有参数看成一个整体,而$@则不会,编辑for_test.sh脚本如下所示
#!/bin/bash echo '=========$*=========' for param in "$*" do echo $param done echo '=========$@=========' for param in "$@" do echo $param done
[root@hadoop100 scripts]# ./for_test.sh a b c d e =========$*========= a b c d e =========$@========= a b c d e
$?
表示最后一次执行的命令的返回状态:返回0表示上一条命令执行成功;返回非0则表示上一条命令执行失败(具体数值由命令自身决定);
-- 正确执行脚本 [root@hadoop100 scripts]# ./hello.sh ===========$n=========== script name:./hello.sh lst parameter: 2nd parameter: ===========$#=========== parameter numbers:0 ===========$*=========== ===========$@=========== -- 返回0,说明上一次命令执行正确 [root@hadoop100 scripts]# echo $? 0 -- 直接输入脚本名称,会被识别成一个命令,但未找到 [root@hadoop100 scripts]# hello.sh bash: hello.sh: 未找到命令... -- 返回127,返回非0表示上一次命令执行不正确 [root@hadoop100 scripts]# echo $? 127 [root@hadoop100 scripts]# ll 总用量 4 -rwxr-xr-x. 1 root root 246 10月 12 15:53 hello.sh [root@hadoop100 scripts]# echo $? 0 -- 进入一个不存在的目录 [root@hadoop100 scripts]# cd /not -bash: cd: /not: 没有那个文件或目录 [root@hadoop100 scripts]# echo $? -- 返回1,返回非0表示上一次命令执行不正确 1
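也可以把 $? 写进脚本,紧跟在命令后面输出来观察返回状态,下面是一个小示意(/not_exist 是随意举的一个不存在的目录):
#!/bin/bash
# 执行一条能成功的命令,$? 为0
ls /root/scripts
echo "上一条命令的返回状态:$?"
# 执行一条会失败的命令,$? 为非0
ls /not_exist
echo "上一条命令的返回状态:$?"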
运算符
$((运算表达式))、$[运算表达式]
expr命令(了解)
-- 参数中间不加空格,会被识别为字符串 [root@hadoop100 scripts]# expr 1+2 1+2 [root@hadoop100 scripts]# expr 1 + 2 3 [root@hadoop100 scripts]# expr 5 - 1 4 -- *号在shell中具有特殊含义,例如通配符,作为乘法时需要使用转义 [root@hadoop100 scripts]# expr 5 * 2 expr: 语法错误 [root@hadoop100 scripts]# expr 5 \* 2 10 [root@hadoop100 scripts]# echo $((5*2)) 10 [root@hadoop100 scripts]# echo $[5*2] 10 -- 使用$((运算符))或者$[运算符]来进行赋值 [root@hadoop100 scripts]# b=$((2+3)) [root@hadoop100 scripts]# echo $b 5 [root@hadoop100 scripts]# b=$[3+3] [root@hadoop100 scripts]# echo $b 6 -- 使用expr进行赋值报错写法 [root@hadoop100 scripts]# b=expr 5 \* 2 bash: 5: 未找到命令... -- 使用expr进行赋值报错写法,被解析成字符串 [root@hadoop100 scripts]# b="expr 5 \* 2" [root@hadoop100 scripts]# echo $b expr 5 \* 2 -- 使用expr进行赋值,需要使用"命令置换",将后面的运算符使用 $(运算符) [root@hadoop100 scripts]# b=$(expr 5 \* 2) [root@hadoop100 scripts]# echo $b 10 -- 使用expr进行赋值,需要使用"命令置换",将后面的运算符使用 ``(反引号) [root@hadoop100 scripts]# b=`expr 5 \* 3` [root@hadoop100 scripts]# echo $b 15
[root@hadoop100 scripts]# s=$[(2+3)*4] [root@hadoop100 scripts]# echo $s 20
编写一个add.sh脚本,内容如下
-- 指定使用bash解析器 #!/bin/bash -- 定义sum变量并赋值:第一个参数与第二个参数的和 sum=$[$1+$2] -- 输出变量sum的值 echo sum=$sum
[root@hadoop100 scripts]# ll 总用量 8 -rwxr-xr-x. 1 root root 39 10月 12 16:32 add.sh -rwxr-xr-x. 1 root root 246 10月 12 15:53 hello.sh [root@hadoop100 scripts]# ./add.sh 12 13 sum=25
条件判断
基本语法:
1、test 条件判断表达式
2、[ 条件判断表达式 ]
中括号与条件判断表达式之间必须有空格;条件非空即为true,例如 [ ithailin ] 返回true,[ ] 返回false
[root@hadoop100 scripts]# b=hello [root@hadoop100 scripts]# echo $b hello -- 判断 $b是否等于hello [root@hadoop100 scripts]# test $b = hello -- 没有返回,可以根据 $? 获取上一个命令是否执行成功 [root@hadoop100 scripts]# echo $? -- 返回0,表达式为真 0 [root@hadoop100 scripts]# test $b = Hello -- 返回1,表达式为假 [root@hadoop100 scripts]# echo $? 1 [root@hadoop100 scripts]# echo $b hello -- 是中括号,括号、=号,中间需要有空格 [root@hadoop100 scripts]# [ $b = hello ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ $b = Hello ] [root@hadoop100 scripts]# echo $? 1 -- 没有空格会被识别为一个值,返回真 [root@hadoop100 scripts]# [ $b=Hello ] [root@hadoop100 scripts]# echo $? 0 -- 中括号需要间隔空格 [root@hadoop100 scripts]# [$b = Hello] bash: [hello: 未找到命令... [root@hadoop100 scripts]# [ abcdefg ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ ] [root@hadoop100 scripts]# echo $? 1
不等于使用 != 比较
[root@hadoop100 scripts]# echo $b hello -- $b 不等于 hello,表达式为假,返回1 [root@hadoop100 scripts]# [ $b != hello ] [root@hadoop100 scripts]# echo $? 1 [root@hadoop100 scripts]# [ $b != Hello ] [root@hadoop100 scripts]# echo $? 0
字符串之间的比较
用符号“=”判断相等,用符号“!=”判断不等
两个整数之间的比较
-eq 等于(equal)
-ne 不等于(not equal)
-lt 小于(less than)
-le 小于等于(less equal)
-gt 大于(greater than)
-ge 大于等于(greater equal)
按照文件权限进行判断
-r 有读的权限(read)
-w 有写的权限(write)
-x 有执行的权限(execute)
按照文件类型进行判断
-e 文件存在(existence)
-f 文件存在并且是一个常规文件(file)
-d 文件存在并且是一个目录(directory)
[root@hadoop100 scripts]# ll 总用量 8 -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 246 10月 13 14:56 hello.sh [root@hadoop100 scripts]# [ 2 = 8 ] [root@hadoop100 scripts]# echo $? 1 [root@hadoop100 scripts]# [ 2 = 2 ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ ithailin = atguigu ] [root@hadoop100 scripts]# echo $? 1 [root@hadoop100 scripts]# [ ithailin != atguigu ] [root@hadoop100 scripts]# echo $? 0 -- 整数使用 = 号或者 != 来判断时,会被当成字符串来进行判断 [root@hadoop100 scripts]# [ 2 != 2 ] [root@hadoop100 scripts]# echo $? 1 -- 在shell中没有 大于(>),小于(<)符号,而是使用 -lt、-gt等一些符号 [root@hadoop100 scripts]# [ 2 -eq 3 ] [root@hadoop100 scripts]# echo $? 1 [root@hadoop100 scripts]# [ 2 -eq 2 ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ 2 -lt 5 ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ 2 -gt 5 ] [root@hadoop100 scripts]# echo $? 1 -- 按照文件权限进行判断 [root@hadoop100 scripts]# [ -r hello.sh ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ -x hello.sh ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# [ -w hello.sh ] [root@hadoop100 scripts]# echo $? 0 [root@hadoop100 scripts]# chmod -x hello.sh [root@hadoop100 scripts]# [ -x hello.sh ] [root@hadoop100 scripts]# echo $? 1 -- 根据文件类型进行判断,-e:当前文件是否存在 [root@hadoop100 scripts]# [ -e hello.sh ] [root@hadoop100 scripts]# echo $? 0 -- -f:是否是一个常规文件 [root@hadoop100 scripts]# [ -f hello.sh ] [root@hadoop100 scripts]# echo $? 0 -- -d:是否是一个目录 [root@hadoop100 scripts]# [ -d hello.sh ] [root@hadoop100 scripts]# echo $? 1
多条件判断 &&(逻辑与:前一个命令执行成功时,再执行后一个命令),||(逻辑或:前一个命令执行失败时,再执行后一个命令)
[root@hadoop100 scripts]# var=15 [root@hadoop100 scripts]# echo $var 15 -- 判断$var是否小于20 [root@hadoop100 scripts]# [ $var -lt 20 ] && echo "$var < 20" || echo "$var >= 20" 15 < 20 -- 判断 ithailin = atguigu,返回假,再判断 hello.sh是否是目录,返回假,再判断变量var是否大于10,返回真,因此返回0 [root@hadoop100 scripts]# [ ithailin = atguigu ] || [ -d hello.sh ] || [ $var -gt 10 ] [root@hadoop100 scripts]# echo $? 0
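借助 && 可以把“检查条件然后退出”写得很紧凑,下面给前面的 add.sh 加一个参数个数检查作为示意(echo、exit 正常情况下不会失败,所以可以串在 && 后面):
#!/bin/bash
# 参数个数不等于2时,先提示再退出;等于2时这一行整体被跳过,继续往下执行
[ $# -ne 2 ] && echo "用法: $0 整数1 整数2" && exit 1
sum=$[$1+$2]
echo sum=$sum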
流程控制
if语句
-- fi是语句的结束符号 if [条件判断语句];then 程序块 fi -- 或者使用下面这个写法 if [条件判断语句] then 程序块 fi -- 上面是单分支的if语句,下面是多分支的if语句 if [条件判断语句] then 程序块 elif [条件判断语句] then 程序块 else 程序块 fi
-- ;(分号):表示一个命令的结束,后面可以跟其他的命令,一行中可以执行多个命令,中间用分号隔开 [root@hadoop100 ~]# cd scripts/; ll 总用量 8 -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rw-r--r--. 1 root root 246 10月 13 14:56 hello.sh
[root@hadoop100 scripts]# a=25 [root@hadoop100 scripts]# echo $a 25 [root@hadoop100 scripts]# if [ $a -gt 18 ]; then echo ok; fi ok [root@hadoop100 scripts]# a=15 [root@hadoop100 scripts]# echo $a 15 -- 因为a小于18,因此没有输出ok [root@hadoop100 scripts]# if [ $a -gt 18 ]; then echo ok; fi
新建脚本if_test.sh(注意:当前脚本若不传第一个参数,$1 为空,执行时会报错)
#!/bin/bash if [ $1 = ithailin ] then echo "welcome,ithailin" fi
[root@hadoop100 scripts]# vim if_test.sh [root@hadoop100 scripts]# chmod +x if_test.sh -- 由于没有传参数,$1为空,抛出异常 [root@hadoop100 scripts]# ./if_test.sh ./if_test.sh: 第 3 行:[: =: 期待一元表达式
防止参数 $1 空时bash报错的解决方案:编辑if_test.sh脚本如下
#!/bin/bash if [ "$1"x = "ithailin"x ] then echo "welcome,ithailin" fi
在$1上拼接x,并且在值“ithailin”也拼接x,防止参数为空bash报错,因为当参数为空时,进行判断 x = ithailinx,不会报错
if [ "$变量1"x = "$变量2"x ]中x的含义 问题:if [ "$变量1"x = "$变量2"x ]中x的含义是? 答:“x”字符可以为任意字符,用于防止变量为空时,某些版本的bash中会产生错误;
条件的组合
[root@hadoop100 scripts]# a=25 [root@hadoop100 scripts]# echo $a 25 [root@hadoop100 scripts]# if [ $a -gt 18 ] && [ $a -lt 35 ]; then echo ok; fi ok -- 多个条件可以组合到一起 -- -a:and -- -o:or [root@hadoop100 scripts]# if [ $a -gt 18 -a $a -lt 35 ]; then echo ok; fi ok
编辑if_test.sh脚本如下
#!/bin/bash if [ "$1"x = "ithailin"x ] then echo "welcome,ithailin" fi # 输入第二个参数表示年龄,判断属于哪个年龄段 if [ $2 -lt 18 ] then echo "未成年" else echo "成年人" fi
[root@hadoop100 scripts]# vim if_test.sh [root@hadoop100 scripts]# ./if_test.sh ithailin 15 welcome,ithailin 未成年 [root@hadoop100 scripts]# ./if_test.sh ithailin 20 welcome,ithailin 成年人
编辑if_test.sh脚本如下
#!/bin/bash if [ "$1"x = "ithailin"x ] then echo "welcome,ithailin" fi # 输入第二个参数表示年龄,判断属于哪个年龄段 if [ $2 -lt 18 ] then echo "未成年" elif [ $2 -lt 35 ] then echo "青年人" elif [ $2 -lt 60 ] then echo "中年人" else echo "老年人" fi
[root@hadoop100 scripts]# ./if_test.sh ithailin 15 welcome,ithailin 未成年 [root@hadoop100 scripts]# ./if_test.sh ithailin 25 welcome,ithailin 青年人 [root@hadoop100 scripts]# ./if_test.sh ithailin 36 welcome,ithailin 中年人 [root@hadoop100 scripts]# ./if_test.sh ithailin 67 welcome,ithailin 老年人
case语句
;;两个分号表示一个分支的结束。*)表示默认分支,与java中的default类似
case $变量名 in "值1") 程序块 ;; "值2") 程序块 ;; *) 程序块 ;; esac
新建脚本case_test.sh如下
#!/bin/bash case $1 in 1) echo "one" ;; 2) echo "two" ;; 3) echo "three" ;; *) echo "default number" esac
[root@hadoop100 scripts]# vim case_test.sh [root@hadoop100 scripts]# chmod +x case_test.sh [root@hadoop100 scripts]# ./case_test.sh 1 one [root@hadoop100 scripts]# ./case_test.sh 3 three [root@hadoop100 scripts]# ./case_test.sh 35 default number
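case 匹配的值也可以是字符串,下面是一个示意脚本(start、stop 只是随意举的取值):
#!/bin/bash
# 根据第一个参数的字符串取值执行不同分支
case $1 in
"start")
echo "启动服务..."
;;
"stop")
echo "停止服务..."
;;
*)
echo "用法:$0 start 或 stop"
;;
esac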
for语句
-- 语法1 for ((变量初始值;循环控制条件;变量变化)) do 程序块 done
新建脚本sum_to.sh如下
1、在(())下,可以直接使用数学运算符
2、对变量进行赋值的时候可以直接使用变量名(sum),但使用变量时需要使用$符号加上变量名($sum、$i),进行加法运算时不能直接运算,而是需要在$[]里面运算
#!/bin/bash for ((i=1; i <= $1; i++ )) do sum=$[ $sum + $i ] done echo $sum
[root@hadoop100 scripts]# ./sum_to.sh 100 5050 [root@hadoop100 scripts]# ./sum_to.sh 10 55
-- 语法2 for 变量 in 值1 值2 值3... do 程序块 done
shell编程里面有一种花括号展开的写法,花括号表示一个序列,例如{1..100}则表示1,2,3...100(1到100)
[root@hadoop100 scripts]# for os in linux windows macos; do echo $os; done linux windows macos [root@hadoop100 scripts]# for i in {1..100}; do sum=$[$sum+$i]; done; echo $sum 5050
while语句
while [条件判断语句] do 程序块 done
编辑sum_to.sh脚本,新增以下代码
# 用while去实现 a=1 while [ $a -le $1 ] do sum2=$[ $sum2 + $a ] a=$[$a +1] done echo $sum2
[root@hadoop100 scripts]# ./sum_to.sh 100 5050 5050
可以使用let来简化写法,编辑sum_to.sh脚本如下
# 用while去实现 a=1 while [ $a -le $1 ] do # sum2=$[ $sum2 + $a ] # a=$[$a +1] let sum2+=a let a++ done echo $sum2
[root@hadoop100 scripts]# ./sum_to.sh 100 5050 5050
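let 和双小括号 (( )) 都是 bash 的算术写法,下面用 (( )) 把这段 while 再改写一遍作为示意(变量名 sum3 是为了不和前面的 sum、sum2 冲突而随意取的):
# 用 (( )) 去实现,效果与 let 写法相同
a=1
sum3=0
while (( a <= $1 ))
do
(( sum3 += a ))
(( a++ ))
done
echo $sum3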
read读取控制台输入
-- 选项 -p:指定读取值时的提示符 -- 选项 -t:指定读取值时的等待时间(秒),如果不加则一直等待 -- 参数 变量名:读取指定值的变量名称 read 选项 参数
新建read_test.sh脚本如下
#!/bin/bash read -t 10 -p "请输入您的芳名:" name echo "welcome, $name"
[root@hadoop100 scripts]# vim read_test.sh [root@hadoop100 scripts]# chmod +x read_test.sh [root@hadoop100 scripts]# ./read_test.sh 请输入您的芳名:ithailin welcome, ithailin
函数
系统函数:basename
函数可以理解为一个轻量级的脚本,脚本可以理解为一个重量级的函数
-- basename会删除掉所有的前缀,包括最后一个"/"符号,然后将剩下的字符串显示出来,可以理解为取路径里面的名称 -- suffix为后缀,如果suffix被指定了,basename会将string或者pathname中的suffix删掉 basename [string/pathname] suffix
[root@hadoop100 scripts]# basename /root/scripts/param_name.sh param_name.sh [root@hadoop100 scripts]# basename /root/scripts/param_name.sh .sh param_name -- 并不是获取目录下的文件,而只是对字符串的截取操作 [root@hadoop100 scripts]# basename /shshshhs/1212121/1111 1111
系统函数:dirname
从给定的包含绝对路径的文件名中去除文件名非目录的部分,然后返回剩下的路径目录部分
可以理解为取文件路径的绝对路径名称,去除文件名称
dirname 文件绝对路径
[root@hadoop100 scripts]# dirname /root/scripts/param_name.sh /root/scripts [root@hadoop100 scripts]# dirname ../scripts/param_name.sh ../scripts [root@hadoop100 scripts]# dirname ./param_name.sh . -- 并不是找到当前文件,只是对字符串的截取操作 [root@hadoop100 scripts]# dirname /12121/s1s1/sss /12121/s1s1
新建脚本param_name.sh
#!/bin/bash echo script name:$(basename $0 .sh) echo script path:$(cd $(dirname $0); pwd)
[root@hadoop100 scripts]# ./param_name.sh script name:param_name script path:/root/scripts
自定义函数
-- function:表示当前是一个函数,当函数名后带有()时,()本身就能表明这是一个函数,因此function可以省略 -- 保留function关键字时,函数名后的()也可以省略,但两者不能同时省略;函数可以使用$n来进行参数的接收 -- return int;也可以省略,因为函数不返回信息也是成立的 function 函数名(){ 程序块 return int; } -- 省略function之后的写法,不过通常建议完整写法,代码可读性更高(以上) 函数名(){ 程序块 }
注意:
1、必须在调用函数之前,先进行函数的声明,shell脚本是逐行执行,不会像其他的语言一样先进行编译;
2、函数返回值只能使用$?系统变量进行获取,可以显式地加 return进行返回,如果不加,将以最后一条命令的运行结果作为返回值,return后面跟数值n(0-255)
新建函数脚本fun_test.sh如下
#!/bin/bash function add(){ s=$[$1 + $2] echo $s } read -p "请输入第一个整数:" a read -p "请输入第二个整数:" b -- $(add $a $b):使用命令替换去获取函数打印的信息,再将其赋值给sum sum=$(add $a $b) echo $sum
[root@hadoop100 scripts]# ./fun_test.sh 请输入第一个整数:12 请输入第二个整数:34 46 [root@hadoop100 scripts]# ./fun_test.sh 请输入第一个整数:200 请输入第二个整数:300 500
以下是当前脚本的其他写法
-- 脚本如下(在函数中进行打印输出) #!/bin/bash function add(){ s=$[$1 + $2] echo "和:"$s } read -p "请输入第一个整数:" a read -p "请输入第二个整数:" b add $a $b -- 相关执行命令 [root@hadoop100 scripts]# ./fun_test_error.sh 请输入第一个整数:34 请输入第二个整数:12 和:46 [root@hadoop100 scripts]# ./fun_test_error.sh 请输入第一个整数:200 请输入第二个整数:300 和:500 -- 脚本如下(函数拥有返回体) #!/bin/bash function add(){ s=$[$1 + $2] return "和:"$s } read -p "请输入第一个整数:" a read -p "请输入第二个整数:" b add $a $b echo $? -- 相关执行命令(返回体必须是使用$?接收,也就是整数0-255) [root@hadoop100 scripts]# ./fun_test_error.sh 请输入第一个整数:12 请输入第二个整数:34 ./fun_test_error.sh: 第 5 行:return: 和:46: 需要数字参数 255 -- 脚本如下 #!/bin/bash function add(){ s=$[$1 + $2] return $s } read -p "请输入第一个整数:" a read -p "请输入第二个整数:" b add $a $b echo $? -- 相关执行命令(返回体必须是$?接收0-255,超出上限会重新从0开始累加) [root@hadoop100 scripts]# ./fun_test_error.sh 请输入第一个整数:12 请输入第二个整数:34 46 [root@hadoop100 scripts]# ./fun_test_error.sh 请输入第一个整数:200 请输入第二个整数:300 244
综合案例示例(文件归档)
-- 打包目录,压缩后的文件格式为 tar.gz -- 选项 -c:产生tar.gz打包文件 -- 选项 -v:显示详细信息 -- 选项 -f:指定压缩后的文件名 -- 选项 -z:打包同时压缩 -- 选项 -x:解包 .tar 文件 -- 选项 -C:解压到指定目录 tar [选项] xxx.tar.gz [将要打包的目录或文件]
新建归档脚本 tar_test.sh如下所示
#!/bin/bash #判断当前输入参数个数是否为1 if [ $# -ne 1 ] then echo "参数个数错误!应该输入一个参数作为归档的目录名" exit fi #判断当前输入参数是否是目录 if [ -d $1 ] then echo else echo echo "目录不存在!" echo exit fi #使用 basename命令截取获取输入参数的文件名称,当前输入参数末尾不应该带有/ DIR_NAME=$(basename $1) #使用 dirname命令截取输入参数的路径,进入到路径位置,使用 pwd命令获取当前位置绝对路径 DIR_PATH=$(cd $(dirname $1); pwd) #获取当前日期 DATE=$(date +%y%m%d) #定义生成的归档文件名称 FILE=archive_${DIR_NAME}_$DATE.tar.gz #定义生成归档文件路径 DEST=/root/archive/$FILE #开始归档目录文件 echo "开始归档..." echo tar -czf $DEST $DIR_PATH/$DIR_NAME if [ $? -eq 0 ] then echo echo "归档成功!" echo "归档文件为:$DEST" echo else echo "归档失败!" echo fi exit
[root@hadoop100 scripts]# vim tar_test.sh [root@hadoop100 scripts]# chmod u+x tar_test.sh [root@hadoop100 scripts]# ll | grep tar -rwxr--r--. 1 root root 961 10月 24 14:29 tar_test.sh [root@hadoop100 scripts]# ./tar_test.sh 参数个数错误!应该输入一个参数作为归档的目录名 -- 由于脚本里面使用了 basename命令进行截取输入参数,因此此处输入参数末尾不应该带斜杠/ [root@hadoop100 scripts]# ./tar_test.sh ../scripts 开始归档... tar: 从成员名中删除开头的“/” tar (child): /root/archive/archive_scripts_221024.tar.gz:无法 open: 没有那个文件或目录 tar (child): Error is not recoverable: exiting now 归档失败! [root@hadoop100 scripts]# mkdir /root/archive [root@hadoop100 scripts]# ./tar_test.sh ../scripts 开始归档... tar: 从成员名中删除开头的“/” 归档成功! 归档文件为:/root/archive/archive_scripts_221024.tar.gz [root@hadoop100 scripts]# ll /root/archive/ 总用量 4 -rw-r--r--. 1 root root 1604 10月 24 14:31 archive_scripts_221024.tar.gz
正则表达式入门
正则表达式使用单个字符串来描述、匹配一系列符合某个语法规则的字符串。在很多文本编辑器里,正则表达式通常被用来检索、替换那些符合某个模式的文本。在 Linux中grep,sed,awk等文本处理工具都支持通过正则表达式进行模式匹配。
常规匹配
一串不包含特殊字符的正则表达式,匹配的就是这串字符本身(例如下面例子中的 _test)
[root@hadoop100 scripts]# ll 总用量 40 -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rw-r--r--. 1 root root 246 10月 21 10:21 hello.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 90 10月 21 10:58 param_name.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr-xr-x. 1 root root 200 10月 20 13:59 sum_to.sh -rwxr--r--. 1 root root 961 10月 24 14:34 tar_test.sh [root@hadoop100 scripts]# ls -l | grep _test. -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr--r--. 1 root root 961 10月 24 14:34 tar_test.sh
特殊字符 ^
匹配以某个字符开头
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep ^a add.sh [root@hadoop100 scripts]# ls | grep ^p param_name.sh [root@hadoop100 scripts]# ll 总用量 40 -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rw-r--r--. 1 root root 246 10月 21 10:21 hello.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 90 10月 21 10:58 param_name.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr-xr-x. 1 root root 200 10月 20 13:59 sum_to.sh -rwxr--r--. 1 root root 961 10月 25 10:24 tar_test.sh [root@hadoop100 scripts]# ll | grep ^- -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rw-r--r--. 1 root root 246 10月 21 10:21 hello.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 90 10月 21 10:58 param_name.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr-xr-x. 1 root root 200 10月 20 13:59 sum_to.sh -rwxr--r--. 1 root root 961 10月 25 10:24 tar_test.sh
特殊字符 $
匹配以某个字符结束
[root@hadoop100 scripts]# ls -l 总用量 40 -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rw-r--r--. 1 root root 246 10月 21 10:21 hello.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 90 10月 21 10:58 param_name.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr-xr-x. 1 root root 200 10月 20 13:59 sum_to.sh -rwxr--r--. 1 root root 961 10月 25 10:24 tar_test.sh [root@hadoop100 scripts]# ll | grep h$ -rwxr-xr-x. 1 root root 39 10月 13 14:56 add.sh -rwxr-xr-x. 1 root root 111 10月 21 10:20 case_test.sh -rwxr-xr-x. 1 root root 149 10月 20 11:08 for_test.sh -rwxr-xr-x. 1 root root 161 10月 21 14:38 fun_test.sh -rw-r--r--. 1 root root 246 10月 21 10:21 hello.sh -rwxr-xr-x. 1 root root 290 10月 14 15:00 if_test.sh -rwxr-xr-x. 1 root root 90 10月 21 10:58 param_name.sh -rwxr-xr-x. 1 root root 81 10月 20 14:25 read_test.sh -rwxr-xr-x. 1 root root 200 10月 20 13:59 sum_to.sh -rwxr--r--. 1 root root 961 10月 25 10:24 tar_test.sh [root@hadoop100 scripts]# ll | grep s$ [root@hadoop100 scripts]#
^$匹配空行:以空开头,以空结束
[root@hadoop100 scripts]# cat tar_test.sh | grep -n ^$ 2: 9: 20: 25: 28: 33: 37: 39: 50: 52:
特殊字符 .
点:匹配任意一个字符
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep t..t case_test.sh for_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep n.m param_name.sh
特殊字符 *
星号:不单独使用,和上一个字符联合使用,表示匹配上一个字符任意次(包括0次)
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep ad* add.sh [root@hadoop100 scripts]# ls | grep ad*se case_test.sh
.*(点星):匹配任意字符串
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep ^f.*sh$ for_test.sh fun_test.sh [root@hadoop100 scripts]# ls | grep ^f.*test.*sh$ for_test.sh fun_test.sh [root@hadoop100 scripts]# ls | grep .*a.*sh$ add.sh case_test.sh param_name.sh read_test.sh tar_test.sh
特殊字符 [ ]
匹配某个范围内的一个字符
-- 匹配6或者8(注意:中括号里的逗号本身也会被匹配,更严谨的写法是 [68]) [6,8] -- 匹配一个0到9的数字 [0-9] -- 匹配任意长度的数字字符串 [0-9]* -- 匹配一个 a-z 之间的字符 [a-z] -- 匹配由 a-z 之间的字符组成的任意长度字符串 [a-z]* -- 匹配 a-c 或者 e-f 之间的任意一个字符(同样,更严谨的写法是 [a-ce-f]) [a-c,e-f]
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep h[a-z]* hello.sh
特殊字符 \
转义字符:匹配特殊字符时需要使用转义字符
[root@hadoop100 scripts]# cat tar_test.sh #!/bin/bash #判断当前输入参数个数是否为1 if [ $# -ne 1 ] then echo "参数个数错误!应该输入一个参数作为归档的目录名" exit fi #判断当前输入参数是否是目录 if [ -d $1 ] then echo else echo echo "目录不存在!" echo exit fi #使用 basename命令截取获取输入参数的文件名称,当前输入参数末尾不应该带有/ DIR_NAME=$(basename $1) #使用 dirname命令截取输入参数的路径,进入到路径位置,使用 pwd命令获取当前位置绝对路径 DIR_PATH=$(cd $(dirname $1); pwd) #获取当前日期 DATE=$(date +%y%m%d) #定义生成的归档文件名称 FILE=archive_${DIR_NAME}_$DATE.tar.gz #定义生成归档文件路径 DEST=/root/archive/$FILE #开始归档目录文件 echo "开始归档..." echo tar -czf $DEST $DIR_PATH/$DIR_NAME if [ $? -eq 0 ] then echo echo "归档成功!" echo "归档文件为:$DEST" echo else echo "归档失败!" echo fi exit
[root@hadoop100 scripts]# cat tar_test.sh | grep '\$' if [ $# -ne 1 ] if [ -d $1 ] DIR_NAME=$(basename $1) DIR_PATH=$(cd $(dirname $1); pwd) DATE=$(date +%y%m%d) FILE=archive_${DIR_NAME}_$DATE.tar.gz DEST=/root/archive/$FILE tar -czf $DEST $DIR_PATH/$DIR_NAME if [ $? -eq 0 ] echo "归档文件为:$DEST"
[root@hadoop100 scripts]# ls add.sh for_test.sh hello.sh param_name.sh sum_to.sh case_test.sh fun_test.sh if_test.sh read_test.sh tar_test.sh [root@hadoop100 scripts]# ls | grep ^f[a-z]*'\_'.*sh$ for_test.sh fun_test.sh
拓展正则表达式
拓展正则表达式此处不进行过多介绍......
特殊字符 {}
匹配字符具体出现几次,例如 a{2}:表示a字符出现2次
特殊字符 +
匹配字符出现1次或多次
特殊字符 ?
匹配字符出现0次或1次
匹配一个手机号
[root@hadoop100 scripts]# echo "13812345678" | grep ^1[3,4,5,7,8][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]$ 13812345678 -- grep默认不支持拓展正则,使用 -E 选项使 grep命令支持拓展正则表达式 {} [root@hadoop100 scripts]# echo "13812345678" | grep -E ^1[3,4,5,7,8][0-9]{9}$ 13812345678
文本处理工具
cut
对文本文件进行数据剪切,从文件的每一行剪切字节、字符、字段,并将其输出;可以与管道符联合使用,例如 ls | cut
-- 选项 -f:列号,提取第几列 -- 选项 -d:分隔符,按照指定的分隔符分割列,默认是制表符 “\t” -- 选项 -c:按字符进行切割,后面加数字n表示取第n个字符,例如 -c 1 -- filename:文件名称 cut [选项] filename
[root@hadoop100 scripts]# vim cut.txt [root@hadoop100 scripts]# cat cut.txt dong shen guan zhen wo wo lai lai le le [root@hadoop100 scripts]# cut -d " " -f 1 cut.txt dong guan wo lai le [root@hadoop100 scripts]# cut -d " " -f 2 cut.txt shen zhen wo lai le -- 截取第一列和第二列 [root@hadoop100 scripts]# cut -d " " -f 1,2 cut.txt dong shen guan zhen wo wo lai lai le le [root@hadoop100 scripts]# cat /etc/passwd | grep bash$ root:x:0:0:root:/root:/bin/bash ithailin:x:1000:1000:ithailin:/home/ithailin:/bin/bash [root@hadoop100 scripts]# cat /etc/passwd | grep bash$ | cut -d ":" -f 1,6,7 root:/root:/bin/bash ithailin:/home/ithailin:/bin/bash -- 截取第一列到第二列 [root@hadoop100 scripts]# cat /etc/passwd | grep bash$ | cut -d ":" -f 1-2 root:x ithailin:x -- 截取第二列以及之前的所有列 [root@hadoop100 scripts]# cat /etc/passwd | grep bash$ | cut -d ":" -f -2 root:x ithailin:x -- 截取第三列以及之后的所有列 [root@hadoop100 scripts]# cat /etc/passwd | grep bash$ | cut -d ":" -f 3- 0:0:root:/root:/bin/bash 1000:1000:ithailin:/home/ithailin:/bin/bash
[root@hadoop100 scripts]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.181.100 netmask 255.255.255.0 broadcast 192.168.181.255 inet6 fe80::42cb:792a:cf10:3f6a prefixlen 64 scopeid 0x20<link> ether 00:0c:29:93:2e:3d txqueuelen 1000 (Ethernet) RX packets 3400 bytes 269556 (263.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2188 bytes 247095 (241.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 32 bytes 2592 (2.5 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 32 bytes 2592 (2.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255 ether 52:54:00:2d:1b:b8 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@hadoop100 scripts]# ifconfig ens33 ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.181.100 netmask 255.255.255.0 broadcast 192.168.181.255 inet6 fe80::42cb:792a:cf10:3f6a prefixlen 64 scopeid 0x20<link> ether 00:0c:29:93:2e:3d txqueuelen 1000 (Ethernet) RX packets 3416 bytes 270764 (264.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2198 bytes 249363 (243.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@hadoop100 scripts]# ifconfig ens33 | grep netmask inet 192.168.181.100 netmask 255.255.255.0 broadcast 192.168.181.255 -- 前面有8个空格,切割之后ip地址是第十行 [root@hadoop100 scripts]# ifconfig ens33 | grep netmask | cut -d " " -f 10 192.168.181.100
awk
一个强大的文本分析工具,把文件逐行地读入,以空格为默认的分隔符将每行切片,切开的部分再进行分析处理
-- 选项 -F:指定输入文件分隔符 -- 选项 -v:赋值一个用户定义变量 -- pattern:表示awk在数据中查找的内容,就是文本匹配模式(正则表达式) -- action:在找到匹配内容时所执行的一系列命令 -- filename:文件名称 awk [选项] '/pattern1/{action1} /pattern2/{action2}...' filename
[root@hadoop100 scripts]# which awk /usr/bin/awk -- awk其实是一个软链接,调用的是gawk命令 [root@hadoop100 scripts]# ll /usr/bin/ | grep awk lrwxrwxrwx. 1 root root 4 8月 30 11:48 awk -> gawk -rwxr-xr-x. 1 root root 514168 6月 29 2017 dgawk -rwxr-xr-x. 1 root root 428584 6月 29 2017 gawk -rwxr-xr-x. 1 root root 3188 6月 29 2017 igawk -rwxr-xr-x. 1 root root 428672 6月 29 2017 pgawk
搜索passwd文件以root开头的所有行,并输出该行的第7列
[root@hadoop100 scripts]# cat /etc/passwd | grep ^root | cut -d ":" -f 7 /bin/bash [root@hadoop100 scripts]# cat /etc/passwd | awk -F ":" '/^root/ {print $7}' /bin/bash
搜索passwd文件以root开头的所有行,并输出该行的第1列和第7列,并以“,”分隔
[root@hadoop100 scripts]# cat /etc/passwd | awk -F ":" '/^root/ {print $1","$7}' root,/bin/bash [root@hadoop100 scripts]# cat /etc/passwd | awk -F ":" '/^root/ {print $1","$6","$7}' root,/root,/bin/bash
搜索passwd文件以g开头的所有行,只显示第1列和第7列,并以“,”分隔,且在输出的最前面添加一行“user shell”,在最后添加一行“end of file”,参考:https://blog.csdn.net/ha_weii/article/details/80761559
-- awk 会逐行处理文本,支持在处理第一行之前做一些准备工作,以及在处理完最后一行做一些总结性质的工作 -- BEGIN{}: 读入第一行文本之前执行,一般用来初始化操作 -- {}: 逐行处理,逐行读入文本执行相应的处理,是最常见的编辑指令块 -- END{}: 处理完最后一行文本之后执行,一般用来输出处理结果 [root@hadoop100 scripts]# cat /etc/passwd | awk -F ":" 'BEGIN{print "user shell"}/^g/{print $1","$7} END{print "end of file"}' user shell games,/sbin/nologin gluster,/sbin/nologin geoclue,/sbin/nologin gdm,/sbin/nologin gnome-initial-setup,/sbin/nologin end of file
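END{} 更典型的用法是做汇总统计,下面是一个示意命令(逐行累加第3列的uid并计数,在END里一次性输出,具体数值随系统不同而不同):
awk -F ":" 'BEGIN{count=0; total=0} {count++; total+=$3} END{print "行数:" count ",uid总和:" total}' /etc/passwd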
将passwd文件中的id值都增加1,并输出
-- 自定义变量 i,使用了/^g/是防止输出数据过多 [root@hadoop100 scripts]# cat /etc/passwd | awk -v i=1 -F ":" '/^g/{print $3+i}' 13 996 990 43 989
awk内置变量
FILENAME:文件名
NR:已读的记录数(行号)
NF:浏览记录的域的个数(切割后列的个数)
统计passwd文件,每行的行号,每行的列数
[root@hadoop100 scripts]# awk -F ":" '{print "文件名:" FILENAME ",行号:" NR ",列号:" NF}' /etc/passwd 文件名:/etc/passwd,行号:1,列号:7 文件名:/etc/passwd,行号:2,列号:7 文件名:/etc/passwd,行号:3,列号:7 文件名:/etc/passwd,行号:4,列号:7 ... ... ...
在awk中使用BEGIN去打印内置变量FILENAME,得不到文件名称,得到的却是空值,因为BEGIN在读入第一行之前执行,此时文件还没有被打开;在gawk (GNU awk) 4或更高版本中,可以使用BEGINFILE来代替BEGIN,以确保能取到内置变量FILENAME,参考:https://oomake.com/question/2976597
[root@hadoop100 scripts]# awk -F ":" 'BEGIN{print "文件名:" FILENAME}{print "行号:" NR ",列号:" NF}' /etc/passwd 文件名: 行号:1,列号:7 行号:2,列号:7 行号:3,列号:7 行号:4,列号:7 行号:5,列号:7 行号:6,列号:7 行号:7,列号:7 行号:8,列号:7 行号:9,列号:7 行号:10,列号:7 行号:11,列号:7 行号:12,列号:7 行号:13,列号:7 行号:14,列号:7 行号:15,列号:7 行号:16,列号:7 行号:17,列号:7 行号:18,列号:7 行号:19,列号:7 行号:20,列号:7 行号:21,列号:7 行号:22,列号:7 行号:23,列号:7 行号:24,列号:7 行号:25,列号:7 行号:26,列号:7 行号:27,列号:7 行号:28,列号:7 行号:29,列号:7 行号:30,列号:7 行号:31,列号:7 行号:32,列号:7 行号:33,列号:7 行号:34,列号:7 行号:35,列号:7 行号:36,列号:7 行号:37,列号:7 行号:38,列号:7 行号:39,列号:7 行号:40,列号:7 行号:41,列号:7 行号:42,列号:7 行号:43,列号:7 行号:44,列号:7 [root@hadoop100 scripts]# awk -F ":" 'BEGIN{print "文件名:" FILENAME}' /etc/passwd 文件名: [root@hadoop100 scripts]# awk -F ":" 'BEGINFILE{print "文件名:" FILENAME}' /etc/passwd 文件名:/etc/passwd [root@hadoop100 scripts]# awk -F ":" 'BEGINFILE{print "文件名:" FILENAME}{print "行号:" NR ",列号:" NF}' /etc/passwd 文件名:/etc/passwd 行号:1,列号:7 行号:2,列号:7 行号:3,列号:7 行号:4,列号:7 行号:5,列号:7 行号:6,列号:7 行号:7,列号:7 行号:8,列号:7 行号:9,列号:7 行号:10,列号:7 行号:11,列号:7 行号:12,列号:7 行号:13,列号:7 行号:14,列号:7 行号:15,列号:7 行号:16,列号:7 行号:17,列号:7 行号:18,列号:7 行号:19,列号:7 行号:20,列号:7 行号:21,列号:7 行号:22,列号:7 行号:23,列号:7 行号:24,列号:7 行号:25,列号:7 行号:26,列号:7 行号:27,列号:7 行号:28,列号:7 行号:29,列号:7 行号:30,列号:7 行号:31,列号:7 行号:32,列号:7 行号:33,列号:7 行号:34,列号:7 行号:35,列号:7 行号:36,列号:7 行号:37,列号:7 行号:38,列号:7 行号:39,列号:7 行号:40,列号:7 行号:41,列号:7 行号:42,列号:7 行号:43,列号:7 行号:44,列号:7
Find the line numbers of the blank lines in the output of ifconfig
[root@hadoop100 scripts]# ifconfig | grep -n ^$ 9: 18: 26: [root@hadoop100 scripts]# ifconfig | awk '/^$/ {print NR}' 9 18 26 [root@hadoop100 scripts]# ifconfig | awk '/^$/ {print "空行:" NR}' 空行:9 空行:18 空行:26
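The same pattern extends naturally from listing the blank lines to counting them; a minimal sketch:
# count the blank lines instead of printing each line number
ifconfig | awk '/^$/ {n++} END{print "blank lines:", n}'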
Extracting the IP address
[root@hadoop100 scripts]# ifconfig ens33 | grep netmask | cut -d " " -f 10 192.168.181.100 [root@hadoop100 scripts]# ifconfig | grep netmask | cut -d " " -f 10 192.168.181.100 127.0.0.1 192.168.122.1 [root@hadoop100 scripts]# ifconfig ens33 | awk '/netmask/ {print $2}' 192.168.181.100 [root@hadoop100 scripts]# ifconfig | awk '/netmask/ {print $2}' 192.168.181.100 127.0.0.1 192.168.122.1
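On newer systems where ifconfig may not be installed, the same field-splitting idea works on the output of ip addr; a hedged sketch (it assumes the interface is still named ens33 and that the ip command is available):
# the inet line carries "address/prefix" in field 2; split() drops the prefix length
ip -4 addr show ens33 | awk '/inet /{split($2, a, "/"); print a[1]}'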
Comprehensive example: sending messages
Linux ships with the mesg and write tools, which let a logged-in user send messages to other users. The goal is a script that sends a message to a given user quickly: the first argument is the user name and everything after it is the message. The script must check that the user is logged in, that the user has the message function enabled, and that the message to send is not empty.
-- show the user of the current login session
who am i
-- list all users currently logged in
who
-- list all logged-in users together with their message-status column
who -T
-- check whether the current user's mesg (message) function is enabled
mesg
-- disable the mesg function for the current user
mesg n
-- enable the mesg function for the current user
mesg y
Open two terminal windows and log in as the two users root and ithailin for testing
[root@hadoop100 scripts]# who am i root pts/0 2022-10-25 10:13 (192.168.181.1) -- 用户ithailin未登录前 [root@hadoop100 scripts]# who root pts/0 2022-10-25 10:13 (192.168.181.1) -- 用户ithailin登录后 [root@hadoop100 scripts]# who root pts/0 2022-10-25 10:13 (192.168.181.1) ithailin pts/1 2022-10-25 16:04 (192.168.181.1) -- 当前用户 root mesg功能是开启的 y:开启,n:关闭 [root@hadoop100 scripts]# mesg is y -- 查看当前所有登录用户的mesg功能开启状态,+代表mesg开启。-代表mesg关闭 [root@hadoop100 scripts]# who -T root + pts/0 2022-10-25 10:13 (192.168.181.1) ithailin + pts/1 2022-10-25 16:04 (192.168.181.1) -- 关闭当前登录用户root的mesg功能 [root@hadoop100 scripts]# mesg n [root@hadoop100 scripts]# who -T root - pts/0 2022-10-25 10:13 (192.168.181.1) ithailin + pts/1 2022-10-25 16:04 (192.168.181.1) -- 开启当前登录用户root的mesg功能 [root@hadoop100 scripts]# mesg y [root@hadoop100 scripts]# who -T root + pts/0 2022-10-25 10:13 (192.168.181.1) ithailin + pts/1 2022-10-25 16:04 (192.168.181.1)
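The manual checks above can be combined into a single pipeline that the script below builds on; a small sketch (ithailin is simply the test user from this session):
# print the mesg flag (+ or -) of the first login session of user ithailin
who -T | awk '$1 == "ithailin" {print $2; exit}'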
Use the write command to send a message to another, specified user
[root@hadoop100 scripts]# who -T root + pts/0 2022-10-25 10:13 (192.168.181.1) ithailin + pts/1 2022-10-25 16:04 (192.168.181.1) -- 使用write命令向ithailin用户发送消息,需要指定用户和登录的控制台 [root@hadoop100 scripts]# write ithailin pts/1 hi what you name? my name is root.. bye -- 使用ctrl+C终止 ^C[root@hadoop100 scripts]#
[ithailin@hadoop100 ~]$ -- 收到来自root用户发送的消息,EOF表示来自root用户的消息已经结束,root使用了ctrl+C终止 Message from root@hadoop100 on pts/0 at 16:17 ... hi what you name? my name is root\343\200\202.. bye EOF -- 向root发送消息,并指定控制台 [ithailin@hadoop100 ~]$ write root pts/0 my name is ithailin bye ^C[ithailin@hadoop100 ~]$ -- 来自ithailin用户的消息(root控制台) [root@hadoop100 scripts]# Message from ithailin@hadoop100 on pts/1 at 16:19 ... my name is ithailin bye EOF
Write a message-sending script to make this more convenient; create a new script send_msg.sh as shown below
#!/bin/bash
# Check whether the user is logged in. -i: ignore case; -m 1: stop after the first matching line; awk '{print $1}': take the first column (the user name)
login_user=$(who | grep -i -m 1 "$1" | awk '{print $1}')
# Check whether the user is online; -z tests whether the value is empty
if [ -z "$login_user" ]
then
	echo "$1 不在线!"
	echo "脚本退出..."
	exit
fi
# Check whether the user has the mesg function enabled
is_allowed=$(who -T | grep -i -m 1 "$1" | awk '{print $2}')
# "+" means mesg is enabled, "-" means it is disabled
if [ "$is_allowed" != "+" ]
then
	echo "$1 没有开启消息功能!"
	echo "脚本退出..."
	exit
fi
# Check whether there is a message to send
if [ -z "$2" ]
then
	echo "没有消息发送!"
	echo "脚本退出..."
	exit
fi
# Get the message from the arguments: $* is all arguments; cut splits on spaces and keeps fields 2 to the end (field 1 is the user name)
whole_msg=$(echo $* | cut -d " " -f 2-)
# Get the terminal the user is logged in on
user_terminal=$(who | grep -i -m 1 "$1" | awk '{print $2}')
# Write the message to that user's terminal
echo $whole_msg | write $login_user $user_terminal
# Check whether the message was sent successfully
if [ $? != 0 ]
then
	echo "发送失败!"
else
	echo "发送成功"
fi
exit
[root@hadoop100 scripts]# ./send_msg.sh ithailin what you name? 发送成功 [ithailin@hadoop100 ~]$ Message from root@hadoop100 on pts/0 at 17:26 ... what you name? EOF
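One detail the script leaves implicit is validating the number of arguments before $1 and $2 are used; a minimal hardening sketch that could sit near the top of send_msg.sh (my addition, not part of the original script):
# refuse to run unless a user name and at least one word of message are given
if [ $# -lt 2 ]
then
	echo "Usage: $0 <user> <message...>"
	exit 1
fi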