Channel: container cloud computing, DevOps, DBA, and network security.

Installing and configuring the ownCloud 10.3 cloud drive on CentOS 7.6


Installing and configuring the ownCloud 10.3 cloud drive on CentOS 7.6

Note: the minimum supported system is CentOS 7.5; the newest release is best.

The stack is Apache 2.4, PHP 7.2, and MariaDB. The detailed installation steps follow.

[root@k8s-master ~]#   cd /etc/yum.repos.d/

[root@k8s-master ~]#   wget https://mirrors.aliyun.com/ius/ius-7.repo
[root@k8s-master ~]#   wget https://mirrors.aliyun.com/ius/ius-archive-7.repo
[root@k8s-master ~]#   yum install php72u-cli php72u-common php72u-devel php72u-mysql php72u-xml php72u-odbc php72u php72u-mysqlnd mariadb-server mariadb sqlite php72u-dom php72u-mbstring php72u-gd php72u-pdo httpd24u

[root@k8s-master ~]#   systemctl enable httpd
[root@k8s-master ~]#   systemctl start httpd
 
[root@k8s-master ~]#   cd /var/www/html/
[root@k8s-master ~]#   wget https://download.owncloud.org/community/owncloud-10.3.0.zip

[root@k8s-master ~]#   yum install unzip
[root@k8s-master ~]#   unzip owncloud-10.3.0.zip

[root@k8s-master ~]#   chown -R apache:apache /var/www/html/

[root@k8s-master ~]#   yum install php72usamba recode php72u-intl php72u-bcmath php72u-soap php72u-xmlrpc php72u-opcache php72u-ldap php72u-json php72u-gmp
 
[root@k8s-master ~]#   yum install php72usamba recode php72u-intl php72u-bcmath php72u-soap php72u-xmlrpc php72u-opcache php72u-ldap php72u-json php72u-gmp php72u-zlib
[root@k8s-master ~]#   yum install zlib zlib-devel zip libxml openssl
[root@k8s-master ~]#   yum install php72usamba recode php72u-intl php72u-bcmath php72u-soap php72u-xmlrpc php72u-opcache php72u-ldap php72u-json php72u-gmp php72u-gd
[root@k8s-master ~]#   yum install zlib zlib-devel zip libxml openssl gd iconv
[root@k8s-master ~]#   yum install zlib zlib-devel zip libxml openssl gd iconv php72u-ctype
[root@k8s-master ~]#   systemctl restart httpd

Configure the MariaDB database

[root@k8s-master ~]#   systemctl restart mariadb
[root@k8s-master ~]#   systemctl enable mariadb
[root@k8s-master ~]#   mysql_secure_installation

[root@k8s-master ~]#   mysql -uroot -p
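Inside the MySQL prompt you would normally create a dedicated database and user for ownCloud before running the web installer. The database name, user name, and password below are placeholders, not values fixed by this article:

CREATE DATABASE owncloud;
CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost';
FLUSH PRIVILEGES;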


Finally, complete the installation through the ownCloud web installer.



BT (宝塔) panel: Apache fails to start after configuring SSL on a LAMP stack



After building a LAMP stack with the BT (宝塔) panel on an Alibaba Cloud lightweight server and configuring SSL for the sites, all of the sites became unreachable.

On inspection, Apache could not start, so none of the sites worked.

I used a free certificate from Alibaba Cloud and assembled the certificate in the BT panel (screenshot omitted).

Once that was configured, Apache's httpd service would no longer start; forcing HTTPS gave the same result.

In the end I configured the site's SSL settings by hand, after which the httpd service started normally.

The site configuration is as follows:

1. First, store the domain's SSL certificate files:

[root@iZj6c ~]#    mkdir  /www/server/apache/conf/ssl/

[root@iZj6c  ~]# ls /www/server/apache/conf/ssl/
  2894418_www.92cto.com_chain.crt  2894418_www.92cto.com.key  2894418_www.92cto.com_public.crt

2. Locate the site's configuration file and modify it as follows:

[root@iZj6c  ~]# ls /www/server/panel/vhost/apache/
      0.default.conf  92cto.com.conf  phpinfo.conf  

The full configuration file is shown below:

 [root@iZj6c  apache]# cat 92cto.com.conf
<VirtualHost *:80>
    ServerAdmin webmaster@example.com
    DocumentRoot "/www/wwwroot/92ctocom"
    ServerName c36458a3.92cto.com
    ServerAlias 92cto.com *.92cto.com
    #errorDocument 404 /404.html
    ErrorLog "/www/wwwlogs/92cto.com-error_log"
    CustomLog "/www/wwwlogs/92cto.com-access_log" combined
    #HTTP_TO_HTTPS_START
    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule (.*) https://%{SERVER_NAME}$1 [L,R=301]
    </IfModule>
    #HTTP_TO_HTTPS_END
    
    #DENY FILES
     <Files ~ (\.user.ini|\.htaccess|\.git|\.svn|\.project|LICENSE|README.md)$>
       Order allow,deny
       Deny from all
    </Files>
    
    #PHP
    <FilesMatch \.php$>
            SetHandler "proxy:unix:/tmp/php-cgi-54.sock|fcgi://localhost"
    </FilesMatch>
    
    #PATH
    <Directory "/www/wwwroot/92ctocom">
        SetOutputFilter DEFLATE
        Options FollowSymLinks
        AllowOverride All
        Require all granted
        DirectoryIndex index.php index.html
    </Directory>
</VirtualHost>


<VirtualHost *:443>
    ServerAdmin webmaster@example.com
    DocumentRoot "/www/wwwroot/92ctocom/"
    ServerName SSL.92cto.com
    ServerAlias *.92cto.com 92cto.com
    #errorDocument 404 /404.html
    ErrorLog "/www/wwwlogs/92cto.com-error_log"
    CustomLog "/www/wwwlogs/92cto.com-access_log" combined
    
    #SSL
    SSLEngine On
    #SSLCertificateFile /www/server/panel/vhost/cert/92cto.com/fullchain.pem
    #SSLCertificateKeyFile /www/server/panel/vhost/cert/92cto.com/privkey.pem
    #SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
    SSLProtocol All -SSLv2 -SSLv3
    SSLHonorCipherOrder On
    SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4
    SSLProxyCipherSuite HIGH:MEDIUM:!MD5:!RC4
    SSLProxyProtocol all -SSLv3
    #SSLProtocol TLSv1 +TLSv1.1 +TLSv1.2
    #SSLPassPhraseDialog builtin
    #SSLSessionCache "dbm:D:\phpStudy2018\PHPTutorial\Apache\logs\ssl_scache"
    #SSLSessionCache "shmcb:D:\phpStudy2018\PHPTutorial\Apache\logs\ssl_scache(512000)"
    SSLSessionCacheTimeout 300

    SSLCertificateFile "/www/server/apache/conf/ssl/2894418_www.92cto.com_public.crt"
    SSLCertificateKeyFile "/www/server/apache/conf/ssl/2894418_www.92cto.com.key"
    SSLCertificateChainFile "/www/server/apache/conf/ssl/2894418_www.92cto.com_chain.crt"

   
    #PHP
    <FilesMatch \.php$>
            SetHandler "proxy:unix:/tmp/php-cgi-54.sock|fcgi://localhost"
    </FilesMatch>
    

    #DENY FILES
     <Files ~ (\.user.ini|\.htaccess|\.git|\.svn|\.project|LICENSE|README.md)$>
       Order allow,deny
       Deny from all
    </Files>

    #PATH
    <Directory "/www/wwwroot/92ctocom/">
        SetOutputFilter DEFLATE
        Options FollowSymLinks
        AllowOverride All
        Require all granted
        DirectoryIndex index.php index.html
    </Directory>
</VirtualHost>


3. Finally, restart the httpd service and check that the sites are reachable again.

  [root@iZj6c  ~]#   /etc/init.d/httpd restart

You can also start the Apache service from the panel.
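Before restarting, it also helps to confirm that the edited vhost file parses cleanly, otherwise httpd will simply refuse to start again. A small sketch; the apachectl path is an assumption for a BT-panel install, where Apache usually lives under /www/server/apache:

/www/server/apache/bin/apachectl -t     # expect "Syntax OK"
/etc/init.d/httpd restart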

Deploying the Kuboard web UI on Kubernetes (k8s) v1.16.2


Deploying the Kuboard web UI on Kubernetes (k8s) v1.16.2


Prerequisites

Installing Kuboard assumes you already have a Kubernetes cluster.

If you do not have a Kubernetes cluster yet:

# Compatibility

Kubernetes version   Kuboard version   Notes
v1.16                v1.0              Verified
v1.15                v1.0              Verified
v1.14                v1.0              Verified
v1.13                v1.0              Verified
v1.12                v1.0              The Kubernetes API in v1.12 does not yet support dryRun; ignore Kuboard's parameter-validation errors when it executes commands and it works normally
v1.11                v1.0              Same as above

# Installation

Install Kuboard.

If you set up Kubernetes following the documentation on https://kuboard.cn, run the following on the master node:

kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
 

 

# Getting a Token

You can obtain a Token for an administrator user or a read-only user.

Permissions

  • This Token has ClusterAdmin privileges and can perform all operations.

Run the command:

# If you installed Kubernetes following the documentation at www.kuboard.cn, run this on the first master node
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')
 

Output

Take the token field from the output:

Name: admin-user-token-g8hxb
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: Kuboard-user
             kubernetes.io/service-account.uid: 948bb5e6-8cdc-11e9-b67e-fa163e5f7a0f

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWc4aHhiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NDhiYjVlNi04Y2RjLTExZTktYjY3ZS1mYTE2M2U1ZjdhMGYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.DZ6dMTr8GExo5IH_vCWdB_MDfQaNognjfZKl0E5VW8vUFMVvALwo0BS-6Qsqpfxrlz87oE9yGVCpBYV0D00811bLhHIg-IR_MiBneadcqdQ_TGm_a0Pz0RbIzqJlRPiyMSxk1eXhmayfPn01upPdVCQj6D3vAY77dpcGplu3p5wE6vsNWAvrQ2d_V1KhR03IB1jJZkYwrI8FHCq_5YuzkPfHsgZ9MBQgH-jqqNXs6r8aoUZIbLsYcMHkin2vzRsMy_tjMCI9yXGiOqI-E5efTb-_KbDVwV5cbdqEIegdtYZ2J3mlrFQlmPGYTwFI8Ba9LleSYbCi4o0k74568KcN_w
 

# Accessing Kuboard

You can access Kuboard either through the NodePort or through port-forward.

The Kuboard Service is exposed via NodePort 32567, so you can reach Kuboard at:

http://<IP of any worker node>:32567/

Enter the token obtained in the previous step to reach the Kuboard cluster overview page.

TIP

  • If you are on Alibaba Cloud, Tencent Cloud, or similar, open inbound access to port 32567 on the worker nodes in the security-group settings.

  • You can also edit kuboard.yaml and use a NodePort of your own choosing; the port-forward alternative is sketched below.
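A minimal port-forward sketch, assuming the Service is named kuboard in the kube-system namespace (verify with kubectl get svc -n kube-system before relying on it):

kubectl port-forward -n kube-system service/kuboard 8080:80
# then browse to http://localhost:8080/ and paste the token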

Next steps

 


Installing Docker 19.03.4, Kubernetes v1.16.2, and the Kuboard panel on CentOS 8


Installing Docker 19.03.4, Kubernetes v1.16.2, and the Kuboard panel on CentOS 8


Download centos8-boot.iso and install it in a virtual machine; the installation source can be the 163 or Aliyun mirror, which is not covered here.

1. Environment preparation (run on all hosts)

Disable firewalld:

 systemctl stop firewalld && systemctl disable firewalld 

Disable SELinux:

 setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 

Disable swap:

swapoff -a
echo "vm.swappiness = 0">> /etc/sysctl.conf
sed -i 's/.*swap.*/#&/' /etc/fstab
sysctl -p

Use the Aliyun yum repository:

 wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo 

Update /etc/hosts: on every host, add the IPs and hostnames of all k8s nodes to this file, otherwise warnings or even errors will appear during initialization.

echo "192.168.137.22 k8smaster" >> /etc/hosts


Switch the CentOS 8 system repos to a domestic mirror.

cd /etc/yum.repos.d/
 
sed -i 's/mirrorlist/\#mirrorlist/g' CentOS-Base.repo
 
sed -i 's/\#baseurl/baseurl/g' CentOS-Base.repo
 
sed -i 's/mirrorlist/#mirrorlist/g' CentOS-AppStream.repo
sed -i 's/#baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-AppStream.repo
sed -i 's/mirrorlist/#mirrorlist/g' CentOS-Extras.repo
sed -i 's/#baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-Extras.repo
sed -i 's/baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-Base.repo
 

Install the Aliyun Docker repository:
cd /etc/yum.repos.d/

curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o docker-ce.repo


List the available Docker versions:

yum list docker-ce --showduplicates | sort -r

Install the latest version of Docker. Note that Kubernetes v1.16.2 does not actually support the very latest Docker; Docker 18.x is the validated version.


wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.10-3.2.el7.x86_64.rpm
yum install  containerd.io-1.2.10-3.2.el7.x86_64.rpm
yum install -y docker-ce
systemctl enable docker --now


Configure Docker's daemon.json (create the file if it does not exist):

  
[root@k8smaster ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://a495m8mk.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
 


Install the related dependencies:

yum install -y yum-utils device-mapper-persistent-data lvm2


Configure the kernel parameters required by k8s:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl --system


sudo systemctl daemon-reload
sudo systemctl restart docker

Base images required by kubeadm

[root@apple ~]# kubeadm config images list --kubernetes-version v1.16.2

k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

We pull the images from Aliyun instead. Create a get_k8s_images.sh script with the contents below, make it executable, and run it:
chmod +x get_k8s_images.sh
./get_k8s_images.sh

#!/bin/bash
images=(
    kube-apiserver:v1.16.2
    kube-controller-manager:v1.16.2
    kube-scheduler:v1.16.2
    kube-proxy:v1.16.2
    pause:3.1
    etcd:3.3.15-0
    coredns:1.6.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

Install kubeadm, kubelet, and kubectl

kubeadm does not install kubelet or kubectl for you, so we need to install them manually (one way to make these packages available is sketched just below):
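A Kubernetes yum repository must be configured for the command below to find these packages. The Aliyun mirror shown here is an assumption on my part, not part of the original steps; you can also pin the versions to match v1.16.2:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# optionally pin to the version used in this article:
# yum install -y kubeadm-1.16.2 kubelet-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes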

yum install -y kubeadm kubelet kubectl  --disableexcludes=kubernetes

kubelet communicates with the rest of the cluster and manages the lifecycle of Pods and containers on this node.
kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and improves efficiency.
kubectl is the Kubernetes cluster-management CLI.

Finally, enable and start kubelet:

systemctl enable kubelet --now


Deploy the master node

Note: run the following on the master node.

We can confirm that version 1.16.2 was installed:

kubeadm version

Output:

kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b",
GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Image download

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

Running kubeadm config images list prints the required image versions:

[root@k8smaster ~]# kubeadm config images list --kubernetes-version v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
[root@k8smaster ~]#

Base images required by kubeadm

As before, we pull the images from Aliyun with the get_k8s_images.sh script:

chmod +x get_k8s_images.sh
./get_k8s_images.sh

#!/bin/bash
images=(
    kube-apiserver:v1.16.2
    kube-controller-manager:v1.16.2
    kube-scheduler:v1.16.2
    kube-proxy:v1.16.2
    pause:3.1
    etcd:3.3.15-0
    coredns:1.6.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

systemctl enable kubelet && systemctl start kubelet
systemctl daemon-reload
systemctl restart kubelet


Because the required images cannot be pulled directly, they are mirrored and retagged to the expected k8s.gcr.io names.


wget https://cbs.centos.org/repos/paas7-crio-115-release/x86_64/os/Packages/cri-o-1.15.1-2.el7.x86_64.rpm

rpm -Uvh cri-o-1.15.1-2.el7.x86_64.rpm --nodeps


systemctl daemon-reload
systemctl start crio.service
systemctl daemon-reload

Initialize the Kubernetes cluster on the master


kubeadm init --kubernetes-version=1.16.2 --apiserver-advertise-address=192.168.137.22 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16



  1. --kubernetes-version: specifies the k8s version.

  2. --apiserver-advertise-address: the IP address kube-apiserver listens on, i.e. the master's own IP.

  3. --pod-network-cidr: the Pod network range, e.g. 10.244.0.0/16; it can be omitted and will be added automatically.

  4. --service-cidr: the Service (SVC) network range.

  5. --image-repository: the Aliyun image-repository address.

This step is critical: kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, so --image-repository must point at the Aliyun mirror.

When initialization succeeds, output like the following is returned.
Record the last part of the output; it must be run on the other nodes when they join the Kubernetes cluster.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.10:6443 --token kehvmq.e33d33lgkrm8h0rn \
    --discovery-token-ca-cert-hash sha256:6150e7960c44890d5dd6b160bbbb4bfa256023db22f004b54d27e1cca72b0afc 
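The init output asks for a pod-network add-on. Flannel is one commonly used option; the manifest URL below is the upstream one and is my assumption, not something the original article specifies (it should match --pod-network-cidr if you set one):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml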

Based on this output there are still a few follow-up tasks; some errors may appear, which you can fix according to your own situation.

                           

Cgroup drivers in Docker: cgroupfs vs. systemd

During the Kubernetes installation you may see:

failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

kubelet defaults to the cgroupfs driver, while the Docker we installed uses systemd; the mismatch prevents the containers from starting.

Check with docker info:

Cgroup Driver: systemd

There are two ways to fix this: change Docker, or change kubelet.

Changing Docker:

Edit or create /etc/docker/daemon.json and add:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl restart docker
systemctl status docker

Changing kubelet:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Add --cgroup-driver=systemd to the kubelet arguments (for example via KUBELET_EXTRA_ARGS).

 

Or:

# Configure kubelet to use a domestic pause image and match Docker's cgroup driver
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat >/etc/sysconfig/kubelet <<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=$DOCKER_CGROUPS"
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
# start
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet

Or:

DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat >/etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
# start
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet

References:

https://www.cnblogs.com/sparkdev/p/9523194.html

https://www.jianshu.com/p/02dc13d2f651

Author: hongda

Source: https://www.cnblogs.com/hongdada/p/9771857.html

Copyright: the source site uses the Creative Commons Attribution 4.0 International license; when reposting, credit the author and source prominently.




Configure the kubectl tool

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, you can install a Kubernetes dashboard; here I use the Kuboard panel, which works very well.

kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl get svc -A
kubectl get pods -o wide -A
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')
 

Building a highly available virtualization cluster with Proxmox VE and Ceph storage


Building a highly available virtualization platform with Proxmox VE and Ceph storage


As the number of projects grows, the demand for test environments keeps increasing: today R&D wants a few test machines, tomorrow QA wants a few, even the product team asks for one, and we in operations need our own as well. The ESXi setup we built earlier can no longer keep up.

(screenshot: the existing ESXi environment)

A few idle machines happened to be available on the internal network, so I set about building another virtualization platform. ESXi is easy to use but costs money (we had been running a cracked copy), and out of respect for licensing we went open source. Looking around, the currently active KVM virtualization platforms are OpenStack, Proxmox VE, and oVirt. For internal testing OpenStack is overkill; Proxmox VE is essentially a customized Debian, a model I am not that fond of and a distribution I am not very familiar with; personally I leaned towards oVirt, because oVirt aims squarely at vCenter and its relationship to RHEV is a bit like Fedora's to RHEL, so CentOS 7 + oVirt + GlusterFS would have been a good solution. Unfortunately the retired machines were simply too old and oVirt refused the hardware, so Proxmox VE it is.

With limited hardware, three machines form the cluster here: Proxmox VE plus Ceph storage make up a highly available virtualization platform. The Proxmox VE installation itself is not covered because it is trivial: write the downloaded proxmox-ve_5.2-1.iso to a USB stick, boot from it, click through the installer, and set the root password, IP, and hostname; the hostname must be written as an FQDN.


# Configure the hosts files

root@proxmox233:~# cat >> /etc/hosts << EOF
192.168.5.232 proxmox232.blufly.com proxmox232
192.168.5.231 proxmox231.blufly.com proxmox231
EOF

root@proxmox232:~# cat >> /etc/hosts << EOF
192.168.5.233 proxmox233.blufly.com proxmox233
192.168.5.231 proxmox231.blufly.com proxmox231
EOF

root@proxmox231:~# cat >> /etc/hosts << EOF
192.168.5.232 proxmox232.blufly.com proxmox232
192.168.5.233 proxmox233.blufly.com proxmox233
EOF

# Update the Debian system

rm -f /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" >/etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
apt-get install net-tools

Set up time synchronization:

apt-get install ntpdate
ntpdate 120.25.108.11
echo "0 * * * * /usr/sbin/ntpdate 120.25.108.11 > /dev/null 2>&1" >> /etc/crontab

# Configure passwordless SSH access (optional; trust is established automatically when nodes join the cluster)

root@proxmox231:~# ssh-keygen -t rsa
root@proxmox231:~# ssh-copy-id root@proxmox231
root@proxmox231:~# ssh-copy-id root@proxmox232
root@proxmox231:~# ssh-copy-id root@proxmox233

root@proxmox232:~# ssh-keygen -t rsa
root@proxmox232:~# ssh-copy-id root@proxmox231
root@proxmox232:~# ssh-copy-id root@proxmox232
root@proxmox232:~# ssh-copy-id root@proxmox233

root@proxmox233:~# ssh-keygen -t rsa
root@proxmox233:~# ssh-copy-id root@proxmox231
root@proxmox233:~# ssh-copy-id root@proxmox232
root@proxmox233:~# ssh-copy-id root@proxmox233

# Create the pve-cluster cluster on 192.168.5.231

root@proxmox231:~# pvecm create pve-cluster

# Next, SSH into the other two PVE nodes and run: pvecm add 192.168.5.231

root@proxmox233:~# pvecm add 192.168.5.231
successfully added node 'proxmox233' to cluster.

root@proxmox232:~# pvecm add 192.168.5.231
successfully added node 'proxmox232' to cluster.

# pvecm status shows the cluster state from any node

root@proxmox231:~# pvecm status
Quorum information
------------------
Date:             Fri Sep 28 15:39:20 2018
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes
Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 
Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.5.231 (local)
0x00000003          1 192.168.5.232
0x00000002          1 192.168.5.233


Proxmox supports two kinds of disks: disks local to the server, and disks on external storage devices. Local disks can be configured as a local directory, ZFS, LVM, and a few other forms.

Disks on external storage can be attached to the Proxmox server over NFS, iSCSI, or FC. An NFS mount can be used directly as file storage; disks attached over iSCSI or FC are seen as raw block devices and need further configuration, for example as LVM for volume storage or as a local directory for file storage. It is strongly recommended not to configure them as ZFS, because ZFS expects to manage the physical disks directly, and a RAID controller in between will seriously interfere with its operation.

You can of course also use server-based distributed storage such as GlusterFS, Ceph, or Sheepdog. GlusterFS can be mounted directly from the menu; Ceph is mounted via the iSCSI protocol; Sheepdog requires manually installing its plugin and then configuring the mount on the command line. A GlusterFS mount can serve as file storage, while Ceph and Sheepdog essentially serve as volume storage only.

File storage and volume storage have come up several times, so what is the difference? Proxmox has several storage needs: virtual disks come in raw, qcow2, and vmdk formats, and there are also ISO images, container templates, and VM backups to store, all of which require file storage. Alternatively, VM disks can be kept directly on LVM logical volumes, zvols, or RBD volumes, equivalent to the raw format, which is what volume storage provides.

So how should everything be configured and chosen? A typical mapping looks like this:

local disk - local directory - file storage

local disk - LVM - volume storage

local disk - ZFS - volume storage / file storage

local disk - Ceph - volume storage

external storage - NFS - file storage

external storage - iSCSI/FC - LVM - volume storage

external storage - iSCSI/FC - directory - file storage

external GlusterFS - GlusterFS plugin mount - file storage

external Ceph - iSCSI - volume storage

external Sheepdog - plugin mount - volume storage


# Install Ceph on every node; see https://pve.proxmox.com/pve-docs/chapter-pveceph.html

root@proxmox231:~# pveceph install --version luminous
root@proxmox232:~# pveceph install --version luminous
root@proxmox233:~# pveceph install --version luminous

# Configure the Ceph cluster storage network

root@proxmox231:~# pveceph init --network 192.168.5.0/24

# Create the Ceph monitors (Mon)

root@proxmox231:~# pveceph createmon
root@proxmox232:~# pveceph createmon
root@proxmox233:~# pveceph createmon

# Create the managers (mgr)

root@proxmox231:~# pveceph createmgr
root@proxmox232:~# pveceph createmgr
root@proxmox233:~# pveceph createmgr

# Create the Ceph OSDs

root@proxmox231:~# pveceph createosd /dev/sdb
root@proxmox232:~# pveceph createosd /dev/sdb
root@proxmox233:~# pveceph createosd /dev/sdb

# Create the storage pool: ceph osd pool create [pool name] 128 128

root@proxmox231:~# ceph osd pool create pvepool 128 128
pool 'pvepool' created

# Copy the storage ID and keyring to the expected locations

root@proxmox231:~# mkdir /etc/pve/priv/ceph
root@proxmox231:~# cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph.keyring
root@proxmox231:~# cp /etc/pve/priv/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph1.keyring
root@proxmox231:~# ceph osd pool application enable pvepool rbd
enabled application 'rbd' on pool 'pvepool'

# Check the cluster status

root@proxmox231:~# ceph -s
  cluster:
    id:     2cd9afcd-fd20-4e52-966b-3252c6444e6c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum proxmox231,proxmox232,proxmox233
    mgr: proxmox231(active), standbys: proxmox232, proxmox233
    osd: 3 osds: 3 up, 3 in

# Add the RBD cluster storage in the web UI (screenshots omitted)


ID: enter ceph (required; cannot be changed later)

Pool: pvepool (optional; defaults to rbd)

Monitor: 192.168.5.231 192.168.5.232 192.168.5.233 (separate multiple monitors with spaces)

Nodes: proxmox231, proxmox232, proxmox233


# View the RBD storage configuration

root@proxmox231:~# cat /etc/pve/storage.cfg 
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup
lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
rbd: ceph
        content images,rootdir
        krbd 0
        nodes proxmox233,proxmox231,proxmox232
        pool pvepool

# Upload ISO images via SFTP to /var/lib/vz/template/iso. ISOs uploaded this way are only visible on that host; to share ISOs with the other cluster members you need shared storage. The internal network happens to have an NFS server (used by the ESXi storage) that already holds ISO images, so that NFS export is mounted at the datacenter level (a CLI sketch follows; screenshots omitted).
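The article adds the NFS storage through the web UI; the command-line equivalent could look roughly like this. The storage ID, server address, and export path are placeholders, not values from the original setup:

pvesm add nfs nfs-iso --server 192.168.5.10 --export /export/iso --content iso
pvesm status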


# Create a new VM using the Ceph storage (screenshots omitted)


# Live migration test

A CentOS 7 VM was just created on proxmox231 with its disk on Ceph, so let's test live migration first by moving it from proxmox231 to proxmox233.


2018-09-29 15:50:16 starting migration of VM 100 to node 'proxmox233' (192.168.5.233)

2018-09-29 15:50:16 found local disk 'local:iso/CentOS-7-x86_64-DVD-1804.iso' (in current VM config)

2018-09-29 15:50:16 can't migrate local disk 'local:iso/CentOS-7-x86_64-DVD-1804.iso': can't live migrate attached local disks without with-local-disks option

2018-09-29 15:50:16 ERROR: Failed to sync data - can't migrate VM - check log

2018-09-29 15:50:16 aborting phase 1 - cleanup resources

2018-09-29 15:50:16 ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't migrate VM - check log

TASK ERROR: migration aborted


# The migration failed because a local ISO was still attached from the installation; before migrating, edit the VM configuration and set the CD/DVD drive to use no media.



# Then migrate again


# The VM that was running on proxmox231 has now been migrated to proxmox233.



# Add the VM to HA and test high availability



# The k8s71.blufly.com VM on proxmox233 has been added to HA; now shut down proxmox233 to simulate a failure.



# proxmox233 is now down and the k8s71.blufly.com VM has been migrated to proxmox231, demonstrating high availability.


This was only a simple test of Proxmox, but it basically covers day-to-day needs; the more advanced features can be explored later.

Basic ownCloud optimization


Basic ownCloud optimization

 

Contents

  1. Foreword
  2. Preparation
  3. Configure Redis
  4. Configure CRON
  5. Configure HTTPS
  6. Configure HSTS
  7. Configure Memcache
  8. Configure SMBClient

Foreword

After the installation, opening the admin page revealed a few small issues:

  • Transactional file locking should use memory-based locking instead of the slower default database-based locking. See the documentation for details.
  • We recommend enabling system cron; any other cron method may affect performance and reliability.
  • You are accessing this site over HTTP. We strongly recommend configuring the server to require HTTPS as described in the security tips.
  • The Strict-Transport-Security HTTP header is not set to at least "15552000" seconds. For enhanced security we recommend enabling HSTS as described in the security tips.
  • No memory cache is configured. If available, configure a memcache to improve performance. See the documentation for more information.

For more installation details see the ownCloud deployment post.

Preparation

  • A CA-issued certificate. Here I use a certificate from Tencent Cloud, which is issued by TrustAsia.

Configure Redis

#yum install php70w-pecl-redis -y   // install the PHP Redis extension
#yum install redis-server -y        // install Redis
#systemctl enable redis
#systemctl start redis
#vim /var/www/html/owncloud/config/config.php   // edit the ownCloud config file

Add the lines in the Redis Configure section:

<?php
$CONFIG = array (
  'instanceid' => 'ocxhyr8g8cb6',
  'passwordsalt' => 'NRY2nwU6iSmPqQEtLxS6J8PnTY/2+0',
  'secret' => 'wqlck4NWOxE9wRqVm7AWRlb7fN8S95zI2hYk7EP4JFhWnTyL',
  'trusted_domains' =>
  array (
    0 => '10.4.22.71',
    1 => 'owncloud.cloud.cocobike.cn',
  ),
  'datadirectory' => '/var/ownclouddata',
  'overwrite.cli.url' => 'http://10.4.22.71/owncloud',
  'dbtype' => 'mysql',
  'version' => '10.0.3.3',
  'dbname' => 'owncloud',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'owncloud',
  'dbpassword' => 'password',
  'logtimezone' => 'UTC',
  'installed' => true,
  
  // Redis Configure Start
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => array(
  'host' => 'localhost',
  'port' => 6379,
  ),
  // Redis Configure End
);
#systemctl restart httpd   // restart Apache

Configure CRON

#crontab -e -u apache   // once it opens, add the following line
*/15 * * * * php -f /var/www/html/owncloud/cron.php

Official documentation link
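Assuming the standard occ tool that ships with ownCloud, you can also switch the background-job mode to cron from the command line instead of the admin UI. A small sketch; the path matches the install location used in this article:

sudo -u apache php /var/www/html/owncloud/occ background:cron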

Configure HTTPS

#yum install mod_ssl -y   // install the Apache SSL/TLS module
#vim /etc/httpd/conf.d/ssl.conf   // edit the config file and point it at the key paths

SSLCertificateFile is the path to your certificate (public key)
SSLCertificateKeyFile is the path to your private key
SSLCACertificateFile is the path to your CA certificate
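For reference, the three directives inside the <VirtualHost _default_:443> block of ssl.conf would end up looking roughly like this; the /root/sslKey file names are placeholders for wherever you stored the certificate files:

SSLCertificateFile /root/sslKey/owncloud.cloud.cocobike.cn.crt
SSLCertificateKeyFile /root/sslKey/owncloud.cloud.cocobike.cn.key
SSLCACertificateFile /root/sslKey/ca-bundle.crt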

#chmod 0400 /root/sslKey/*   // restrict the key files to read-only
#vim /etc/httpd/conf/httpd.conf   // edit the main config file

ServerName: set your domain, e.g. owncloud.cloud.cocobike.cn (line 95)
DocumentRoot: set your document root, "/var/www/html/owncloud" (line 119)
AllowOverride: change None to All to enable .htaccess (line 151)

#httpd -t   // check that the httpd.conf syntax is valid
#systemctl restart httpd

Configure HSTS

#vim /etc/httpd/conf/httpd.conf   // append the block below at the end
#systemctl restart httpd
<IfModule mod_headers.c>
  Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"
</IfModule>

Configure Memcache

#yum install memcached -y
#yum install php70w-pecl-memcached -y
#systemctl enable memcached
#systemctl start memcached

Configure config.php

#vim /var/www/html/owncloud/config/config.php

Add the lines in the Memcache Configure section:

<?php
$CONFIG = array (
  'instanceid' => 'ocxhyr8g8cb6',
  'passwordsalt' => 'NRY2nwU6iSmPqQEtLxS6J8PnTY/2+0',
  'secret' => 'wqlck4NWOxE9wRqVm7AWRlb7fN8S95zI2hYk7EP4JFhWnTyL',
  'trusted_domains' =>
  array (
    0 => '10.4.22.71',
    1 => 'owncloud.cloud.cocobike.cn',
  ),
  'datadirectory' => '/var/ownclouddata',
  'overwrite.cli.url' => 'http://10.4.22.71/owncloud',
  'dbtype' => 'mysql',
  'version' => '10.0.3.3',
  'dbname' => 'owncloud',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'owncloud',
  'dbpassword' => 'password',
  'logtimezone' => 'UTC',
  'installed' => true,
  
  // Redis Configure Start
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => array(
  'host' => 'localhost',
  'port' => 6379,
  ),
  // Redis Configure End
  
  // Memcache Configure Start
  'memcache.local' => '\\OC\\Memcache\\Memcached',
  'memcached' => array(
      'host' => 'localhost',
      'port' => 11211,
  ),
  // Memcache Configure End
);

Configure SMBClient

This component is not in the RPM repository used earlier, so a different one is needed:

#rpm -Uvh https://centos7.iuscommunity.org/ius-release.rpm
#yum install php70u-pecl-smbclient -y
#systemctl restart httpd

Original author: KangKang

Original link: https://heyikang.me/2017/12/16/ownCloud-Optimization/

License: Creative Commons Attribution-NonCommercial 4.0 International

ownCloud CentOS7

Installing ownCloud 10.3 with ONLYOFFICE online document editing on CentOS 7 / CentOS 8


Installing ownCloud 10.3 with ONLYOFFICE online document editing on CentOS 7 / CentOS 8


ownCloud works well for sharing documents internally, but to edit documents you can add ONLYOFFICE plus the ONLYOFFICE Desktop Editors.

For the ownCloud installation itself, see https://www.92cto.com/blog/2169.html.


 [root@k8s-master]#   cd /var/www/html/owncloud/
 [root@k8s-master]#   cd apps/
 [root@k8s-master]#   yum install git
 [root@k8s-master]#   git clone https://github.com/ONLYOFFICE/onlyoffice-owncloud.git onlyoffice
 [root@k8s-master]#   chown -R apache:apache onlyoffice


For the ONLYOFFICE Document Server I use a public image hosted on Alibaba Cloud's registry (you can search for it yourself), or you can use the official image directly.

[root@k8s ~]# podman run -i -t -d -p 7808:80 -p 7843:443 --name office-doc registry.cn-hangzhou.aliyuncs.com/thundersdata-public/onlyoffice-documentserver-chinese-fonts:5.4.0.21
4d22f593580a8814a3bd5171b7ddd47a53b4db89ff986daf0c0a9cabaa97f0ab

[root@k8s ~]#
[root@k8s ~]# podman ps
CONTAINER ID  IMAGE                                                                                                   COMMAND               CREATED        STATUS            PORTS                                        NAMES
4d22f593580a  registry.cn-hangzhou.aliyuncs.com/thundersdata-public/onlyoffice-documentserver-chinese-fonts:5.4.0.21  /bin/sh -c /app/o...  6 minutes ago  Up 5 minutes ago  0.0.0.0:7808->80/tcp, 0.0.0.0:7843->443/tcp  office-doc

Here I use podman, which ships with CentOS 8 and works the same way as Docker; you could equally use Docker on CentOS 7, there is no real difference between the two.

Open the document server at http://192.168.137.18:7808/ ; if the welcome page appears, the installation is working. As with Docker, container directories can be mounted onto host directories, which is not covered here.

Finally, enable ONLYOFFICE in ownCloud and fill in the ONLYOFFICE web endpoint, which in my case is http://192.168.137.18:7808/ .


2. Enabling OwnCloud ONLYOFFICE integration app

Place OwnCloud ONLYOFFICE integration app into the /apps directory on your OwnCloud server:

cd apps/ git clone https://github.com/ONLYOFFICE/onlyoffice-owncloud.git onlyoffice

Go to OwnCloud, open the page with Not enabled apps and click Enable for the ONLYOFFICE application.


3. Configuring OwnCloud ONLYOFFICE integration app

Go to the OwnCloud Admin panel, open the ONLYOFFICE section and enter the address of the server where the ONLYOFFICE Document Server is installed:
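If you prefer the command line over the admin panel, the integration app can also be enabled and pointed at the Document Server with occ. This is a sketch based on the connector's documented settings; verify the key name against the ONLYOFFICE app you cloned:

cd /var/www/html/owncloud
sudo -u apache php occ app:enable onlyoffice
sudo -u apache php occ config:app:set onlyoffice DocumentServerUrl --value="http://192.168.137.18:7808/"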

 


After saving, additional connection settings appear below, which you can adjust to suit your environment.

Finally, you can download ONLYOFFICE Desktop Editors to use as a client-side document editor.

Back in your personal file space there is now a new option, and opening supported documents takes you straight into the web editor.

You can also simply click a file: supported file types are no longer downloaded but open directly in the ONLYOFFICE online editing page, where you can edit the document.

You can close the window at any time without clicking save; the next time you open the file it already contains your changes, which is very convenient.




Fixing an unresponsive Start menu on Windows 10 / Windows Server 2016

After installing Windows 10 / Windows Server 2016 on a laptop, the machine froze during use and afterwards the Start menu would not open; clicking it just hung. Here is how it was fixed.


Open Event Viewer and check the Windows Application log; it shows the following errors:

1,

错误应用程序名称: SearchUI.exe,版本: 10.0.14393.2430,时间戳: 0x5b691c85
错误模块名称: KERNELBASE.dll,版本: 10.0.14393.3269,时间戳: 0x5d9133fb
异常代码: 0x00000004
错误偏移量: 0x0000000000034c48
错误进程 ID: 0x2c08
错误应用程序启动时间: 0x01d594581a9dc8e8
错误应用程序路径: C:\Windows\SystemApps\Microsoft.Windows.Cortana_cw5n1h2txyewy\SearchUI.exe
错误模块路径: C:\Windows\System32\KERNELBASE.dll
报告 ID: a9a24f4c-c847-4a1f-91a1-11584661050c
错误程序包全名: Microsoft.Windows.Cortana_1.7.0.14393_neutral_neutral_cw5n1h2txyewy
错误程序包相对应用程序 ID: CortanaUI


2,

激活应用 Microsoft.Windows.ShellExperienceHost_cw5n1h2txyewy!App 失败,错误: 远程过程调用失败。 请查看 Microsoft-Windows-TWinUI/运行日志以了解其他信息。


3,

错误存储段 2000106270865921514,类型 5
事件名称: MoAppCrash
响应: 不可用
Cab Id: 0

问题签名:
P1: Microsoft.Windows.ShellExperienceHost_10.0.14393.2068_neutral_neutral_cw5n1h2txyewy
P2: praid:App
P3: 10.0.14393.2339
P4: 5b1f1748
P5: KERNELBASE.dll
P6: 10.0.14393.3269
P7: 5d9133fb
P8: 00000004
P9: 0000000000034c48
P10:

附加文件:
\\?\C:\Users\Administrator\AppData\Local\Temp\WERF68E.tmp.WERInternalMetadata.xml

可在此处获取这些文件:
C:\ProgramData\Microsoft\Windows\WER\ReportArchive\AppCrash_Microsoft.Window_4113d9ba36af3b272c2fe2398afae230c961ef40_6922adae_2e0ffecb

分析符号:
重新检查解决方案: 0
报告 Id: a73a3ce6-25d0-42ef-9c53-760986299f95
报告状态: 0
哈希存储段: 354ddb2031b52ae7abc1ce0e6bfe95ea


4,

错误应用程序名称: ShellExperienceHost.exe,版本: 10.0.14393.2339,时间戳: 0x5b1f1748
错误模块名称: KERNELBASE.dll,版本: 10.0.14393.3269,时间戳: 0x5d9133fb
异常代码: 0x00000004
错误偏移量: 0x0000000000034c48
错误进程 ID: 0x2050
错误应用程序启动时间: 0x01d5945817912a5b
错误应用程序路径: C:\Windows\SystemApps\ShellExperienceHost_cw5n1h2txyewy\ShellExperienceHost.exe
错误模块路径: C:\Windows\System32\KERNELBASE.dll
报告 ID: a73a3ce6-25d0-42ef-9c53-760986299f95
错误程序包全名: Microsoft.Windows.ShellExperienceHost_10.0.14393.2068_neutral_neutral_cw5n1h2txyewy
错误程序包相对应用程序 ID: App


5,

错误应用程序名称: SearchUI.exe,版本: 10.0.14393.2430,时间戳: 0x5b691c85
错误模块名称: KERNELBASE.dll,版本: 10.0.14393.3269,时间戳: 0x5d9133fb
异常代码: 0x00000004
错误偏移量: 0x0000000000034c48
错误进程 ID: 0x2634
错误应用程序启动时间: 0x01d59453b2c609cf
错误应用程序路径: C:\Windows\SystemApps\Microsoft.Windows.Cortana_cw5n1h2txyewy\SearchUI.exe
错误模块路径: C:\Windows\System32\KERNELBASE.dll
报告 ID: e198277d-745f-4c0f-af7b-4c36c55d3249
错误程序包全名: Microsoft.Windows.Cortana_1.7.0.14393_neutral_neutral_cw5n1h2txyewy
错误程序包相对应用程序 ID: CortanaUI



II. From the logs above, the failures of ShellExperienceHost.exe and Windows.Cortana are the cause.


1. Download the latest official version of the Win10 Start Menu repair tool:

               https://www.cr173.com/soft/280320.html

       http://cycy.198424.com/wskscdxf.zip


2. Extract the package and run the tool. In my case it reported that the Microsoft.Windows.ShellExperienceHost and "Microsoft.Windows.Cortana" applications had problems and needed repairing.


3. Further research shows that two commands need to be run in PowerShell.

   Start PowerShell as administrator and run the following two commands:

    
Get-AppXPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "C:\Windows\SystemApps\ShellExperienceHost_cw5n1h2txyewy\AppXManifest.xml"}


 Get-AppXPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}


After running them, the Start menu opened normally again. If it still does not, reboot the system and try again; restarting the Explorer shell, as sketched below, can also help.
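A hedged sketch of restarting the Explorer shell from the same elevated PowerShell session, as an alternative to a full reboot (standard Windows commands, not something the original article prescribes):

Stop-Process -Name explorer -Force   # Explorer normally restarts on its own
Start-Process explorer.exe           # start it explicitly if it does not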

Installing RouterOS (ROS) on Alibaba Cloud / VPS / cloud hosts with a single dd command


Installing RouterOS (ROS) on Alibaba Cloud / VPS / cloud hosts with a single dd command


This ROS CHR build supports the Virtio network device. Other NIC models work too, but the RouterOS image must carry a driver for that NIC, otherwise no network card is found after installation. On CentOS you can check the NIC driver with the lspci command.

1. Alibaba Cloud environment, CentOS 6.9 x64:
The internal NIC is eth0.
Under Linux on Alibaba Cloud the disk is named /dev/vda.
Note: in the Alibaba Cloud security group it is advisable to allow all protocols and ports from any IP.

2. After installing ROS (chr-6.39.2.img):
The internal NIC becomes ether1.
The details above matter: adapt the script to the actual NICs of your VPS.

wget http://download2.mikrotik.com/routeros/6.39.2/chr-6.39.2.img.zip -O chr.img.zip && \
gunzip -c chr.img.zip > chr.img && \
mount -o loop,offset=33554944 chr.img /mnt && \
ADDRESS0=`ip addr show eth0 | grep global | cut -d' ' -f 6 | head -n 1` && \
GATEWAY0=`ip route list | grep default | cut -d' ' -f 3` && \
echo "/ip address add address=$ADDRESS0 interface=[/interface ethernet find where name=ether1]
/ip route add gateway=$GATEWAY0
" > /mnt/rw/autorun.scr && \
umount /mnt && \
echo u > /proc/sysrq-trigger && \
dd if=chr.img bs=1024 of=/dev/vda && \
reboot

Command explanation:
1. wget downloads the CHR image from the official ROS site and names it chr.img.zip.
This build supports "Ethernet controller: Red Hat, Inc. Virtio network device"; other NIC models are possible,
but the RouterOS image must have a driver for them, otherwise no NIC is found after installation.
On CentOS you can check the NIC driver with the lspci command:

[root@iZj6c38tf0alwq9klk2aweZ ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:04.0 Communication controller: Red Hat, Inc. Virtio console
00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:06.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon 


2. gunzip decompresses chr.img.zip to chr.img.
3. The chr.img image is mounted under /mnt.
4. The IP address of eth0 is captured into ADDRESS0.
7. The default gateway from ip route is captured into GATEWAY0.
8. The echo writes ROS commands that assign the internal IP to the internal NIC and set the default gateway,
and stores them in /mnt/rw/autorun.scr; you can do a lot more here if you like.
9. umount /mnt unmounts the mounted filesystem.
10. echo u > /proc/sysrq-trigger immediately remounts all filesystems read-only.
11. dd copies the file in fixed-size blocks, converting as it copies:
if=FILE: the input (source) file, standard input by default.
of=FILE: the output (destination) file, standard output by default.
12. reboot restarts the machine.


Quickly installing MikroTik RouterOS (ROS) on an Alibaba Cloud ECS instance

*The following is adapted from https://www.cnblogs.com/itfat/p/8183644.html

*Note: in the Alibaba Cloud security group it is advisable to allow all protocols and ports from any IP.
*Today's test on a 2C4G Alibaba Cloud instance kept failing, while 1C1G worked fine. Keep that in mind.

Alibaba Cloud environment, CentOS 6.9 x64:
The internal NIC is eth0.
The external NIC is eth1.
Under Linux on Alibaba Cloud the disk is named /dev/vda.

After installing ROS (chr-6.39.2.img):
The internal NIC becomes ether1.
The external NIC becomes ether2.
The details above matter: adapt the script to the actual NICs of your VPS.

Script:

wget http://download2.mikrotik.com/routeros/6.39.2/chr-6.39.2.img.zip -O chr.img.zip && \
gunzip -c chr.img.zip > chr.img && \
mount -o loop,offset=33554944 chr.img /mnt && \
ADDRESS0=`ip addr show eth0 | grep global | cut -d' ' -f 6 | head -n 1` && \
ADDRESS1=`ip addr show eth1 | grep global | cut -d' ' -f 6 | head -n 1` && \
GATEWAY0=`ip route list | grep '10.0.0.0/8' | cut -d' ' -f 3` && \
GATEWAY1=`ip route list | grep default | cut -d' ' -f 3` && \
echo "/ip address add address=$ADDRESS0 interface=[/interface ethernet find where name=ether1]
/ip address add address=$ADDRESS1 interface=[/interface ethernet find where name=ether2]
/ip route add dst-address=10.0.0.0/8 gateway=$GATEWAY0
/ip route add dst-address=100.64.0.0/10 gateway=$GATEWAY0
/ip route add dst-address=172.16.0.0/12 gateway=$GATEWAY0
/ip route add gateway=$GATEWAY1
" > /mnt/rw/autorun.scr && \
umount /mnt && \
echo u > /proc/sysrq-trigger && \
dd if=chr.img bs=1024 of=/dev/vda && \
reboot

Command explanation:

1. wget downloads the CHR image from the official ROS site and names it chr.img.zip; it is best to host the download yourself over HTTP, for example on an Alibaba Cloud OSS bucket:
http://lbros.oss-cn-hangzhou.aliyuncs.com
2. gunzip decompresses chr.img.zip to chr.img.
3. The chr.img image is mounted under /mnt.
4. The IP address of eth0 is captured into ADDRESS0.
5. The IP address of eth1 is captured into ADDRESS1.
6. The 10.0.0.0/8 gateway from ip route is captured into GATEWAY0.
7. The default gateway from ip route is captured into GATEWAY1.
8. The echo writes ROS commands that assign the internal IP to the internal NIC and the external IP to the external NIC, and set the default gateway plus routes to Alibaba Cloud's internal ranges (10.0.0.0/8, 100.64.0.0/10, 172.16.0.0/12); they are stored in /mnt/rw/autorun.scr, where you can do a lot more if you like.
9. umount /mnt unmounts the mounted filesystem.
10. echo u > /proc/sysrq-trigger immediately remounts all filesystems read-only.
11. dd copies the file in fixed-size blocks, converting as it copies:
if=FILE: the input (source) file, standard input by default.
of=FILE: the output (destination) file, standard output by default.
12. reboot restarts the machine.

 


Examples of using firewalld, the dynamic firewall on CentOS 7


Examples of using firewalld, the dynamic firewall on CentOS 7


# systemctl start firewalld         # start
# systemctl enable firewalld        # start at boot
# systemctl stop firewalld          # stop
# systemctl disable firewalld       # do not start at boot

Rules themselves are managed with firewall-cmd; see its help for the details:

$ firewall-cmd --help

--zone=NAME                         # specify the zone
--permanent                         # permanent change, takes effect after --reload
--timeout=seconds                   # temporary effect, removed automatically when it expires; useful for debugging, cannot be combined with --permanent
1. Viewing rules

Check the running state:

$ firewall-cmd --state

List the active zones:

$ firewall-cmd --get-active-zones
public
  interfaces: eth0 eth1

Show the zone of a given interface:

$ firewall-cmd --get-zone-of-interface=eth0
public

List the interfaces in a given zone:

$ firewall-cmd --zone=public --list-interfaces
eth0

Show everything about a given zone, e.g. public:
$ firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eth0
  sources:
  services: dhcpv6-client http ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

List all predefined services:

$ firewall-cmd --get-service

List the services permanently allowed across all zones, i.e. those that persist after a reload:

$ firewall-cmd --get-service --permanent
2. Managing rules
# firewall-cmd --panic-on           # panic mode: drop all traffic
# firewall-cmd --panic-off          # turn panic mode off
# firewall-cmd --query-panic        # query the panic state
# firewall-cmd --reload             # reload rules without restarting the service
# firewall-cmd --complete-reload    # reload rules and restart the service

Add an interface to a zone, e.g. add eth0 to public, permanently:

# firewall-cmd --zone=public --add-interface=eth0 --permanent

Set public as the default zone:

# firewall-cmd --set-default-zone=public

a. Managing ports

List the ports allowed into the dmz zone:

# firewall-cmd --zone=dmz --list-ports

Allow TCP port 8080 into the dmz zone:

# firewall-cmd --zone=dmz --add-port=8080/tcp

 

Allow a range of UDP ports in the public zone, permanently:

# firewall-cmd --zone=public --add-port=5059-5060/udp --permanent

 

b. Network interfaces

List all interfaces in the public zone:

# firewall-cmd --zone=public --list-interfaces

Add eth0 to the public zone, permanently:

# firewall-cmd --zone=public --permanent --add-interface=eth0

eth0 currently belongs to the public zone; move it to the work zone (it is removed from public at the same time):

# firewall-cmd --zone=work --permanent --change-interface=eth0

Remove eth0 from the public zone, permanently:

# firewall-cmd --zone=public --permanent --remove-interface=eth0

 

c. Managing services

Add the smtp service to the work zone:

# firewall-cmd --zone=work --add-service=smtp

Remove the smtp service from the work zone:

# firewall-cmd --zone=work --remove-service=smtp

 

d. IP masquerading in the external zone

Query:

# firewall-cmd --zone=external --query-masquerade

Enable masquerading:

# firewall-cmd --zone=external --add-masquerade

Disable masquerading:

# firewall-cmd --zone=external --remove-masquerade

 

e. Port forwarding in the public zone

Port forwarding requires masquerading to be enabled first:

# firewall-cmd --zone=public --add-masquerade

Then forward TCP port 22 to 3753:

# firewall-cmd --zone=public --add-forward-port=port=22:proto=tcp:toport=3753

Forward port 22 to the same port on another IP:

# firewall-cmd --zone=public --add-forward-port=port=22:proto=tcp:toaddr=192.168.1.100

Forward port 22 to port 2055 on another IP:

# firewall-cmd --zone=public --add-forward-port=port=22:proto=tcp:toport=2055:toaddr=192.168.1.100

 

f. ICMP in the public zone

List all supported ICMP types:

# firewall-cmd --get-icmptypes
destination-unreachable echo-reply echo-request parameter-problem redirect router-advertisement router-solicitation source-quench time-exceeded

List the current ICMP blocks:

# firewall-cmd --zone=public --list-icmp-blocks

Block echo-request:

# firewall-cmd --zone=public --add-icmp-block=echo-request [--timeout=seconds]

Unblock echo-reply:

# firewall-cmd --zone=public --remove-icmp-block=echo-reply

 

g. Banning an IP address
# firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='222.222.222.222' reject"
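Because the rule is added with --permanent, it only takes effect after a reload; the matching remove command lifts the ban again. A short sketch using the same example address as above:

# firewall-cmd --reload
# firewall-cmd --permanent --remove-rich-rule="rule family='ipv4' source address='222.222.222.222' reject"
# firewall-cmd --reload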

How to install NextCloud 16 server on CentOS 7


How to install NextCloud 16 server on CentOS 7.x


NextCloud is a Dropbox-like solution for self-hosted file sharing and syncing. Installing NextCloud 16 on CentOS is quite simple. Whether you want to backup, have file-syncing or just have a Google Calendar alternative, this guide is for you.

What is NextCloud? Is it like a “cloud”?


If you stumbled here by chance and don’t know what NextCloud is, here is an article explaining its principal features and advantages/disadvantages. In this other article you can find NextCloud 16 new features. To tell you the truth, NextCloud is a SaaS cloud, if you want to know more about cloud types you can read this article.

In this article we will cover the installation of the server (not the client).

What’s the newest version?


Step1: Install software

Important
I take absolutely NO responsibility for what you do with your machine; use this tutorial as a guide and remember you can possibly cause data loss if you touch things carelessly.

The first step in order to install NextCloud 16 is to install a web server and PHP. Since CentOS 7 ships with PHP 5.4 by default but NextCloud 16 requires at least PHP 7 we’ll also be installing PHP 7 from a third-party repository. The following procedure will install apache as webserver. Input the commands one by one to avoid errors!

CentOS 7

If you’d rather use PHP 7.3, you can follow this tutorial: how to install PHP 7.3 on CentOS 7. PHP 7.3 isn’t yet available in this repository.

Warning!
If you decided to use PHP 7.3 rather than PHP 7.2 using the past tutorial, replace each instance of php72w with php73w in all the successive commands.

Open a terminal and input the following commands:

  1. # yum install epel-release
  2. # rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
  3. # yum install httpd php72w php72w-dom php72w-mbstring php72w-gd php72w-pdo php72w-json php72w-xml php72w-zip php72w-curl php72w-pear php72w-intl setroubleshoot-server bzip2 php72w-pecl-imagick

Step 2: Database selection

Now that you got the software, you need to choose a database that will support the installation. You have three choices:

  • SQLite: is a single-file database. It is suggested only for small installations since it will slow NextCloud down sensibly.
  • MariaDB/MySQL: are popular open source databases especially amongst web developers. It is the suggested choice.
  • PostgreSQL: a popular enterprise-class database. More complicated than MySQL/MariaDB.

Now, this choice won’t really alter the functionality of NextCloud (except if you use SQLite), so pick whatever you know best. If you’re unsure pick MariaDB/MySQL.

SQLite | MySQL/MariaDB | PostgreSQL

No additional steps are required if you choose SQLite.

Install the software:

  1. # yum install mariadb-server php72w-mysql

Start (and enable at boot) the service:

  1. # systemctl start mariadb
  2. # systemctl enable mariadb

Next step is to configure the database management system. During the configuration you will be prompted to choose a root password, pick a strong one.

  1. # mysql_secure_installation

Now you need to enter the database (you will be asked the password you just set):

  1. $ mysql -u root -p

Now that you are in create a database:

  1. CREATE DATABASE nextcloud;

Now you need to create the user that will be used to connect to the database:

  1. CREATE USER 'nc_user'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';

The last step is to grant the privileges to the new user:

  1. GRANT ALL PRIVILEGES ON nextcloud.* TO 'nc_user'@'localhost';
  2. FLUSH PRIVILEGES;

When you’re done type Ctrl-D to exit.

Install the software:

  1. # yum install postgresql postgresql-server php72w-pgsql

Run the setup:

  1. # postgresql-setup initdb

Start (and enable at boot) the service:

  1. # systemctl start postgresql
  2. # systemctl enable postgresql

Now you need to enter the database:

  1. $ sudo -u postgres psql

Now that you are in create a database:

  1. CREATE DATABASE nextcloud;

Now you need to create the user that will be used to connect to the database:

  1. CREATE USER nc_user WITH PASSWORD 'YOUR_PASSWORD_HERE';

The last step is to grant the privileges to the new user:

  1. GRANT ALL PRIVILEGES ON DATABASE nextcloud to nc_user;

When you’re done type \q and press enter to exit.

Warning!
You may experience difficulties in authenticating NextCloud with PostgreSQL since the local authentication method is set to ident by default. If you want to change it keep reading.

The configuration file for PostgreSQL is a file located in /var/lib/pgsql/data/pg_hba.conf . Open it with your favourite editor and look for the marked line (line 5):

  1. # TYPE DATABASE USER ADDRESS METHOD
  2. # "local" is for Unix domain socket connections only
  3. local all all peer
  4. # IPv4 local connections:
  5. host all all 127.0.0.1/32 ident
  6. # IPv6 local connections:
  7. host all all ::1/128 ident
  8. # Allow replication connections from localhost, by a user with the
  9. # replication privilege.
  10. #local replication postgres peer
  11. #host replication postgres 127.0.0.1/32 ident
  12. #host replication postgres ::1/128 ident

Replace ident with md5 on that line and restart PostgreSQL:

  1. # systemctl restart postgresql

Step 3: Install NextCloud

This step involves getting the software and configure Apache to run it.

CentOS 7

With these step we download the software and extract it:

  1. # cd /var/www/html
  2. # curl -o nextcloud-16-latest.tar.bz2 https://download.nextcloud.com/server/releases/latest-16.tar.bz2
  3. # tar -xvjf nextcloud-16-latest.tar.bz2
  4. # mkdir nextcloud/data
  5. # chown -R apache:apache nextcloud
  6. # rm nextcloud-16-latest.tar.bz2

Now we need to create a new file in /etc/httpd/conf.d/nextcloud.conf . Feel free to use whatever editor you feel comfortable with and add the following lines:

  1. Alias /nextcloud "/var/www/html/nextcloud/"

  2. <Directory /var/www/html/nextcloud/>
  3. Options +FollowSymlinks
  4. AllowOverride All

  5. <IfModule mod_dav.c>
  6. Dav off
  7. </IfModule>

  8. SetEnv HOME /var/www/html/nextcloud
  9. SetEnv HTTP_HOME /var/www/html/nextcloud

  10. </Directory>

Step 4: Setting Apache and SELinux

In this step we’ll start (and enable) the webserver and we’ll set SELinux up. Now, many tutorials will tell you to disable SELinux (because it is a difficult component to manage). Instead, I suggest you to keep it on and add the rules for NextCloud:

CentOS 7
  1. # semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
  2. # semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
  3. # semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
  4. # semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
  5. # semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
  6. # restorecon -Rv '/var/www/html/nextcloud/'

If you decided to use a Mariadb/MySQL/PostgreSQL, you also need to allow apache to access it:

  1. # setsebool -P httpd_can_network_connect_db 1

In case you chose PostgreSQL you also need to enable httpd_execmem (I’m still investigating why this is needed):

  1. # setsebool -P httpd_execmem 1

Another important thing to do is to raise PHP’s memory limit:

  1. # sed -i '/^memory_limit =/s/=.*/= 512M/' /etc/php.ini

Now that you’ve configured SELinux let’s start and enable Apache:

  1. # systemctl start httpd
  2. # systemctl enable httpd

Step 5: Configuring firewall

This step is essential when your firewall is enabled. If your firewall is enabled you won't be able to access your NextCloud 16 instance; on the other hand, if it isn't enabled you shouldn't have any problems and you can simply skip this step.

Tip!
Keep in mind having a firewall enabled is a good security practice and you should already have one enabled.

In order for the firewall to work, it must be enabled. This guide will not include this part. When you enable a firewall many things can go wrong, e.g. you’re using SSH, you enable the firewall and your connection is cut and can’t connect otherwise, hence you should carefully review the documentation from your distribution.

To open the ports needed by NextCloud 16 follow these steps:

FirewallD | IPtables

FirewallD is a newer firewall used to simplify firewall management. If you’re using it you can simply do:

  1. # firewall-cmd --add-service http --permanent
  2. # firewall-cmd --add-service https --permanent
  3. # firewall-cmd --reload

IPtables is an older firewall (still widely used), if you have disabled firewallD you can use IPtables directly.

  1. # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
  2. # iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

Step 6: Install

Once you’re done, it’s time to install everything. Head to http://YOUR_IP_ADDRESS/nextcloud/ and you will be facing the following screen:

Nextcloud 16 Installation
Nextcloud 16 Installation

Select an administrator username and password. Then click on “Storage & Database“, here you can select the data folder, but if you don’t know what you’re doing it’s best if you leave it with the default value. Then select the database you chose during step 2. Fill everything and if you’ve followed all the steps correctly you should be seeing the following screen:

NextCloud 16 Files app as viewed when installing for the first time
NextCloud 16 Files app

Step 7: Enable Caching (suggested)

NextCloud is good but it can be very slow if you don’t configure a caching solution. There are two caching solutions covered in this guide:

  • PHP OPcache: a PHP inbuilt cache solution that speeds up scripts execution.
  • Redis server: a fast in-memory key-value store that speeds up everything in NextCloud.

Enabling OPcache

CentOS

Open a terminal and input the following commands:

  1. # yum install php-opcache

Now you need to edit a file located at /etc/php.d/10-opcache.ini . With your favorite editor, edit the file and make it look like this:

  1. ; Enable Zend OPcache extension module
  2. zend_extension=opcache.so
  3. opcache.enable=1
  4. opcache.enable_cli=1
  5. opcache.interned_strings_buffer=8
  6. opcache.max_accelerated_files=10000
  7. opcache.memory_consumption=128
  8. opcache.save_comments=1
  9. opcache.revalidate_freq=1

These values are suggested by NextCloud, but you’re free to tweak them to suit your needs. Once you’re done you can restart apache:

  1. # systemctl restart httpd

Installing and configuring Redis

CentOS

Open a terminal and input the following commands:

  1. # yum install redis php72w-pecl-redis

Now you must configure NextCloud to use Redis. To do so you need to edit the NextCloud configuration file located at /var/www/html/nextcloud/config/config.php . The file will look like this, add the highlighted lines:

  1. <?php
  2. $CONFIG = array (
  3. 'instanceid' => '',
  4. 'passwordsalt' => '',
  5. 'secret' => '',
  6. 'trusted_domains' =>
  7. array (
  8. 0 => 'YOUR_IP',
  9. ),
  10. 'datadirectory' => '/var/www/html/nextcloud/data',
  11. 'dbtype' => 'mysql',
  12. 'version' => '15.0.0.10',
  13. 'overwrite.cli.url' => 'http://YOUR_IP/nextcloud',
  14. 'dbname' => 'nextcloud',
  15. 'dbhost' => 'localhost',
  16. 'dbport' => '',
  17. 'dbtableprefix' => 'oc_',
  18. 'dbuser' => 'nc_user',
  19. 'dbpassword' => 'YOUR_PASSWORD_HERE',
  20. 'installed' => true,
  21. 'memcache.locking' => '\OC\Memcache\Redis',
  22. 'memcache.distributed' => '\OC\Memcache\Redis',
  23. 'memcache.local' => '\OC\Memcache\Redis',
  24. 'redis' => [
  25. 'host' => 'localhost',
  26. 'port' => 6379,
  27. 'timeout' => 3,
  28. ],
  29. );

These settings will enable NextCloud to use Redis for caching and file locks. Of course these settings are just an example, you can tweak them to suit your needs.

Now you need to modify (for some reason) the Redis port SELinux label in order to enable Apache to access Redis:

  1. # semanage port -m -t http_port_t -p tcp 6379

Lastly, enable and start Redis and restart the webserver:

  1. # systemctl restart redis
  2. # systemctl enable redis
  3. # systemctl restart httpd

Step 8: Expose NextCloud to Internet (optional)

Important
Hosting applications available to the Internet is potentially dangerous. In order to keep your applications safe you need to be proficient in system security and to follow security best practices.

Most people will want to access their files from whatever location they are. To do so, your newly created NextCloud instance needs to be connected to the Internet.

Given that you need to take care of port-forwarding (if you’re a home user) and domain configuration (which varies according to your provider), here you can find the instructions to create a virtual host with Apache.

CentOS

Using your favorite text editor, edit the file we created previously at /etc/httpd/conf.d/nextcloud.conf . And make it look like this:

  1. <VirtualHost *:80>
  2. ServerName YOURDOMAIN.TLD
  3. ServerAdmin YOUR@EMAIL.TLD
  4. DocumentRoot /var/www/html/nextcloud

  5. <directory /var/www/html/nextcloud>
  6. Require all granted
  7. AllowOverride All
  8. Options FollowSymLinks MultiViews
  9. SetEnv HOME /var/www/html/nextcloud
  10. SetEnv HTTP_HOME /var/www/html/nextcloud
  11. </directory>
  12. </VirtualHost>

It is important to set ServerName according to a domain you own and have configured correctly. Now you need to add YOURDOMAIN.TLD to the trusted domains in the NextCloud config file. You can do so with the following command:

  1. $ sudo -u apache php /var/www/html/nextcloud/occ config:system:set trusted_domains 2 --value=YOURDOMAIN.TLD

Once you complete this step you won’t be able to access NextCloud through http://YOUR_IP_ADDRESS/nextcloud anymore. Instead you will be able to access it through http://YOURDOMAIN.TLD (notice /nextcloud is gone).

Lastly, restart the webserver:

  1. # systemctl restart httpd

Step 9: Get a free SSL certificate with Let’s Encrypt! (SUGGESTED!)

Now that you have your NextCloud instance up and running you’re good to go, but beware: you’re not safe. Internet is a dangerous place for your data and you will most likely need an SSL certificate to ensure your communications are encrypted. Provided you own a domain name you can get one for free using Let’s Encrypt! No catches, free forever.

Warning!
Let’s Encrypt has rate limits in place to prevent inappropriate usage of the CA. There’s a limit on the number of attempts you can make before getting a temporary ban. During this setup, if things go wrong, I suggest you use the --staging option to avoid the temporary ban. The --staging option will use a testing server and will not issue valid certificates. When you have completed the procedure against the test server successfully, you can remove the --staging option to obtain the real certificate.
CentOS

Open a terminal and input the following commands:

  1. # yum install certbot certbot-apache

Now you will run the command to install a certificate, follow the procedure and you will get everything configured out of the box:

  1. $ sudo certbot --apache
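If you prefer to test against the staging server first, as suggested in the warning above, you can run something like the following and repeat it without --staging once everything works (the -d flag is optional if you prefer the interactive prompt):

  1. $ sudo certbot --apache --staging -d YOURDOMAIN.TLD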

Lastly, restart the webserver:

  1. # systemctl restart httpd
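Let’s Encrypt certificates are valid for 90 days, so you will want to renew them periodically. A dry-run is a safe way to test renewal, and a daily cron entry such as 0 3 * * * /usr/bin/certbot renew --quiet (adjust the path to your system) can automate it:

  1. # certbot renew --dry-run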

If you need further help you can follow my other tutorial on Let’s Encrypt on CentOS (the apache part).


CentOS7/RHEL 7安装KVM虚拟化并桥接网卡



CentOS7安装KVM虚拟化

一、KVM介绍

KVM,基于内核的虚拟机(英语:Kernel-based Virtual Machine,缩写为 KVM),

是一种用于Linux内核中的虚拟化基础设施,可以将Linux内核转化为一个hypervisor。


二、KVM部署及使用

1.系统环境查询

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core) 
[root@localhost ~]# uname -r
3.10.0-862.el7.x86_64
[root@localhost ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31


验证CPU是否支持虚拟化:
[root@localhost ~]# cat /proc/cpuinfo | egrep 'vmx|svm'
输出中包含vmx或svm即表示CPU支持硬件虚拟化,也就可以使用KVM

查看是否加载KVM

 [root@promote images]# lsmod | grep kvm

 kvm_intel 174841 0
 kvm 578518 1 kvm_intel
 irqbypass 13503 1 kvm


 已经加载,如果没有加载,则执行以下命令,加载KVM

 [root@localhost ~]#modprobe kvm

配置网卡桥接,原来的主机网卡配置文件内容修改如下: 
BOOTPROTO=none
DEVICE=enp2s0f0
#NM_CONTROLLED=no
ONBOOT=yes
#TYPE=Ethernet
#USERCTL=no
#IPADDR=45.141.44.2
#NETMASK=255.255.255.0
#GATEWAY=45.141.44.1
#DNS1=8.8.8.8
BRIDGE=br0

只需要以上内容即可,不要再添加其它内容,否则可能会启动不了网卡,导致主机无法远程。
下面是br0桥接网卡内容。

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=45.141.44.2
NETMASK=255.255.255.0
GATEWAY=45.141.44.1
DNS1=8.8.8.8


配置好网卡配置文件后,可以停用防火墙与selinux。
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl stop firewalld.service
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf

重启网卡只能使用这个命令,不能使用network服务,不然会有问题,两者有冲突。
systemctl restart NetworkManager.service
或是直接重启系统。

2.安装KVM虚拟化软件

安装依赖包(使用本地yum源)

# yum install libvirt* virt-* qemu-kvm* -y
# yum -y install qemu-kvm qemu-kvm-tools qemu-img virt-manager libvirt libvirt-python libvirt-client bridge-utils virt-viewer virt-install 

说明:

libvirt    # 虚拟机管理
virt       # 虚拟机安装克隆
qemu-kvm   # 管理虚拟机磁盘

启动KVM

# systemctl start libvirtd.service
# systemctl status libvirtd.service


3.安装第一台KVM虚机

[root@localhost ~]# virt-install --name centos6.5_1 --ram 800 --vcpus 1 \
--disk path=/home/vmdisk/centos6.5_1.qcow2,format=qcow2,size=10 \
--network=bridge=br0 --cdrom=/opt/CentOS-6.5-x86_64-minimal.iso \
--os-type=linux --autostart --vnclisten=0.0.0.0 --vncport=6900 --vnc
新建一个centos6.5的虚拟机,开通VNC远程端口6900,内存 800M,硬盘为10GB,
网卡采用桥接br0形式的虚拟机。
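虚拟机创建后,可以用 virsh 确认其运行状态和 VNC 端口(示例,假设虚拟机名为上面创建的 centos6.5_1):

[root@localhost ~]# virsh list --all
[root@localhost ~]# virsh vncdisplay centos6.5_1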
[root@localhost ~]# virt-install \
--virt-type kvm \
--os-type=linux \
--os-variant rhel7 \
--name centos7 \
--memory 1024 \
--vcpus 1 \
--disk (虚拟硬盘绝对路径),format=raw,size=10 \
--cdrom (iso镜像文件绝对路径) \
--network network=default \
--noautoconsole

注意:需要先将镜像文件拷贝到 设置的路径下

参数说明

参数及说明:
--virt-type HV_TYPE:要使用的管理程序名称 (kvm, qemu, xen, ...)
--os-type:系统类型
--os-variant DISTRO_VARIANT:在客户机上安装的操作系统,例如:'fedora18'、'rhel6'、'winxp' 等。
-n NAME, --name NAME:客户机实例名称
--memory MEMORY:配置客户机虚拟内存大小
--vcpus VCPUS:配置客户机虚拟 CPU(vcpu) 数量。
--disk DISK:指定存储的各种选项。
--cdrom CDROM:光驱安装介质
-w NETWORK, --network NETWORK:配置客户机网络接口。
--graphics GRAPHICS:配置客户机显示设置。

虚拟化平台选项:
-v, --hvm:这个客户机应该是一个全虚拟化客户机
-p, --paravirt:这个客户机应该是一个半虚拟化客户机
--container:这个客户机应该是一个容器客户机
--virt-type HV_TYPE:要使用的管理程序名称 (kvm, qemu, xen, ...)
--arch ARCH:模拟 CPU 架构
--machine MACHINE:机器类型为仿真类型

其它选项:
--noautoconsole:不要自动尝试连接到客户端控制台
--autostart:主机启动时自动启动域。
--noreboot:安装完成后不启动客户机。

4.KVM虚机管理

virsh命令常用参数总结

参数及说明:

基础操作
list:查看虚拟机列表,列出域
start:启动虚拟机,开始一个(以前定义的)非活跃的域
shutdown:关闭虚拟机,关闭一个域
destroy(危险):强制关闭虚拟机,销毁(停止)域
vncdisplay:查询虚拟机vnc端口号

配置管理操作
dumpxml:导出主机配置信息
undefine:删除主机
define:导入主机配置
domrename:对虚拟机进行重命名

挂起与恢复
suspend:挂起虚拟机
resume:恢复虚拟机

自启动管理
autostart:虚拟机开机启动
autostart --disable:取消虚拟机开机启动

以上参数通过  “virsh  --help” 获得。

查看虚拟机配置文件

[root@localhost ~]# cat /etc/libvirt/qemu/test01.xml

修改KVM虚拟机配置的方法

[root@localhost ~]# virsh edit test01    (使用该命令修改可以对文件进行语法校验)

Running virt-install to Build the KVM Guest System

The virt-install utility must be run as root and accepts a wide range of command-line arguments that are used to provide configuration information related to the virtual machine being created. Some of these command-line options are mandatory (specifically name, ram and disk storage must be provided) while others are optional. A summary of these arguments is outlined in the following table:

Argument / Description

-h, --help Show the help message and exit
--connect=CONNECT Connect to a non-default hypervisor.
-n NAME, --name=NAME Name of the new guest virtual machine instance. This must be unique amongst all guests known to the hypervisor on the connection, including those not currently active. To re-define an existing guest, use the virsh(1) tool to shut it down (’virsh shutdown’) & delete (’virsh undefine’) it prior to running "virt-install".
-r MEMORY, --ram=MEMORY Memory to allocate for guest instance in megabytes. If the hypervisor does not have enough free memory, it is usual for it to automatically take memory away from the host operating system to satisfy this allocation.
--arch=ARCH Request a non-native CPU architecture for the guest virtual machine. The option is only currently available with QEMU guests, and will not enable use of acceleration. If omitted, the host CPU architecture will be used in the guest.
-u UUID, --uuid=UUID UUID for the guest; if none is given a random UUID will be generated. If you specify UUID, you should use a 32-digit hexadecimal number. UUID are intended to be unique across the entire data center, and indeed world. Bear this in mind if manually specifying a UUID
--vcpus=VCPUS Number of virtual cpus to configure for the guest. Not all hypervisors support SMP guests, in which case this argument will be silently ignored
--check-cpu Check that the number of virtual cpus requested does not exceed the number of physical CPUs and warn if it does.
--cpuset=CPUSET Set which physical cpus the guest can use. "CPUSET" is a comma separated list of numbers, which can also be specified in ranges. If the value ’auto’ is passed, virt-install attempts to automatically determine an optimal cpu pinning using NUMA data, if available.
--os-type=OS_TYPE Optimize the guest configuration for a type of operating system (ex. ’linux’, ’windows’). This will attempt to pick the most suitable ACPI & APIC settings, optimally supported mouse drivers, virtio, and generally accommodate other operating system quirks. See "--os-variant" for valid options. For a full list of valid options refer to the man page (man virt-install).
--os-variant=OS_VARIANT Further optimize the guest configuration for a specific operating system variant (ex. ’fedora8’, ’winxp’). This parameter is optional, and does not require an "--os-type" to be specified. For a full list of valid options refer to the man page (man virt-install).
--host-device=HOSTDEV Attach a physical host device to the guest. HOSTDEV is a node device name as used by libvirt (as shown by ’virsh nodedev-list’).
--sound Attach a virtual audio device to the guest. (Full virtualization only).
--noacpi Override the OS type / variant to disable the ACPI setting for a fully virtualized guest. (Full virtualization only).
-v, --hvm Request the use of full virtualization, if both para & full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU based hypervisor.
-p, --paravirt This guest should be a paravirtualized guest. If the host supports both para & full virtualization, and neither this parameter nor the "--hvm" are specified, this will be assumed.
--accelerate When installing a QEMU guest, make use of the KVM or KQEMU kernel acceleration capabilities if available. Use of this option is recommended unless a guest OS is known to be incompatible with the accelerators. The KVM accelerator is preferred over KQEMU if both are available.
-c CDROM, --cdrom=CDROM File or device use as a virtual CD-ROM device for fully virtualized guests. It can be path to an ISO image, or to a CDROM device. It can also be a URL from which to fetch/access a minimal boot ISO image. The URLs take the same format as described for the "--location" argument. If a cdrom has been specified via the "--disk" option, and neither "--cdrom" nor any other install option is specified, the "--disk" cdrom is used as the install media.
-l LOCATION, --location=LOCATION Installation source for guest virtual machine kernel+initrd pair. The "LOCATION" can take one of the following forms:
  • DIRECTORY - Path to a local directory containing an installable distribution image
  • nfs:host:/path or nfs://host/path - An NFS server location containing an installable distributionimage
  • http://host/path - An HTTP server location containing an installable distribution image
  • ftp://host/path - An FTP server location containing an installable distribution image
--pxe Use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process.
--import Skip the OS installation process, and build a guest around an existing disk image. The device used for booting is the first device specified via "--disk" or "--file".
--livecd Specify that the installation media is a live CD and thus the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the "--nodisks" flag in combination.
-x EXTRA, --extra-args=EXTRA Additional kernel command line arguments to pass to the installer when performing a guest install from "--location".
--disk=DISKOPTS Specifies media to use as storage for the guest, with various options.
--disk opt1=val1,opt2=val2,... To specify media, one of the following options is required:
  • path - A path to some storage media to use, existing or not. Existing media can be a file or block device. If installing on a remote host, the existing media must be shared as a libvirt storage volume. Specifying a non-existent path implies attempting to create the new storage, and will require specifyng a ’size’ value. If the base directory of the path is a libvirt storage pool on the host, the new storage will be created as a libvirt storage volume. For remote hosts, the base directory is required to be a storage pool if using this method.
  • pool - An existing libvirt storage pool name to create new storage on. Requires specifying a ’size’ value.
  • vol - An existing libvirt storage volume to use. This is specified as ’poolname/volname’.
  • device - Disk device type. Value can be ’cdrom’, ’disk’, or ’floppy’. Default is ’disk’. If a ’cdrom’ is specified, and no install method is chosen, the cdrom is used as the install media.
  • bus - Disk bus type. Value can be ’ide’, ’scsi’, ’usb’, ’virtio’ or ’xen’. The default is hypervisor dependent since not all hypervisors support all bus types.
  • perms - Disk permissions. Value can be ’rw’ (Read/Write), ’ro’ (Readonly), or ’sh’ (Shared Read/Write). Default is ’rw’
  • size - size (in GB) to use if creating new storage
  • sparse - whether to skip fully allocating newly created storage. Value is ’true’ or ’false’. Default is ’true’ (do not fully allocate). The initial time taken to fully-allocate the guest virtual disk (sparse=false) will usually be balanced by faster install times inside the guest. Thus use of this option is recommended to ensure consistently high performance and to avoid I/O errors in the guest should the host filesystem fill up.
  • cache - The cache mode to be used. The host pagecache provides cache memory. The cache value can be ’none’, ’writethrough’, or ’writeback’. ’writethrough’ provides read caching. ’writeback’ provides read and write caching. See the examples section for some uses. This option deprecates "--file", "--file-size", and "--nonsparse".
-f DISKFILE, --file=DISKFILE Path to the file, disk partition, or logical volume to use as the backing store for the guest’s virtual disk. This option is deprecated in favor of "--disk".
-s DISKSIZE, --file-size=DISKSIZE Size of the file to create for the guest virtual disk. This is deprecated in favor of "--disk".
--nonsparse Fully allocate the storage when creating. This is deprecated in favor of "--disk"
--nodisks Request a virtual machine without any local disk storage, typically used for running ’Live CD’ images or installing to network storage (iSCSI or NFS root).
-w NETWORK, --network=NETWORK Connect the guest to the host network. The value for "NETWORK" can take one of 3 formats:
  • bridge:BRIDGE - Connect to a bridge device in the host called "BRIDGE". Use this option if the host has static networking config & the guest requires full outbound and inbound connectivity to/from the LAN. Also use this if live migration will be used with this guest.
  • network:NAME - Connect to a virtual network in the host called "NAME". Virtual networks can be listed, created, deleted using the "virsh" command line tool. In an unmodified install of "libvirt" there is usually a virtual network with a name of "default". Use a virtual network if the host has dynamic networking (eg NetworkManager), or using wireless. The guest will be NATed to the LAN by whichever connection is active.
  • user - Connect to the LAN using SLIRP. Only use this if running a QEMU guest as an unprivileged user. This provides a very limited form of NAT.
  • If this option is omitted a single NIC will be created in the guest. If there is a bridge device in the host with a physical interface enslaved, that will be used for connectivity. Failing that, the virtual network called "default" will be used. This option can be specified multiple times to setup more than one NIC.
-b BRIDGE, --bridge=BRIDGE Bridge device to connect the guest NIC to. This parameter is deprecated in favour of the "--network" parameter.
-m MAC, --mac=MAC Fixed MAC address for the guest; If this parameter is omitted, or the value "RANDOM" is specified a suitable address will be randomly generated. For Xen virtual machines it is required that the first 3 pairs in the MAC address be the sequence ’00:16:3e’, while for QEMU or KVM virtual machines it must be ’54:52:00’.
--nonetworks Request a virtual machine without any network interfaces.
--vnc Setup a virtual console in the guest and export it as a VNC server in the host. Unless the "--vncport" parameter is also provided, the VNC server will run on the first free port number at 5900 or above. The actual VNC display allocated can be obtained using the "vncdisplay" command to "virsh" (or virt-viewer(1) can be used which handles this detail for the use).
--vncport=VNCPORT Request a permanent, statically assigned port number for the guest VNC console. Use of this option is discouraged as other guests may automatically choose to run on this port causing a clash.
--sdl Setup a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed the guest may be unconditionally terminated.
--nographics No graphical console will be allocated for the guest. Fully virtualized guests (Xen FV or QEmu/KVM) will need to have a text console configured on the first serial port in the guest (this can be done via the --extra-args option). Xen PV will set this up automatically. The command ’virsh console NAME’ can be used to connect to the serial device.
--noautoconsole Don’t automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run the "virsh" "console" command to display the text console. Use of this parameter will disable this behaviour.
-k KEYMAP, --keymap=KEYMAP Request that the virtual VNC console be configured to run with a non- English keyboard layout.
-d, --debug Print debugging information to the terminal when running the install process. The debugging information is also stored in "$HOME/.virtinst/virt-install.log" even if this parameter is omitted.
--noreboot Prevent the domain from automatically rebooting after the install has completed.
--wait=WAIT Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shutdown), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely, a value of 0 triggers the same results as noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
--force Prevent interactive prompts. If the intended prompt was a yes/no prompt, always say yes. For any other prompts, the application will exit.
--prompt Specifically enable prompting. Default prompting is off (as of virtinst 0.400.0)

An Example CentOS virt-install Command

With reference to the above command-line argument list, we can now look at an example command-line construct using the virt-install tool.

The following command creates a new KVM virtual machine configured to run Windows 7 using full virtualization. It creates a new, 10GB disk image, assigns 512MB of RAM to the virtual machine, configures a CD device for the installation media and uses VNC to display the console:

virt-install --name myWin7 --hvm --ram 512 --disk path=/tmp/win7.img,size=10 \
--network network:default --vnc --os-variant vista --cdrom /dev/hda

Note that the above command line assumes the installation media is in a drive corresponding to device file /dev/hda. This may differ on your system, or may be replaced by a path to an ISO image file residing on a file system.

As the creation process runs, the virt-install command will display status updates of the creation progress:

Starting install...
Creating storage file...                                 | 6.0 GB     00:00
Creating domain...                                       |    0 B     00:00
Domain installation still in progress. Waiting for installation to complete.


Install KVM Hypervisor on CentOS 7.x and RHEL 7.x

KVM is an open source hardware virtualization software through which we can create and run multiple Linux based and windows based virtual machines simultaneously. KVM is known as Kernel based Virtual Machine because when we install KVM package then KVM module is loaded into the current kernel and turns our Linux machine into a hypervisor.

In this post first we will demonstrate how we can install KVM hypervisor on CentOS 7.x and RHEL 7.x and then we will try to install virtual machines.

Before proceeding KVM installation, let’s check whether your system’s CPU supports Hardware Virtualization.

Run the beneath command from the console.

[root@linuxtechi ~]# grep -E '(vmx|svm)' /proc/cpuinfo

We should get the word either vmx or svm in the output, otherwise CPU doesn’t support virtualization.

Step:1 Install KVM and its associate packages

Run the following yum command to install KVM and its associated packages.

[root@linuxtechi ~]# yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils

Start and enable the libvirtd service

[root@linuxtechi ~]# systemctl start libvirtd
[root@linuxtechi ~]# systemctl enable libvirtd

Run the beneath command to check whether KVM module is loaded or not

[root@linuxtechi ~]# lsmod | grep kvm
kvm_intel             162153  0
kvm                   525409  1 kvm_intel
[root@linuxtechi ~]#

In case you have a minimal CentOS 7 or RHEL 7 installation, virt-manager will not start; for that you need to install the X Window packages.

[root@linuxtechi ~]# yum install "@X Window System" xorg-x11-xauth xorg-x11-fonts-* xorg-x11-utils -y

Reboot the Server and then try to start virt manager.

Step:2 Start the Virt Manager

Virt Manager is a graphical tool through which we can install and manage virtual machines. To start the virt manager type the ‘virt-manager‘ command from the terminal.

[root@linuxtechi ~]# virt-manager


Step:3 Configure Bridge Interface

Before Start creating VMs , let’s first create the bridge interface. Bridge interface is required if you want to access virtual machines from outside of your hypervisor network.

[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
[root@linuxtechi network-scripts]# cp ifcfg-eno49 ifcfg-br0
[root@linuxtechi network-scripts]#

Edit the Interface file and set followings:

[root@linuxtechi network-scripts]# vi ifcfg-eno49
TYPE=Ethernet
BOOTPROTO=static
DEVICE=eno49
ONBOOT=yes
BRIDGE=br0

Edit the Bridge file (ifcfg-br0) and set the followings:

[root@linuxtechi network-scripts]# vi ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.10.21
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.11

Replace the IP address and DNS server details as per your setup.

Restart the network Service to enable the bridge interface.

[root@linuxtechi ~]#  systemctl restart NetworkManager.service
[root@linuxtechi ~]#

Check the Bridge interface using below command :

[root@linuxtechi ~]# ip addr show br0

Step:4 Start Creating Virtual Machines.

Now Create Virtual Machine either from the command line using ‘virt-install‘ command or from GUI (virt-manager )

Let’s Create a virtual machine of “Windows Server 2012 R2” using virt-manager.

Start the “virt-manager”

Go to the File Option, click on “New Virtual Machine”


We will be using ISO file as installation media. In the next step Specify the path of ISO file.


Click on Forward.

Specify the Compute Resources : RAM and CPU as per your setup.


Click on Forward to proceed further.

Specify the storage Size of Virtual Machine, In my case I am using 25G.


In the next step specify the name of the virtual machine and select the network as ‘Bridge br0’


Click on Finish to start the installation.


Follow the screen instructions and complete the installation.

Creating a virtual Machine from Command Line:

Virtual machines can be created from the console as well using the ‘virt-install’ command. In the following example I am going to create a virtual machine of Ubuntu 16.04 LTS.

[root@linuxtechi ~]# virt-install --name=Ubuntu-16-04 --file=/var/lib/libvirt/images/ubuntu16-04.dsk --file-size=20 --nonsparse --graphics spice --vcpus=2 --ram=2048 --cdrom=ubuntu-16.04-server-amd64.iso --network bridge=br0 --os-type=linux --os-variant=generic
Starting install...
Allocating 'ubuntu16-04.dsk'               | 20 GB 00:00:00
Creating domain...


Follow the instruction now and complete the installation.

In the above ‘virt-install’ command we have used following options :

  • --name = <Name of the Virtual Machine>
  • --file = <Location where our virtual machine disk file will be stored>
  • --file-size = <Size of the Virtual Machine disk, in my case it is 20GB>
  • --nonsparse = <Allocate the whole storage while creating>
  • --graphics = <Specify the graphical tool for interactive installation, in the above example I am using spice>
  • --vcpus = <Number of virtual CPUs for the Machine>
  • --ram = <RAM size for the virtual Machine>
  • --cdrom = <Virtual CD ROM which specifies the installation media like an ISO file>
  • --network = <it is used to specify which network we will use for the virtual machine, in this example I am using a bridge interface>
  • --os-type = <Operating system type like linux and windows>
  • --os-variant = <KVM maintains the OS variants like ‘fedora18’, ‘rhel6’ and ‘winxp’; this option is optional and if you are not sure about the OS variant you can mention it as generic>

Once the Installation is completed we can access the Virtual Machine console from ‘virt-manager‘ as shown below.


That’s it, basic installation and configuration of KVM hypervisor is completed.



Virtualization in Linux: Installing KVM on CentOS & RHEL

In this tutorial, we will be installing KVM on CentOS or RHEL machines. KVM (also called QEMU) or Kernel Based Virtualization Machine is a Hardware based virtualization software that provide a Linux system capability to run multiple operating systems in Linux environment. It can run Linux as well as Windows family OS.

By hardware based virtualization, it means that your processor must support hardware virtualization to run KVM on your system. So if your processor is Intel based, it must support Intel VT or if you are using AMD based processor, it must support AMD-V. So before we proceed further with this tutorial we must check if your processor supports hardware virtualization or not. Most of the modern processors do support hardware virtualization but to be sure, please run the following command,

$ egrep '(vmx|svm)' /proc/cpuinfo

If you receive ‘vmx’ or ‘svm’ in the output then the processor supports hardware virtualization; otherwise it doesn’t, and you can’t install KVM/QEMU on your machine.

KVM/QEMU can be managed either graphically or through CLI. We use virt-manager for managing virtual machines, it can create, delete, edit & can also cold/live migrate guest machines between hosts.


 

Installing KVM on CentOS or RHEL

For installing KVM, run the following command,

$ yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer

Now, let’s have a brief look at what these packages actually are,

  • qemu-kvm is QEMU emulator, it’s the main package for KVM,
  • qemu-img is QEMU disk image manager,
  • virt-install is a command line tool to create virtual machines.
  • libvirt , it provides daemon to manage virtual machines and controls hypervisor.
  • libvirt-client , it provides client side API’s for accessing servers and virsh utility which provides command line tool to manage virtual machines.
  • virt-viewer is the graphical console.

QEMU is now ready, we will now restart our virtualization daemon called ‘libvirtd’,

$ systemctl restart libvirtd

We will now create virtual machine with the help of virt-manager. But before we start with creating a virtual machine, we will have to configure a bridge adapter, which is required if we need to access outside network from our VM.

 

Creating a Bridge adapter

Copy file for your current network interface ‘ifcfg-en0s1’ to another file for bridge interface named ‘ifcfg-br0’

$ cd /etc/sysconfig/network-scripts/
$ cp ifcfg-en0s1 ifcfg-br0

Now we will edit the file ‘ifcfg-br0’,

$ vi ifcfg-br0

TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.1.110
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8

Change network settings as per your own network requirements. Save the file & restart network services.

$ systemctl restart network

Now let’s create our first virtual machine.

 

Creating a Virtual Machine

We will launch ‘virt-manager’ to create our first virtual machine. You can launch virt-manager either from the CLI or graphically,

For CLI, launch your terminal & type

$ virt-manager

Or open Virtual Machine Manager in your Applications under System Tools. Once it has been launched, go to ‘File’ & click on ‘New Virtual Machine’


We will be using an ISO image for our installation, so select ‘Local Install Media’ for installing OS,


next , select the location for your ISO image & click Forward,


on the next page, select ‘Memory’ & number of ‘CPUs’ & click Forward,


specify the storage size for your VM & click Forward,


On the next page will be the summary for our VM, review all the configurations & in Network selection , select bridged adapter ‘br0’ & hit finish. Now install the OS as you normally do & boot into VM once the installation has been completed. Similarly create as many VMs as you need & as your resources permit.

This concludes our tutorial for installing KVM on CentOS. if you are having any issues or have any suggestions, please feel free to submit them through comment box down below.

cAdvisor+InfluxDB+Grafana 监控Docker


cAdvisor+InfluxDB+Grafana 监控Docker

目录

  • 一、概念

  • 二、单节点部署

  • 三、Swarm多节点部署

 


容器的监控方案其实有很多,有docker自身的docker stats命令、有Scout、有Data Dog等等,本文主要和大家分享一下比较经典的容器开源监控方案组合:cAdvisor+InfluxDB+Grafana


一、概念

1). InfluxDB是什么
        InfluxDB是用Go语言编写的一个开源分布式时序、事件和指标数据库,无需外部的依赖,类似的数据库有Elasticsearch、Graphite等等
 
        InfluxDB主要的功能:
            基于时间序列:支持与时间有关的相关函数(如最大、最小、求和等)
            可度量性:可以实时对大量数据进行计算
            基于事件:它支持任意的事件数据
 
        InfluxDB的主要特点:
            无结构(无模式):可以是任意数量的列
            可拓展的
            支持min, max, sum, count, mean, median 等一系列函数,方便统计
            原生的HTTP支持,内置HTTP API
            强大的类SQL语法
            自带管理界面,方便使用
 
2). cAdvisor是什么
        它是Google用来监测单节点的资源信息的监控工具。Cadvisor提供了一目了然的单节点多容器的资源监控功能。Google的Kubernetes中也缺省地将其作为单节点的资源监控工具,各个节点缺省会被安装上Cadvisor
        cAdvisor是利用docker stats的数据信息,了解运行时容器资源使用和性能特征的一种工具
        cAdvisor的容器抽象基于Google的lmctfy容器栈,因此原生支持Docker容器并能够“开箱即用”地支持其他的容器类型。
        cAdvisor部署为一个运行中的daemon,它会收集、聚集、处理并导出运行中容器的信息。
        这些信息能够包含容器级别的资源隔离参数、资源的历史使用状况、反映资源使用和网络统计数据完整历史状况的柱状图。
 
        cAdvisor功能:
            展示Host和容器两个层次的监控数据
            展示历史变化数据
 
        温馨提示:
            由于 cAdvisor 提供的操作界面略显简陋,而且需要在不同页面之间跳转,并且只能监控一个 host,这不免会让人质疑它的实用性。
            但 cAdvisor 的一个亮点是它可以将监控到的数据导出给第三方工具,由这些工具进一步加工处理。
            我们可以把 cAdvisor 定位为一个监控数据收集器,收集和导出数据是它的强项,而非展示数据
 
3). Grafana是什么
        Grafana是一个可视化面板(Dashboard),有着非常漂亮的图表和布局展示,功能齐全的度量仪表盘和图形编辑器,支持Graphite、zabbix、InfluxDB、Prometheus和OpenTSDB作为数据源
 
        Grafana主要特性:
            灵活丰富的图形化选项;
            可以混合多种风格;
            支持白天和夜间模式;
            支持多个数据源;
 
温馨提示:
    在这套监控方案中:InfluxDB用于数据存储,cAdvisor用于数据采集,Grafana用于数据展示


二、单节点部署

温馨提示:
   服务器信息:
   主机IP:192.168.15.129
主机名:master1
docker版本:18.06.1-ce

1. 下载镜像(可做可不做,在创建容器的时候会如果本地没有会自动下载)

# 下载镜像
[root@master1 ~]# docker pull tutum/influxdb
[root@master1 ~]# docker pull google/cadvisor
[root@master1 ~]# docker pull grafana/grafana
 
# 查看镜像
[root@master1 ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
grafana/grafana     latest              7038dbc9a50c        7 days ago          223MB
google/cadvisor     latest              75f88e3ec333        10 months ago       62.2MB
tutum/influxdb      latest              c061e5808198        2 years ago         290MB

2. 创建InfluxDB容器

# 创建InfluxDB容器
[root@master1 ~]# docker run -itd -p 8083:8083 -p 8086:8086 --name influxdb tutum/influxdb
 
参数详解:
-itd:以交互模式运行容器,并分配伪终端,同时在后台启动容器
-p:端口映射 8083端口为influxdb后台控制端口,8086端口是influxdb的数据端口
--name:给容器起个名字
tutum/influxdb:以这个镜像运行容器(本地有使用本地,没有先去下载然后启动容器)
 
# 查看容器
[root@master1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
f01c5e754bc0        tutum/influxdb      "/run.sh"           3 seconds ago       Up 2 seconds        0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   influxdb

配置InfluxDB

登录InfluxDB的8083端口(管理平台),设置管理员用户名密码,并添加数据库

登录URL:http://192.168.15.129:8083

设置管理员用户名密码,并添加数据库
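除了在 8083 的 Web 管理界面里操作,也可以尝试直接调用 InfluxDB 的 HTTP API 来建库(示例,假设 InfluxDB 监听在 192.168.15.129:8086,库名为 cadvisor):

# curl -POST 'http://192.168.15.129:8086/query' --data-urlencode "q=CREATE DATABASE cadvisor"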

3. 创建cadvisor容器

# 创建cadvisor容器
[root@master1 ~]# docker run -itd --name cadvisor -p 8080:8080 --mount type=bind,src=/,dst=/rootfs,ro --mount type=bind,src=/var/run,dst=/var/run --mount type=bind,src=/sys,dst=/sys,ro --mount type=bind,src=/var/lib/docker/,dst=/var/lib/docker,ro google/cadvisor -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_user=root -storage_driver_password=root -storage_driver_host=192.168.15.129:8086
 
参数详解:
-itd:以交互模式运行容器,并分配伪终端,同时在后台启动容器
-p: 端口映射 8080为cadvisor的管理平台端口
--name:给容器起个名字
--mount:把宿主机的相关目录绑定到容器中,这些目录都是cadvisor需要采集的目录文件和监控内容
google/cadvisor:以这个镜像运行容器(本地有使用本地,没有先去下载然后启动容器)
-storage_driver:需要指定cadvisor的存储驱动这里是influxdb
-storage_driver_db:需要指定存储的数据库
-storage_driver_user:influxdb数据库的用户名(测试可以加可以不加)
-storage_driver_password:influxdb数据库的密码(测试可以加可以不加)
-storage_driver_host:influxdb数据库的地址和端口
 
# 查看容器
[root@master1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
7c2005bb79d1        google/cadvisor     "/usr/bin/cadvisor -…"   3 seconds ago       Up 2 seconds        0.0.0.0:8080->8080/tcp                           cadvisor
2fa150d3c52b        tutum/influxdb      "/run.sh"                10 minutes ago      Up 10 minutes       0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   influxdb

查看cadvisor管理平台
登录URL:http://192.168.15.129:8080

登录数据库查看有没有把采集的数据写入(执行 SHOW MEASUREMENTS 命令)

得到上面的结果说明已经采集到数据并且写入到数据库了
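如果容器内带有 influx 命令行客户端,也可以进入容器里确认数据是否写入(示例):

[root@master1 ~]# docker exec -it influxdb influx
> SHOW DATABASES
> USE cadvisor
> SHOW MEASUREMENTS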

4. 创建grafana容器

# 创建grafana容器
[root@master1 ~]# docker run -itd --name grafana  -p 3000:3000 grafana/grafana
 
参数详解:
-itd:以交互模式运行容器,并分配伪终端,同时在后台启动容器
-p: 端口映射 3000为grafana的管理平台端口
--name:给容器起个名字
grafana/grafana:以这个镜像运行容器(本地有使用本地,没有先去下载然后启动容器)
 
# 查看容器
[root@master1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
57f335665902        grafana/grafana     "/run.sh"                2 seconds ago       Up 1 second         0.0.0.0:3000->3000/tcp                           grafana
7c2005bb79d1        google/cadvisor     "/usr/bin/cadvisor -…"   15 minutes ago      Up 15 minutes       0.0.0.0:8080->8080/tcp                           cadvisor
2fa150d3c52b        tutum/influxdb      "/run.sh"                25 minutes ago      Up 25 minutes       0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   influxdb

配置granfana
登录URL:http://192.168.15.129:3000
默认用户名:admin
默认密码:admin
温馨提示:
   首次登录会提示修改密码才可以登录,我这里修改密码为admin

 得到上面的结果表示整个监控已经部署完成,并可以对基础指标进行实时监控。具体需要监控什么、grafana怎样排版、怎样起名字,根据个人的业务需求来进行设置即可
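如果想用脚本代替页面点击,也可以尝试通过 Grafana 的 HTTP API 添加 InfluxDB 数据源(示例,假设管理员账号密码均为 admin,数据库为 cadvisor,各参数按实际环境调整):

curl -s -X POST http://admin:admin@192.168.15.129:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"influxdb-cadvisor","type":"influxdb","url":"http://192.168.15.129:8086","access":"proxy","database":"cadvisor","user":"root","password":"root"}'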


三、Swarm多节点部署

刚刚上面的例子是在一台主机上监控一台主机的容器信息,这里我们要使用Swarm的集群部署多台主机容器之间的监控
温馨提示:
   主机IP:192.168.15.129 主机名:master1  角色:Swarm的主         granfana容器 influxdb容器 cadvisor容器
   主机IP:192.168.15.130 主机名:node1    角色:Swarm的node节点   cadvisor容器
   主机IP:192.168.15.131 主机名:node2    角色:Swarm的node节点   cadvisor容器

1. 准备工作

# 创建InfluxDB的宿主机目录挂载到容器
[root@master1 ~]# mkdir -p /opt/influxdb
 
# 下载镜像(可做可不做,在创建容器的时候会如果本地没有会自动下载)
[root@master1 ~]# docker pull tutum/influxdb
[root@master1 ~]# docker pull google/cadvisor
[root@master1 ~]# docker pull grafana/grafana
 
# 查看镜像
[root@master1 ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
grafana/grafana     latest              7038dbc9a50c        7 days ago          223MB
google/cadvisor     latest              75f88e3ec333        10 months ago       62.2MB
tutum/influxdb      latest              c061e5808198        2 years ago         290MB

2. 编写创建容器的yml文件

# 编写docker-compose.yml文件
[root@master1 ~]# mkdir test
[root@master1 test]# cat docker-compose.yml
version: '3.7'
 
services:
  influx:
    image: tutum/influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
    volumes:
      - "/opt/influxdb:/var/lib/influxdb"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role==manager]
 
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - "influx"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role==manager]
 
  cadvisor:
    image: google/cadvisor
    ports:
      - "8080:8080"
    hostname: '{{.Node.Hostname}}'
    command: -logtostderr -docker_only -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influx:8086
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
      - influx
    deploy:
      mode: global
 
volumes:
  influx:
    driver: local
  grafana:
    driver: local

3. 创建Swarm集群

# 在master1上执行
[root@master1 test]# docker swarm init --advertise-addr 192.168.15.129
Swarm initialized: current node (xtooqr30af6fdcu51jzdv79wh) is now a manager.
 
To add a worker to this swarm, run the following command:
    # 这里已经提示使用下面的命令在node节点上执行就可以加入集群(前提docker服务一定是启动的)
    docker swarm join --token SWMTKN-1-3yyjydabd8v340kptius215s29rbsq8tviy00s08g6md1y25k2-81tp7lpv114a393g4wlgx4a30 192.168.15.129:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
 
 
# 在node1和node2上执行
[root@node1 ~]# docker swarm join --token SWMTKN-1-3yyjydabd8v340kptius215s29rbsq8tviy00s08g6md1y25k2-81tp7lpv114a393g4wlgx4a30 192.168.15.129:2377
This node joined a swarm as a worker
 
[root@node2 ~]# docker swarm join --token SWMTKN-1-3yyjydabd8v340kptius215s29rbsq8tviy00s08g6md1y25k2-81tp7lpv114a393g4wlgx4a30 192.168.15.129:2377
This node joined a swarm as a worker.
 
# 在master1上查看集群主机
[root@master1 test]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
xtooqr30af6fdcu51jzdv79wh *   master1             Ready               Active              Leader              18.06.1-ce
y24c6sfs3smv5sd5h7k66x8zv     node1               Ready               Active                                  18.06.1-ce
k554xe59lcaeu1suaguvxdnel     node2               Ready               Active                                  18.06.1-ce

4. 创建集群容器

# 创建集群容器
[root@master1 test]# docker stack deploy -c docker-compose.yml swarm-monitor
Creating network swarm-monitor_default
Creating service swarm-monitor_cadvisor
Creating service swarm-monitor_influx
Creating service swarm-monitor_grafana
 
 
# 查看创建的容器
[root@master1 test]# docker service  ls
ID                  NAME                     MODE                REPLICAS            IMAGE                    PORTS
wn36f7be6i5a        swarm-monitor_cadvisor   global              3/3                 google/cadvisor:latest   *:8080->8080/tcp
ufn3lqbhbww3        swarm-monitor_grafana    replicated          1/1                 grafana/grafana:latest   *:3000->3000/tcp
lf0z6dp1u8sn        swarm-monitor_influx     replicated          1/1                 tutum/influxdb:latest    *:8083->8083/tcp, *:8086->8086/tcp
 
# 查看容器的服务
[root@master1 test]# docker service ps swarm-monitor_cadvisor
ID                  NAME                                               IMAGE                    NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
vy1kqg5u8x3f        swarm-monitor_cadvisor.k554xe59lcaeu1suaguvxdnel   google/cadvisor:latest   node2               Running             Running about a minute ago                      
a08b5bysra3d        swarm-monitor_cadvisor.y24c6sfs3smv5sd5h7k66x8zv   google/cadvisor:latest   node1               Running             Running about a minute ago                      
kkca4kyojgr2        swarm-monitor_cadvisor.xtooqr30af6fdcu51jzdv79wh   google/cadvisor:latest   master1             Running             Running 59 seconds ago  
 
[root@master1 test]# docker service ps swarm-monitor_grafana
ID                  NAME                      IMAGE                    NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
klyjl7rxzmoz        swarm-monitor_grafana.1   grafana/grafana:latest   master1             Running             Running about a minute ago       
 
[root@master1 test]# docker service ps swarm-monitor_influx
ID                  NAME                     IMAGE                   NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
pan5yvwq7b79        swarm-monitor_influx.1   tutum/influxdb:latest   master1             Running             Running about a minute ago    

5. 访问web测试
1) 访问influxdb并创建数据库
登录InfluxDB的8083端口,并添加数据库
登录URL:http://192.168.15.129:8083

2) 访问cadvisor
登录URL:http://192.168.15.129:8080
登录数据库查看有没有把采集的数据写入

 

3) 访问grafana并配置
登录URL:http://192.168.15.129:3000
默认用户名:admin
默认密码:admin
温馨提示:
   首次登录会提示修改密码才可以登录,我这里修改密码为admin

这个动图比较长,主要是对grafana的配置操作。注意里面的alpine_test容器不是和集群一起创建的,是我单独创建的

做到以上的效果,说明已经部署成功了,具体的配置方案就是因需求而异了


kubernetes/K8s镜像Pod应用更新方式


kubernetes/K8s镜像Pod应用更新方式

镜像更新

kubernetes集群中镜像有三种更新方式,无论哪一种都属于滚动式更新,在更新过程中服务不会中断

  1. 编辑已存在的yaml文件,使用apply命令更新

    以nginx镜像为例,查看现有nginx版本

    [root@k8s-node2 .ssh]# curl -I 10.10.10.4:88
    
    [root@k8s-master ~]# vim nginx-deploy.yaml
    

    编辑文件,把版本更改成1.11

    执行apply命令

    [root@k8s-master ~]# kubectl apply -f nginx-deploy.yaml
    

    查看更新发布过程

    [root@k8s-master ~]# kubectl rollout status deploy nginx-test
    

    访问验证

    [root@k8s-node1 ssh]# curl -I 10.10.10.4:88
    

    查看更新发布历史

    回滚历史版本

    [root@k8s-master ~]# kubectl rollout undo deploy nginx-test --to-revision=7
    
  2. 直接编辑deployment内容

    查看deploy

    [root@k8s-master ~]# kubectl get deploy
    

    编辑deploy

    [root@k8s-master ~]# kubectl edit deploy nginx-test
    

    直接修改相关内容即可自动更新

  3. 使用kubectl set命令
    [root@k8s-master ~]# kubectl set image deploy nginx-test nginx=nginx:1.11
    

# 查看集群信息
$ kubectl cluster-info

# kubeadm会自动检查当前环境是否有上次命令执行的“残留”。如果有,必须清理后再行执行init。我们可以通过”kubeadm reset”来清理环境,以备重来
$ kubeadm reset

# 获取nodes节点
$ kubectl get nodes

# 删除node节点
$ kubectl delete node c7

# 获取pods
$ kubectl get pods --all-namespaces

# 查看某个pod的状态
$ kubectl describe pod kube-dns -n kube-system

# 重新生成 token kube1
$ kubeadm token generate
$ kubeadm token create <generated-token> --print-join-command --ttl=24h

kubectl apply -f kubernetes-dashboard.yaml 
kubectl delete -f kubernetes-dashboard.yaml 
# 我们发现deployment的create和apply命令都带有一个–record参数,这是告诉apiserver记录update的历史。
# 通过kubectl rollout history可以查看deployment的update history:
kubectl apply -f deployment-demo-v0.2.yaml --record
kubectl rollout history deployment deployment-demo
# Deployment下Pod的回退操作异常简单,通过rollout undo即可完成。
# rollout undo会将Deployment回退到record中的上一个revision(见上面rollout history的输出中有revision列):
kubectl rollout undo deployment deployment-demo

# 更新svc
kubectl replace -f xxx.yaml
# 强制更新svc
kubectl replace -f xxx.yaml --force
kubectl edit

# 查看详细信息(包含错误信息)
kubectl describe pod kube-dns -n kube-system
kubectl describe deployment deployment-demo

kubectl logs kubernetes-dashboard-67589f8d6b-l7tfd -n kube-system
kubectl delete pod prometheus-tim-3864503240-rwpq5 -n kube-system

kubectl get deployment --all-namespaces
kubectl get svc  --all-namespaces
kubectl get pod  -o wide  --all-namespaces

kubectl exec nginx-9d85d49b7-7knw6 env
kubectl describe svc/nginx
kubectl get all
kubectl get rs
kubectl get rc
kubectl get deployments

kubectl get svc,ep
# k8s的LVS方案,内置了nginx
kubectl get ingress

# kubernetes在kubectl cli工具中仅提供了对Replication Controller的rolling-update支持,通过kubectl -help,我们可以查看到下面的命令usage描述:
kubectl rolling-update [metadata.name] --update-period=10s -f xxx.yaml
kubectl rolling-update hello-rc --image=index.tenxcloud.com/tailnode/hello:v2.0

# 升级镜像版本
kubectl -n default set image deployments/gateway gateway=192.168.31.149:5000/dev/core-gateway:latest


#如果在升级过程中出现问题(比如长时间无响应),可以CTRL+C结束再使用kubectl rolling-update hello-rc --rollback进行回滚,但如果升级完成后出现问题(比如新版本程序出core),此命令无能为力,需要使用同样方法“升级”为旧版本

# kubernetes Deployment是一个更高级别的抽象,就像文章开头那幅示意图那样,Deployment会创建一个Replica Set,用来保证Deployment中Pod的副本数。
# 由于kubectl rolling-update仅支持replication controllers,因此要想rolling-updata deployment中的Pod,你需要修改Deployment自己的manifest文件并应用。
# 这个修改会创建一个新的Replica Set,在scale up这个Replica Set的Pod数的同时,减少原先的Replica Set的Pod数,直至zero。
# 而这一切都发生在Server端,并不需要kubectl参与。

# busybox
kubectl exec busybox -- nslookup kube-dns.kube-system

kubectl apply -f kubernetes-dashboard.yaml -f account.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

# [Kubernetes Dashboard token失效时间设置](https://blog.csdn.net/u013201439/article/details/80930285)
kubectl edit deployment kubernetes-dashboard -n kube-system

kubectl expose deployment springboot-demo-deployment --type=NodePort
minikube service springboot-demo-deployment --url
curl $(minikube service springboot-demo-deployment --url)/hello

# configMap
kubectl get configmap nginx-config
kubectl get configmap nginx-config -o yaml
kubectl edit configmap env-config
# curl -s https://paste.ubuntu.com/p/ZmyxsHB7Xt/ |sed -n '/api/,/true/p' | sed  's@true@"true"@' | kubectl  create -f -  

# 将名为foo中的pod副本数设置为3
kubectl scale --replicas=3 rs/foo
# 将由“foo.yaml”配置文件中指定的资源对象和名称标识的Pod资源副本设为3
kubectl scale --replicas=3 -f foo.yaml
# 如果当前副本数为2,则将其扩展至3。
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
# 设置多个RC中Pod副本数量。
kubectl scale --replicas=5 rc/foo rc/bar rc/baz

# 默认情况下,为了保证 master 的安全,master 是不会被调度到 app 的。你可以取消这个限制通过输入
$ kubectl taint nodes --all node-role.kubernetes.io/master-

# 修改kube-proxy访问apiserver指向keepalived的虚拟ip
kubectl get configmap -n kube-system kube-proxy -o yaml > kube-proxy-cm.yaml
sed -i "s#server:.*#server: https://${vip}:6443#g" kube-proxy-cm.yaml
kubectl apply -f kube-proxy-cm.yaml --force
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
kubernetes的想法是将实例紧密包装到尽可能接近100%。 所有的部署应该与CPU /内存限制固定在一起。 所以如果调度程序发送一个pod到一台机器,它不应该使用交换。 设计者不想交换,因为它会减慢速度。

所以关闭swap主要是为了性能考虑。

当然为了一些节省资源的场景,比如运行容器数量较多,可添加kubelet参数 --fail-swap-on=false来解决。


关闭swap

swapoff -a
再把/etc/fstab文件中带有swap的行删了,没有就无视
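如果想用一条命令把 /etc/fstab 中的 swap 行注释掉,可以参考下面的写法(示例,会先备份为 /etc/fstab.bak,执行前请确认文件内容):

swapoff -a
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab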

kubernetes/K8s镜像Pod应用滚动更新,回滚三种方式


更新k8s镜像版本的三种方式

一、知识准备

更新镜像版本是在k8s日常使用中非常常见的一种操作,本文主要介绍更新介绍的三种方法


二、环境准备

组件与版本:
OS:Ubuntu 18.04.1 LTS
docker:18.06.0-ce


三、准备镜像

首先准备2个不同版本的镜像,用于测试(已经在阿里云上创建好2个不同版本的nginx镜像)

docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v2

这两个镜像只有版本号不同,其他的都一样

root@k8s-master:~# docker run -d --rm -p 10080:80 nginx:v1
e88097841c5feef92e4285a2448b943934ade5d86412946bc8d86e262f80a050
root@k8s-master:~# curl http://127.0.0.1:10080
----------
version: v1
hostname: f5189a5d3ad3

四、更新镜像的三种方法

我们首先准备一个yaml文件用于测试:

root@k8s-master:~# more image_update.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: image-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: image-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
    selector:
      app: image-update
    ports:
    - protocol: TCP
      port: 10080
      targetPort: 80

简单验证一下:

root@k8s-master:~# kubectl apply -f image_update.yaml
deployment.extensions "image-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
nginx-service   ClusterIP   10.254.240.225   <none>        10080/TCP   1m
root@k8s-master:~# kubectl get pod  -owide
NAME                                READY     STATUS    RESTARTS   AGE       IP              NODE
image-deployment-58b646ffb6-d4sl7   1/1       Running   0          1m        10.10.169.131   k8s-node2
root@k8s-master:~# curl http://10.254.240.225:10080
----------
version: v1
hostname: image-deployment-58b646ffb6-d4sl7

已经正常工作了,并且当前版本是v1

下面介绍修改镜像的方法

(1)修改配置文件

这应该是最常用的方法了

修改配置文件,将nginx:v1改成nginx:v2

root@k8s-master:~# sed -i 's/nginx:v1/nginx:v2/g' image_update.yaml

应用配置文件:

root@k8s-master:~# kubectl apply -f image_update.yaml
deployment.extensions "image-deployment" configured
service "nginx-service" unchanged
root@k8s-master:~# kubectl get pod  -owide
NAME                                READY     STATUS              RESTARTS   AGE       IP              NODE
image-deployment-55cb946d47-7tzp8   0/1       ContainerCreating   0          16s       <none>          k8s-node1
image-deployment-58b646ffb6-d4sl7   1/1       Terminating         0          11m       10.10.169.131   k8s-node2

等待一段时间之后,v2版本ready之后

root@k8s-master:~# kubectl get pod  -owide
NAME                                READY     STATUS    RESTARTS   AGE       IP              NODE
image-deployment-55cb946d47-7tzp8   1/1       Running   0          1m        10.10.36.119    k8s-node1
root@k8s-master:~# curl http://10.254.240.225:10080
----------
version: v2
hostname: image-deployment-55cb946d47-7tzp8

成功更新为v2

(2)使用patch命令

首先找到deployment:

root@k8s-master:~# kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
image-deployment   1         1         1            1           20m

通过patch更新:

root@k8s-master:~# kubectl patch deployment image-deployment --patch '{"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1"}]}}}}'
deployment.extensions "image-deployment" patched

等待一段时间之后:

root@k8s-master:~# curl http://10.254.240.225:10080
----------
version: v1
hostname: image-deployment-58b646ffb6-hbzk9

通过patch更新之后,镜像版本更新回v1

(3)使用set image命令

使用set image命令将镜像版本更新到v2

root@k8s-master:~# kubectl set image deploy image-deployment *=registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v2
root@k8s-master:~# curl http://10.254.240.225:10080
----------
version: v2
hostname: image-deployment-55cb946d47-zsdc6

等待一段时间之后,版本又更新到v2

五、小结

● 本文介绍了3种方法更新镜像版本,分别是:配置文件;patch方式;set image方式



k8s deployment的滚动更新


详细聊聊k8s deployment的滚动更新(二)

一、知识准备

● 本文详细探索deployment在滚动更新时候的行为
● 相关的参数介绍:
  livenessProbe:存活性探测。判断pod是否已经停止
  readinessProbe:就绪性探测。判断pod是否能够提供正常服务
  maxSurge:在滚动更新过程中最多可以存在的pod数
  maxUnavailable:在滚动更新过程中最多不可用的pod数


二、环境准备

组件与版本:
OS:Ubuntu 18.04.1 LTS
docker:18.06.0-ce


三、准备镜像、yaml文件

首先准备2个不同版本的镜像,用于测试(已经在阿里云上创建好2个不同版本的nginx镜像)

docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1

2个镜像都提供相同的服务,只不过nginx:delay_v1会延迟20秒才启动nginx

root@k8s-master:~# docker run -d --rm -p 10080:80 nginx:v1
e88097841c5feef92e4285a2448b943934ade5d86412946bc8d86e262f80a050
root@k8s-master:~# curl http://127.0.0.1:10080
----------
version: v1
hostname: f5189a5d3ad3

yaml文件:

root@k8s-master:~# more roll_update.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: update-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: roll-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
    selector:
      app: roll-update
    ports:
    - protocol: TCP
      port: 10080
      targetPort: 80

四、livenessProbe与readinessProbe

livenessProbe:存活性探测,最主要是用来探测pod是否需要重启
readinessProbe:就绪性探测,用来探测pod是否已经能够提供服务

● 在滚动更新的过程中,pod会动态的被delete,然后又被create出来。存活性探测保证了始终有足够的pod存活提供服务,一旦出现pod数量不足,k8s会立即拉起新的pod
● 但是在pod启动的过程中,服务正在打开,并不可用,这时候如果有流量打过来,就会造成报错

下面来模拟一下这个场景:

首先apply上述的配置文件

root@k8s-master:~# kubectl apply -f roll_update.yaml
deployment.extensions "update-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl get pod -owide
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
update-deployment-7db77f7cc6-c4s2v   1/1       Running   0          28s       10.10.235.232   k8s-master
update-deployment-7db77f7cc6-nfgtd   1/1       Running   0          28s       10.10.36.82     k8s-node1
update-deployment-7db77f7cc6-tflfl   1/1       Running   0          28s       10.10.169.158   k8s-node2
root@k8s-master:~# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
nginx-service   ClusterIP   10.254.254.199   <none>        10080/TCP   1m

重新打开终端,测试当前服务的可用性(每秒做一次循环去获取nginx的服务内容):

root@k8s-master:~# while :; do curl http://10.254.254.199:10080; sleep 1; done
----------
version: v1
hostname: update-deployment-7db77f7cc6-nfgtd
----------
version: v1
hostname: update-deployment-7db77f7cc6-c4s2v
----------
version: v1
hostname: update-deployment-7db77f7cc6-tflfl
----------
version: v1
hostname: update-deployment-7db77f7cc6-nfgtd
...

这时候把镜像版本更新到nginx:delay_v1,这个镜像会延迟启动nginx,也就是说,会先sleep 20s,然后才去启动nginx服务。这就模拟了在服务启动过程中,虽然pod已经是存在的状态,但是并没有真正提供服务

root@k8s-master:~# kubectl patch deployment update-deployment --patch '{"metadata":{"annotations":{"kubernetes.io/change-cause":"update version to v2"}} ,"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1"}]}}}}'
deployment.extensions "update-deployment" patched
...
----------
version: v1
hostname: update-deployment-7db77f7cc6-h6hvt
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-6th87
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-n22vz
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-njmpz
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-6th87

可以看到,由于延迟启动,nginx并没有真正做好准备提供服务,此时流量已经发到后端,导致服务不可用的状态

所以,加入readinessProbe是非常必要的手段:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: update-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: roll-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
    selector:
      app: roll-update
    ports:
    - protocol: TCP
      port: 10080
      targetPort: 80

重复上述步骤,先创建nginx:v1,然后patch到nginx:delay_v1

root@k8s-master:~# kubectl apply -f roll_update.yaml
deployment.extensions "update-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl patch deployment update-deployment --patch '{"metadata":{"annotations":{"kubernetes.io/change-cause":"update version to v2"}} ,"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1"}]}}}}'
deployment.extensions "update-deployment" patched
root@k8s-master:~# kubectl get pod -owide
NAME                                 READY     STATUS        RESTARTS   AGE       IP              NODE
busybox                              1/1       Running       0          45d       10.10.235.255   k8s-master
lifecycle-demo                       1/1       Running       0          32d       10.10.169.186   k8s-node2
private-reg                          1/1       Running       0          92d       10.10.235.209   k8s-master
update-deployment-54d497b7dc-4mlqc   0/1       Running       0          13s       10.10.169.178   k8s-node2
update-deployment-54d497b7dc-pk4tb   0/1       Running       0          13s       10.10.36.98     k8s-node1
update-deployment-6d5d7c9947-l7dkb   1/1       Terminating   0          1m        10.10.169.177   k8s-node2
update-deployment-6d5d7c9947-pbzmf   1/1       Running       0          1m        10.10.36.97     k8s-node1
update-deployment-6d5d7c9947-zwt4z   1/1       Running       0          1m        10.10.235.246   k8s-master

● Because a readinessProbe is configured, a pod that has just started is not put into service immediately, which is why READY shows 0/1
● Some pods also stay in the Terminating state for a while, because the rolling-update constraints require a minimum number of pods to remain available

Checking the curl loop again, the image version was updated smoothly to nginx:delay_v1 with no errors:

root@k8s-master:~# while :; do curl http://10.254.66.136:10080; sleep 1; done
...
version: v1
hostname: update-deployment-6d5d7c9947-pbzmf
----------
version: v1
hostname: update-deployment-6d5d7c9947-zwt4z
----------
version: v1
hostname: update-deployment-6d5d7c9947-pbzmf
----------
version: v1
hostname: update-deployment-6d5d7c9947-zwt4z
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-pk4tb
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-4mlqc
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-pk4tb
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-4mlqc
...

5. maxSurge and maxUnavailable

● A rolling update can proceed in different orders: delete old pods first and then add new ones, or add new pods first and then delete old ones. Throughout the process the service must remain available (that is, the livenessProbe and readinessProbe checks must keep passing)
● In practice, maxSurge and maxUnavailable control whether old pods are removed first or new pods are added first, and at what granularity (a YAML sketch follows this list)
● With replicas set to 3:
  maxSurge=1 maxUnavailable=0: at most 4 (3+1) pods may exist, and 3 (3-0) pods must be serving at all times. A new pod is created first, and an old pod is deleted only after the new one is ready, until everything is updated
  maxSurge=0 maxUnavailable=1: at most 3 (3+0) pods may exist, and 2 (3-1) pods must be serving at all times. An old pod is deleted first, and then a new pod is created, until everything is updated
● In the end both conditions must be satisfied; if maxSurge and maxUnavailable are both 0 there is no way to update, because pods can be neither deleted nor added, so that combination can never be met
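
The strategy is configured on the Deployment itself. A minimal sketch (field names follow the Deployment API; the deployment name and image are just the ones reused from the example above, so adjust them to your own workload):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: update-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most replicas+1 pods may exist during the update
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: roll-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10

Both fields also accept percentages (for example maxSurge: 25%), which is what Kubernetes defaults to when the strategy block is omitted.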

6. Summary

● This article covered how maxSurge, maxUnavailable, livenessProbe, and readinessProbe work together during a Deployment rolling update
● One open issue remains: in a large system where a workload runs many pods (say 100), a rolling update inevitably leaves the pods on mixed versions for a while (some old, some new), so users may see inconsistent results across successive requests until the update finishes. That problem will be discussed separately later

Fixing the 1-year default certificate validity in Kubernetes



Pull the source code

cd /data && git clone https://github.com/kubernetes/kubernetes.git


Check out the target version

Using v1.12.3 as an example:

git checkout -b remotes/origin/release-1.12  v1.12.3


Install the Go environment

cd /data/soft && wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
tar zxvf go1.11.2.linux-amd64.tar.gz  -C /usr/local 

Edit /etc/profile and add the following:

#go setting
export GOROOT=/usr/local/go
export GOPATH=/usr/local/gopath
export PATH=$PATH:$GOROOT/bin

Run source /etc/profile to make it take effect.

Verify:

go version
go version go1.11.2 linux/amd64


Modify the source code

/data/kubernetes/staging/src/k8s.io/client-go/util/cert/cert.go

112  NotAfter:     time.Now().Add(duration365d * 10).UTC(),
187  NotAfter:  validFrom.Add(maxAge *10),
215  NotAfter:  validFrom.Add(maxAge * 10),

These values were originally 1 year; multiplying by 10 makes them 10 years.


Build kubeadm

cd /data/kubernetes/ && make WHAT=cmd/kubeadm


Check the built binary

ls -l /data/kubernetes/_output/bin/kubeadm

Replace kubeadm

mv /usr/bin/kubeadm /usr/bin/kubeadm_backup
ln -s /data/kubernetes/_output/bin/kubeadm /usr/bin/kubeadm
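
Before reinitializing anything, it is worth confirming that the kubeadm now on the PATH is the freshly built binary; the version printed should match the tag that was checked out:

which kubeadm
kubeadm version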

Reinitialize the cluster
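
A minimal sketch of this step, assuming a kubeadm-built cluster; kubeadm-config.yaml stands for whatever configuration the cluster was originally initialized with (the file name here is only a placeholder), and resetting wipes the control plane on that node, so back up /etc/kubernetes first:

cp -a /etc/kubernetes /etc/kubernetes.bak
kubeadm reset     # may ask for confirmation; --force skips the prompt
kubeadm init --config kubeadm-config.yaml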


Check the certificates

cd /etc/kubernetes/pki

openssl x509 -in front-proxy-client.crt   -noout -text  |grep Not
            Not Before: Nov 28 09:07:02 2018 GMT
            Not After : Nov 25 09:07:03 2028 GMT

openssl x509 -in apiserver.crt   -noout -text  |grep Not
            Not Before: Nov 28 09:07:04 2018 GMT
            Not After : Nov 25 09:07:04 2028 GMT

WebLogic patch 2019-10-15, p30109677_1036_Generic.zip installation steps


WebLogic 10.3.6 deserialization vulnerability patch (3L3H) upgrade procedure


WebLogic patch, 2019-10-15: p30109677_1036_Generic.zip

WebLogic patch, 2019-10-15: p30386660_122130_Generic.zip

Inside the unzipped patch package, 3L3H is the PATCH_ID; the bundled readme explains how to install, uninstall, and so on.

Uninstalling an old patch

    Navigate to the {MW_HOME}/utils/bsu directory.
    Execute bsu.sh -remove -patchlist={PATCH_ID} -prod_dir={MW_HOME}/{WL_HOME}

$cd /home/weblogic/Oracle/Middleware/utils/bsu
$./bsu.sh -remove -patchlist=MXLE -prod_dir=/home/weblogic/Oracle/Middleware/wlserver_10.3

Unzipping the patch
The readme says: unzip p30109677_1036_Generic.zip to {MW_HOME}/utils/bsu/cache_dir
i.e. unzip p30109677_1036_Generic.zip to /home/weblogic/Oracle/Middleware/utils/bsu/cache_dir
Running that literally produces an error:
(Archive: p30109677_1036_Generic.zip
caution: filename not matched: to
caution: filename not matched: /home/weblogic/Oracle/Middleware/utils/bsu/cache_dir)
Run the command with -d instead to extract into the target directory:
$unzip p30109677_1036_Generic.zip -d /home/weblogic/Oracle/Middleware/utils/bsu/cache_dir

Viewing applied patches
$cd /home/weblogic/Oracle/Middleware/utils/bsu
$./bsu.sh -prod_dir=/home/weblogic/Oracle/Middleware/wlserver_10.3/ -status=applied -verbose -view

Installing the patch
bsu.sh -install -patch_download_dir={MW_HOME}/utils/bsu/cache_dir -patchlist={PATCH_ID} -prod_dir={MW_HOME}/{WL_HOME}
$./bsu.sh -install -patch_download_dir=/home/weblogic/Oracle/Middleware/utils/bsu/cache_dir -patchlist=3L3H -prod_dir=/home/weblogic/Oracle/Middleware/wlserver_10.3
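
Once the install command finishes, a reasonable sanity check (sketch only; the domain path below is just an example, adjust it to your own domain) is to list the applied patches again and restart the servers so the patched classes are loaded:

$./bsu.sh -prod_dir=/home/weblogic/Oracle/Middleware/wlserver_10.3 -status=applied -verbose -view
$/home/weblogic/Oracle/Middleware/user_projects/domains/base_domain/bin/stopWebLogic.sh
$nohup /home/weblogic/Oracle/Middleware/user_projects/domains/base_domain/bin/startWebLogic.sh &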

Installation result
 

Upgrading CentOS/RHEL to openssh-8.1p1 to deal with vulnerability scans


CentOS 6 ships a very old OpenSSH (only 5.3), so it needs to be upgraded to the latest openssh 8.1p1.

For convenience, an RPM package set was built that installs on CentOS 6 / RHEL 6 and has been verified to work.

To install, first download the openssh 8.1p1 RPM packages from the attachment, then run the commands below.

#:  yum -y install gcc gcc-c++ zlib zlib-devel openssl openssl-devel pam pam-devel rpm-build libedit initscripts libXt-devel imake gtk2-devel unzip

 Install the dependency packages; not every one of them is strictly required.

#:  rpm -e `rpm -qa |grep openssh` --nodeps

Remove the old openssh packages.

#:  rpm -ivh *.rpm 

  Install all of the new openssh 8.1p1 packages.

openssh 8.1p1 x86_64 RPM packages (attachment)


One more important thing: after installation, edit the configuration file to allow the root account to log in (a sketch of the change follows).
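
A hedged example of that change (assuming the stock /etc/ssh/sshd_config path; newer OpenSSH releases default PermitRootLogin to prohibit-password, so set it explicitly if root password logins are still required):

#:  sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
#:  grep ^PermitRootLogin /etc/ssh/sshd_config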

If you perform the upgrade over an SSH session, restarting the sshd service will drop your connection and can leave the restart unfinished.

To avoid that, install screen first, put the commands below into a script, and run that script inside a screen session; sshd then restarts cleanly (a sketch of the whole screen workflow follows the two commands),

and SSH connections keep working after the restart:

#: /etc/init.d/sshd stop       # stop the service

#: /etc/init.d/sshd restart    # restart the service to test it

Finally, open a separate remote session and confirm that SSH logins still work.
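
A minimal sketch of the screen-based workaround (the session name and script path are only placeholders):

#:  yum -y install screen
#:  screen -S sshd_upgrade
#:  cat /root/restart_sshd.sh
#!/bin/bash
# stop and restart sshd in one shot so a dropped connection cannot interrupt it halfway
/etc/init.d/sshd stop
/etc/init.d/sshd restart
#:  sh /root/restart_sshd.sh

Even if the current SSH connection drops, the script keeps running inside the screen session and sshd comes back up.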


Of course, OpenSSH 8.1p1 can also be compiled and installed from source using the procedure below.

OpenSSH-8.1p1

Introduction to OpenSSH

The OpenSSH package contains ssh clients and the sshd daemon. This is useful for encrypting authentication and subsequent traffic over a network. The ssh and scp commands are secure implementations of telnet and rcp respectively.

This package is known to build and work properly using an LFS-9.0 platform.

Package Information

OpenSSH Dependencies

Optional

GDB-8.3.1 (for tests), Linux-PAM-1.3.1, X Window System, MIT Kerberos V5-1.17.1, libedit, LibreSSL Portable, OpenSC, and libsectok

Optional Runtime (Used only to gather entropy)

OpenJDK-12.0.2, Net-tools-CVS_20101030, and Sysstat-12.2.0

User Notes: http://wiki.linuxfromscratch.org/blfs/wiki/OpenSSH

Installation of OpenSSH

OpenSSH runs as two processes when connecting to other computers. The first process is a privileged process and controls the issuance of privileges as necessary. The second process communicates with the network. Additional installation steps are necessary to set up the proper environment, which are performed by issuing the following commands as the root user:

install  -v -m700 -d /var/lib/sshd &&
chown    -v root:sys /var/lib/sshd &&

groupadd -g 50 sshd        &&
useradd  -c 'sshd PrivSep' \
         -d /var/lib/sshd  \
         -g sshd           \
         -s /bin/false     \
         -u 50 sshd

Install OpenSSH by running the following commands:

./configure --prefix=/usr                     \
            --sysconfdir=/etc/ssh             \
            --with-md5-passwords              \
            --with-privsep-path=/var/lib/sshd &&
make

The testsuite requires an installed copy of scp to complete the multiplexing tests. To run the test suite, first copy the scp program to /usr/bin, making sure that you backup any existing copy first.

To test the results, issue: make tests.

Now, as the root user:

make install &&
install -v -m755    contrib/ssh-copy-id /usr/bin     &&

install -v -m644    contrib/ssh-copy-id.1 \
                    /usr/share/man/man1              &&
install -v -m755 -d /usr/share/doc/openssh-8.1p1     &&
install -v -m644    INSTALL LICENCE OVERVIEW README* \
                    /usr/share/doc/openssh-8.1p1

Command Explanations

--sysconfdir=/etc/ssh: This prevents the configuration files from being installed in /usr/etc.

--with-md5-passwords: This enables the use of MD5 passwords.

--with-pam: This parameter enables Linux-PAM support in the build.

--with-xauth=/usr/bin/xauth: Set the default location for the xauth binary for X authentication. Change the location if xauth will be installed to a different path. This can also be controlled from sshd_config with the XAuthLocation keyword. You can omit this switch if Xorg is already installed.

--with-kerberos5=/usr: This option is used to include Kerberos 5 support in the build.

--with-libedit: This option enables line editing and history features for sftp.

Configuring OpenSSH

Config Files

~/.ssh/*, /etc/ssh/ssh_config, and /etc/ssh/sshd_config

There are no required changes to any of these files. However, you may wish to view the /etc/ssh/ files and make any changes appropriate for the security of your system. One recommended change is that you disable root login via ssh. Execute the following command as the root user to disable root login via ssh:

echo "PermitRootLogin no" >> /etc/ssh/sshd_config

If you want to be able to log in without typing in your password, first create ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub with ssh-keygen and then copy ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the remote computer that you want to log into. You'll need to change REMOTE_USERNAME and REMOTE_HOSTNAME for the username and hostname of the remote computer and you'll also need to enter your password for the ssh-copy-id command to succeed:

ssh-keygen &&
ssh-copy-id -i ~/.ssh/id_rsa.pub REMOTE_USERNAME@REMOTE_HOSTNAME

Once you've got passwordless logins working it's actually more secure than logging in with a password (as the private key is much longer than most people's passwords). If you would like to now disable password logins, as the root user:

echo "PasswordAuthentication no" >> /etc/ssh/sshd_config &&
echo "ChallengeResponseAuthentication no" >> /etc/ssh/sshd_config

If you added Linux-PAM support and you want ssh to use it then you will need to add a configuration file for sshd and enable use of LinuxPAM. Note, ssh only uses PAM to check passwords, if you've disabled password logins these commands are not needed. If you want to use PAM, issue the following commands as the root user:

sed 's@d/login@d/sshd@g' /etc/pam.d/login > /etc/pam.d/sshd &&
chmod 644 /etc/pam.d/sshd &&
echo "UsePAM yes" >> /etc/ssh/sshd_config

Additional configuration information can be found in the man pages for sshd, ssh and ssh-agent.

Boot Script

To start the SSH server at system boot, install the /etc/rc.d/init.d/sshd init script included in the blfs-bootscripts-20191204 package.

make install-sshd

Contents

Installed Programs: scp, sftp, slogin (symlink to ssh), ssh, ssh-add, ssh-agent, ssh-copy-id, ssh-keygen, ssh-keyscan, and sshd
Installed Libraries: None
Installed Directories: /etc/ssh, /usr/share/doc/openssh-8.1p1, and /var/lib/sshd

Short Descriptions

scp

is a file copy program that acts like rcp except it uses an encrypted protocol.

sftp

is an FTP-like program that works over the SSH1 and SSH2 protocols.

slogin

is a symlink to ssh.

ssh

is an rlogin/rsh-like client program except it uses an encrypted protocol.

sshd

is a daemon that listens for ssh login requests.

ssh-add

is a tool which adds keys to the ssh-agent.

ssh-agent

is an authentication agent that can store private keys.

ssh-copy-id

is a script that enables logins on remote machine using local keys.

ssh-keygen

is a key generation tool.

ssh-keyscan

is a utility for gathering public host keys from a number of hosts.

Last updated on 2019-10-12 12:26:41 -0500

Apache security: changing or hiding the version information


Why hide the version?
It keeps attackers from scanning for known vulnerabilities in a specific version and exploiting them to attack the server, avoiding unnecessary damage.

1. Hiding the version in the source files
1) Edit the source file
[root@localhost httpd-2.4.20]# vim include/ap_release.h
Change the following:
#define AP_SERVER_BASEVENDOR "Apache Software Foundation"
#define AP_SERVER_BASEPROJECT "Apache HTTP Server"
#define AP_SERVER_BASEPRODUCT "Apache"
#define AP_SERVER_MAJORVERSION_NUMBER 2
#define AP_SERVER_MINORVERSION_NUMBER 4
#define AP_SERVER_PATCHLEVEL_NUMBER 20
#define AP_SERVER_DEVBUILD_BOOLEAN 0
to:
#define AP_SERVER_BASEVENDOR "Kry"              /* server vendor name */
#define AP_SERVER_BASEPROJECT "Web Server"      /* project name */
#define AP_SERVER_BASEPRODUCT "Kry Web Server"  /* product name */
#define AP_SERVER_MAJORVERSION_NUMBER 8         /* major version number */
#define AP_SERVER_MINORVERSION_NUMBER 8         /* minor version number */
#define AP_SERVER_PATCHLEVEL_NUMBER 8           /* patch level */
#define AP_SERVER_DEVBUILD_BOOLEAN 0

2) Test the result
[root@localhost httpd-2.4.20]# curl -I 192.168.0.146
HTTP/1.1 200 OK
Date: Tue, 10 Jan 2017 16:05:28 GMT
Server: Kry Web Server/8.8.8 (Unix)

3) Note
Not every program that works together with Apache tolerates this. For example, SVN (Subversion) checks the Apache version during installation, and an unrecognized version makes the installation fail, so hiding the version in the source is not suitable in every case.

2. Hiding it completely via the configuration file
[root@localhost ~]# vim /usr/local/apache-2.4.20/conf/extra/httpd-default.conf
Change the following:
ServerTokens Full
ServerSignature Off
to:
ServerTokens Prod        # report only the product name, without version or OS details
ServerSignature Off      # keep this Off so server-generated pages do not append the version

    Check the syntax:   /application/apache/bin/apachectl -t
    Graceful restart:   /application/apache/bin/apachectl graceful
    Check the result:   curl -I 192.168.31.36
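
With ServerTokens Prod in effect, the Server header should be reduced to just the product name. A rough idea of what the check would return (the IP and the remaining headers are placeholders from the example above):

[root@localhost ~]# curl -I 192.168.31.36
HTTP/1.1 200 OK
Server: Apache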
