#SSL
SSLEngine On
#SSLCertificateFile /www/server/panel/vhost/cert/92cto.com/fullchain.pem
#SSLCertificateKeyFile /www/server/panel/vhost/cert/92cto.com/privkey.pem
#SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
sed -i 's/mirrorlist/\#mirrorlist/g' CentOS-Base.repo
sed -i 's/\#baseurl/baseurl/g' CentOS-Base.repo
sed -i 's/mirrorlist/#mirrorlist/g' CentOS-AppStream.repo
sed -i 's/#baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-AppStream.repo
sed -i 's/mirrorlist/#mirrorlist/g' CentOS-Extras.repo
sed -i 's/#baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-Extras.repo
sed -i 's/baseurl=http:\/\/mirror.centos.org\/$contentdir/baseurl=https:\/\/mirrors.aliyun.com\/centos/g' CentOS-Base.repo
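After switching the mirrors, it is usually worth rebuilding the yum metadata cache; a minimal follow-up (assuming the edited .repo files live in /etc/yum.repos.d/):
yum clean all
yum makecache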
Your Kubernetes control-plane has initialized successfully!
To start using your cluster,
you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.10.10:6443 --token kehvmq.e33d33lgkrm8h0rn \
--discovery-token-ca-cert-hash sha256:6150e7960c44890d5dd6b160bbbb4bfa256023db22f004b54d27e1cca72b0afc
Based on the output above, a few more tasks remain. Some errors may appear; adjust the steps to your own environment.
Cgroup drivers in Docker: cgroupfs vs. systemd
During the Kubernetes installation, the following error may appear:
failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
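An alternative, commonly used fix is to switch Docker itself to the systemd cgroup driver instead of changing the kubelet; a sketch (it assumes Docker reads /etc/docker/daemon.json and that the file can be created or merged by hand):
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet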
2018-09-29 15:50:16 starting migration of VM 100 to node 'proxmox233' (192.168.5.233)
2018-09-29 15:50:16 found local disk 'local:iso/CentOS-7-x86_64-DVD-1804.iso' (in current VM config)
2018-09-29 15:50:16 can't migrate local disk 'local:iso/CentOS-7-x86_64-DVD-1804.iso': can't live migrate attached local disks without with-local-disks option
2018-09-29 15:50:16 ERROR: Failed to sync data - can't migrate VM - check log
[root@k8s ~]# podman ps
CONTAINER ID  IMAGE                                                                                                   COMMAND               CREATED        STATUS            PORTS                                        NAMES
4d22f593580a  registry.cn-hangzhou.aliyuncs.com/thundersdata-public/onlyoffice-documentserver-chinese-fonts:5.4.0.21  /bin/sh -c /app/o...  6 minutes ago  Up 5 minutes ago  0.0.0.0:7808->80/tcp, 0.0.0.0:7843->443/tcp  office-doc
NextCloud is a Dropbox-like solution for self-hosted file sharing and syncing. Installing NextCloud 16 on CentOS is quite simple. Whether you want to backup, have file-syncing or just have a Google Calendar alternative, this guide is for you.
What is NextCloud? Is it like a “cloud”?
If you stumbled here by chance and don’t know what NextCloud is, here is an article explaining its principal features and advantages/disadvantages. In this other article you can find NextCloud 16 new features. To tell you the truth, NextCloud is a SaaS cloud; if you want to know more about cloud types you can read this article.
In this article we will cover the installation of the server (not the client).
What’s the newest version?
The newest version of this tutorial is the following:
I take absolutely NO responsibility for what you do with your machine; use this tutorial as a guide and remember you can possibly cause data loss if you touch things carelessly.
The first step in order to install NextCloud 16 is to install a web server and PHP. Since CentOS 7 ships with PHP 5.4 by default, but NextCloud 16 requires at least PHP 7, we’ll also be installing PHP 7 from a third-party repository. The following procedure will install Apache as the web server. Input the commands one by one to avoid errors!
CentOS 7
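The exact commands depend on the third-party repository you pick; the following is only a sketch using the Webtatic repository for PHP 7.2 (the php72w-* package names are assumptions based on that repository’s naming and may need adjusting):
# yum install epel-release
# rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
# yum install httpd php72w php72w-dom php72w-mbstring php72w-gd php72w-xml php72w-pdo php72w-json php72w-process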
PHP 7.3 isn’t yet available in this repository, but if you’d rather use PHP 7.3 you can follow this tutorial: how to install PHP 7.3 on CentOS 7.
Warning!
If you decided to use PHP 7.3 rather than PHP 7.2 by following that tutorial, replace each instance of php72w with php73w in all the subsequent commands.
Now that you got the software, you need to choose a database that will support the installation. You have three choices:
SQLite: a single-file database. It is suggested only for small installations since it will slow NextCloud down noticeably.
MariaDB/MySQL: popular open-source databases, especially amongst web developers. This is the suggested choice.
PostgreSQL: a popular enterprise-class database. More complicated than MySQL/MariaDB.
Now, this choice won’t really alter the functionality of NextCloud (except if you use SQLite), so pick whatever you know best. If you’re unsure pick MariaDB/MySQL.
No additional steps are required if you choose SQLite.
For MySQL/MariaDB, install the software:
# yum install mariadb-server php72w-mysql
Start (and enable at boot) the service:
# systemctl start mariadb
# systemctl enable mariadb
The next step is to configure the database management system. During the configuration you will be prompted to choose a root password; pick a strong one.
# mysql_secure_installation
Now you need to enter the database (you will be asked the password you just set):
$ mysql -u root -p
Now that you are in, create a database:
CREATE DATABASE nextcloud;
Now you need to create the user that will be used to connect to the database:
CREATE USER 'nc_user'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';
The last step is to grant the privileges to the new user:
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nc_user'@'localhost';
If you chose PostgreSQL instead, create the user that will be used to connect to the database:
CREATE USER nc_user WITH PASSWORD 'YOUR_PASSWORD_HERE';
The last step is to grant the privileges to the new user:
GRANT ALL PRIVILEGES ON DATABASE nextcloud to nc_user;
When you’re done type \q and press enter to exit.
Warning!
You may experience difficulties in authenticating NextCloud with PostgreSQL since the local authentication method is set to ident by default. If you want to change it keep reading.
The configuration file for PostgreSQL is located at /var/lib/pgsql/data/pg_hba.conf. Open it with your favourite editor and look for the lines that use the ident method:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 ident
# IPv6 local connections:
host all all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 ident
#host replication postgres ::1/128 ident
Replace ident with md5 on those lines and restart PostgreSQL:
# systemctl restart postgresql
Step 3: Install NextCloud
This step involves getting the software and configuring Apache to run it.
CentOS 7
With these steps we download the software and extract it:
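The download URL and version below are illustrative; adjust them to the actual NextCloud 16 release you want:
# yum install wget unzip
# wget https://download.nextcloud.com/server/releases/nextcloud-16.0.0.zip
# unzip nextcloud-16.0.0.zip -d /var/www/html/
# chown -R apache:apache /var/www/html/nextcloud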
Now we need to create a new file in /etc/httpd/conf.d/nextcloud.conf . Feel free to use whatever editor you feel comfortable with and add the following lines:
Alias /nextcloud "/var/www/html/nextcloud/"
<Directory /var/www/html/nextcloud/>
Options +FollowSymlinks
AllowOverride All
<IfModule mod_dav.c>
Dav off
</IfModule>
SetEnv HOME /var/www/html/nextcloud
SetEnv HTTP_HOME /var/www/html/nextcloud
</Directory>
Step 4: Setting Apache and SELinux
In this step we’ll start (and enable) the webserver and we’ll set SELinux up. Now, many tutorials will tell you to disable SELinux (because it is a difficult component to manage). Instead, I suggest you keep it on and add the rules for NextCloud:
CentOS 7
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
# restorecon -Rv '/var/www/html/nextcloud/'
If you decided to use MariaDB/MySQL or PostgreSQL, you also need to allow Apache to access it:
# setsebool -P httpd_can_network_connect_db 1
In case you chose PostgreSQL you also need to enable httpd_execmem (I’m still investigating why this is needed):
# setsebool -P httpd_execmem 1
Another important thing to do is to raise PHP’s memory limit:
# sed -i '/^memory_limit =/s/=.*/= 512M/' /etc/php.ini
Now that you’ve configured SELinux let’s start and enable Apache:
# systemctl start httpd
# systemctl enable httpd
Step 5: Configuring firewall
This step is essential when your firewall is enabled. If your firewall is enabled you won’t be able to access your NextCloud 16 instance; on the other hand, if it isn’t enabled you shouldn’t have any problems and you can simply skip this step.
Tip!
Keep in mind having a firewall enabled is a good security practice and you should already have one enabled.
In order for the firewall to work, it must be enabled; this guide will not cover that part. When you enable a firewall many things can go wrong (for example, you enable it over SSH, your connection is cut and you can’t reconnect), so you should carefully review the documentation from your distribution.
To open the ports needed by NextCloud 16 follow these steps:
FirewallD is a newer firewall used to simplify firewall management. If you’re using it you can simply do:
# firewall-cmd --add-service http --permanent
# firewall-cmd --add-service https --permanent
# firewall-cmd --reload
IPtables is an older firewall (still widely used); if you have disabled FirewallD you can use IPtables directly, as sketched below.
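A minimal sketch of the equivalent IPtables rules (it assumes the default filter table and the iptables-services package for persisting rules):
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# iptables -I INPUT -p tcp --dport 443 -j ACCEPT
# service iptables save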
Once you’re done, it’s time to install everything. Head to http://YOUR_IP_ADDRESS/nextcloud/ and you will be facing the following screen:
Nextcloud 16 Installation
Select an administrator username and password. Then click on “Storage & Database“, here you can select the data folder, but if you don’t know what you’re doing it’s best if you leave it with the default value. Then select the database you chose during step 2. Fill everything and if you’ve followed all the steps correctly you should be seeing the following screen:
NextCloud 16 Files app
Step 7: Enable Caching (suggested)
NextCloud is good but it can be very slow if you don’t configure a caching solution. There are two caching solutions covered in this guide:
PHP OPcache: a PHP inbuilt cache solution that speeds up scripts execution.
Redis server: a fast in-memory key-value store that speeds up everything in NextCloud.
Enabling OPcache
CentOS
Open a terminal and input the following commands:
# yum install php-opcache
Now you need to edit a file located at /etc/php.d/10-opcache.ini . With your favorite editor, edit the file and make it look like this:
; Enable Zend OPcache extension module
zend_extension=opcache.so
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
These values are suggested by NextCloud, but you’re free to tweak them to suit your needs. Once you’re done you can restart apache:
# systemctl restart httpd
Installing and configuring Redis
CentOS
Open a terminal and input the following commands:
# yum install redis php72w-pecl-redis
Now you must configure NextCloud to use Redis. To do so you need to edit the NextCloud configuration file located at /var/www/html/nextcloud/config/config.php and add the Redis-related lines inside the existing configuration array:
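The original highlighted snippet is not reproduced here, so the following is only a sketch of the typical Redis-related entries, to be placed inside the existing $CONFIG array (host and port assume a local Redis listening on 6379):
'memcache.local' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => array(
    'host' => 'localhost',
    'port' => 6379,
),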
These settings will enable NextCloud to use Redis for caching and file locks. Of course these settings are just an example, you can tweak them to suit your needs.
Now you need to modify (for some reason) the Redis port SELinux label in order to enable Apache to access Redis:
# semanage port -m -t http_port_t -p tcp 6379
Lastly, enable and start Redis and restart the webserver:
# systemctl restart redis
# systemctl enable redis
# systemctl restart httpd
Step 8: Expose NextCloud to the Internet (optional)
Important
Hosting applications available to the Internet is potentially dangerous. In order to keep your applications safe you need to be proficient in system security and to follow security best practices.
Most people will want to access their files from wherever they are. To do so, your newly created NextCloud instance needs to be connected to the Internet.
Given that you need to take care of port-forwarding (if you’re a home user) and domain configuration (which varies according to your provider), here you can find the instructions to create a virtual host with Apache.
CentOS
Using your favorite text editor, edit the file we created previously at /etc/httpd/conf.d/nextcloud.conf . And make it look like this:
<VirtualHost *:80>
ServerName YOURDOMAIN.TLD
ServerAdmin YOUR@EMAIL.TLD
DocumentRoot /var/www/html/nextcloud
<directory /var/www/html/nextcloud>
Require all granted
AllowOverride All
Options FollowSymLinks MultiViews
SetEnv HOME /var/www/html/nextcloud
SetEnv HTTP_HOME /var/www/html/nextcloud
</directory>
</VirtualHost>
It is important to set ServerName according to a domain you own and have configured correctly. Now you need to add YOURDOMAIN.TLD to the trusted domains in the NextCloud config file. You can do so with the following command:
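One way to do this is with NextCloud’s occ tool; a sketch (the index 1 assumes your IP address already occupies index 0 of trusted_domains):
# sudo -u apache php /var/www/html/nextcloud/occ config:system:set trusted_domains 1 --value=YOURDOMAIN.TLD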
Once you complete this step you won’t be able to access NextCloud through http://YOUR_IP_ADDRESS/nextcloud anymore. Instead you will be able to access it through http://YOURDOMAIN.TLD (notice /nextcloud is gone).
Lastly, restart the webserver:
# systemctl restart httpd
Step 9: Get a free SSL certificate with Let’s Encrypt! (SUGGESTED!)
Now that you have your NextCloud instance up and running you’re good to go, but beware: you’re not safe. Internet is a dangerous place for your data and you will most likely need an SSL certificate to ensure your communications are encrypted. Provided you own a domain name you can get one for free using Let’s Encrypt! No catches, free forever.
Warning!
Let’s Encrypt has rate limits in place to prevent inappropriate usage of the CA. There’s a limit on the number of attempts you can make before getting a temporary ban. During this setup, if things go wrong, I suggest you use the --staging option to avoid the temporary ban. The --staging option will use a testing server and will not issue valid certificates. When you have completed the procedure against the test server successfully, you can remove the --staging option to obtain the real certificate.
CentOS
Open a terminal and input the following commands:
# yum install certbot certbot-apache
Now you will run the command to install a certificate, follow the procedure and you will get everything configured out of the box:
$ sudo certbot --apache
Lastly, restart the webserver:
# systemctl restart httpd
If you need further help you can follow my other tutorial on Let’s Encrypt on CentOS (the apache part).
Running virt-install to Build the KVM Guest System
The virt-install utility must be run as root and accepts a wide range of command-line arguments that are used to provide configuration information related to the virtual machine being created. Some of these command-line options are mandatory (specifically name, ram and disk storage must be provided) while others are optional. A summary of these arguments is outlined in the following table:
Argument
Description
-h, --help
Show the help message and exit
--connect=CONNECT
Connect to a non-default hypervisor.
-n NAME, --name=NAME
Name of the new guest virtual machine instance. This must be unique amongst all guests known to the hypervisor on the connection, including those not currently active. To re-define an existing guest, use the virsh(1) tool to shut it down (’virsh shutdown’) & delete (’virsh undefine’) it prior to running "virt-install".
-r MEMORY, --ram=MEMORY
Memory to allocate for guest instance in megabytes. If the hypervisor does not have enough free memory, it is usual for it to automatically take memory away from the host operating system to satisfy this allocation.
--arch=ARCH
Request a non-native CPU architecture for the guest virtual machine. The option is only currently available with QEMU guests, and will not enable use of acceleration. If omitted, the host CPU architecture will be used in the guest.
-u UUID, --uuid=UUID
UUID for the guest; if none is given a random UUID will be generated. If you specify UUID, you should use a 32-digit hexadecimal number. UUID are intended to be unique across the entire data center, and indeed world. Bear this in mind if manually specifying a UUID
--vcpus=VCPUS
Number of virtual cpus to configure for the guest. Not all hypervisors support SMP guests, in which case this argument will be silently ignored
--check-cpu
Check that the number of virtual cpus requested does not exceed physical CPUs and warn if it does.
--cpuset=CPUSET
Set which physical cpus the guest can use. "CPUSET" is a comma separated list of numbers, which can also be specified in ranges. If the value ’auto’ is passed, virt-install attempts to automatically determine an optimal cpu pinning using NUMA data, if available.
--os-type=OS_TYPE
Optimize the guest configuration for a type of operating system (ex. ’linux’, ’windows’). This will attempt to pick the most suitable ACPI & APIC settings, optimally supported mouse drivers, virtio, and generally accommodate other operating system quirks. See "--os-variant" for valid options. For a full list of valid options refer to the man page (man virt-install).
--os-variant=OS_VARIANT
Further optimize the guest configuration for a specific operating system variant (ex. ’fedora8’, ’winxp’). This parameter is optional, and does not require an "--os-type" to be specified. For a full list of valid options refer to the man page (man virt-install).
--host-device=HOSTDEV
Attach a physical host device to the guest. HOSTDEV is a node device name as used by libvirt (as shown by ’virsh nodedev-list’).
--sound
Attach a virtual audio device to the guest. (Full virtualization only).
--noacpi
Override the OS type / variant to disable the ACPI setting for a fully virtualized guest. (Full virtualization only).
-v, --hvm
Request the use of full virtualization, if both para & full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU based hypervisor.
-p, --paravirt
This guest should be a paravirtualized guest. If the host supports both para & full virtualization, and neither this parameter nor the "--hvm" are specified, this will be assumed.
--accelerate
When installing a QEMU guest, make use of the KVM or KQEMU kernel acceleration capabilities if available. Use of this option is recommended unless a guest OS is known to be incompatible with the accelerators. The KVM accelerator is preferred over KQEMU if both are available.
-c CDROM, --cdrom=CDROM
File or device use as a virtual CD-ROM device for fully virtualized guests. It can be path to an ISO image, or to a CDROM device. It can also be a URL from which to fetch/access a minimal boot ISO image. The URLs take the same format as described for the "--location" argument. If a cdrom has been specified via the "--disk" option, and neither "--cdrom" nor any other install option is specified, the "--disk" cdrom is used as the install media.
-l LOCATION, --location=LOCATION
Installation source for guest virtual machine kernel+initrd pair. The "LOCATION" can take one of the following forms:
DIRECTORY - Path to a local directory containing an installable distribution image
nfs:host:/path or nfs://host/path - An NFS server location containing an installable distribution image
http://host/path - An HTTP server location containing an installable distribution image
ftp://host/path - An FTP server location containing an installable distribution image
--pxe
Use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process.
--import
Skip the OS installation process, and build a guest around an existing disk image. The device used for booting is the first device specified via "--disk" or "--file".
--livecd
Specify that the installation media is a live CD and thus the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the "--nodisks" flag in combination.
-x EXTRA, --extra-args=EXTRA
Additional kernel command line arguments to pass to the installer when performing a guest install from "--location".
--disk=DISKOPTS
Specifies media to use as storage for the guest, with various options.
--disk opt1=val1,opt2=val2,...
To specify media, one of the following options is required:
path - A path to some storage media to use, existing or not. Existing media can be a file or block device. If installing on a remote host, the existing media must be shared as a libvirt storage volume. Specifying a non-existent path implies attempting to create the new storage, and will require specifying a ’size’ value. If the base directory of the path is a libvirt storage pool on the host, the new storage will be created as a libvirt storage volume. For remote hosts, the base directory is required to be a storage pool if using this method.
pool - An existing libvirt storage pool name to create new storage on. Requires specifying a ’size’ value.
vol - An existing libvirt storage volume to use. This is specified as ’poolname/volname’.
device - Disk device type. Value can be ’cdrom’, ’disk’, or ’floppy’. Default is ’disk’. If a ’cdrom’ is specified, and no install method is chosen, the cdrom is used as the install media.
bus - Disk bus type. Value can be ’ide’, ’scsi’, ’usb’, ’virtio’ or ’xen’. The default is hypervisor dependent since not all hypervisors support all bus types.
perms - Disk permissions. Value can be ’rw’ (Read/Write), ’ro’ (Readonly), or ’sh’ (Shared Read/Write). Default is ’rw’
size - size (in GB) to use if creating new storage
sparse - whether to skip fully allocating newly created storage. Value is ’true’ or ’false’. Default is ’true’ (do not fully allocate). The initial time taken to fully allocate the guest virtual disk (sparse=false) will usually be balanced by faster install times inside the guest. Thus use of this option is recommended to ensure consistently high performance and to avoid I/O errors in the guest should the host filesystem fill up.
cache - The cache mode to be used. The host pagecache provides cache memory. The cache value can be ’none’, ’writethrough’, or ’writeback’. ’writethrough’ provides read caching. ’writeback’ provides read and write caching. See the examples section for some uses. This option deprecates "--file", "--file-size", and "--nonsparse".
-f DISKFILE, --file=DISKFILE
Path to the file, disk partition, or logical volume to use as the backing store for the guest’s virtual disk. This option is deprecated in favor of "--disk".
-s DISKSIZE, --file-size=DISKSIZE
Size of the file to create for the guest virtual disk. This is deprecated in favor of "--disk".
--nonsparse
Fully allocate the storage when creating. This is deprecated in favor of "--disk".
--nodisks
Request a virtual machine without any local disk storage, typically used for running ’Live CD’ images or installing to network storage (iSCSI or NFS root).
-w NETWORK, --network=NETWORK
Connect the guest to the host network. The value for "NETWORK" can take one of 3 formats:
bridge:BRIDGE - Connect to a bridge device in the host called "BRIDGE". Use this option if the host has static networking config & the guest requires full outbound and inbound connectivity to/from the LAN. Also use this if live migration will be used with this guest.
network:NAME - Connect to a virtual network in the host called "NAME". Virtual networks can be listed, created, deleted using the "virsh" command line tool. In an unmodified install of "libvirt" there is usually a virtual network with a name of "default". Use a virtual network if the host has dynamic networking (eg NetworkManager), or using wireless. The guest will be NATed to the LAN by whichever connection is active.
user - Connect to the LAN using SLIRP. Only use this if running a QEMU guest as an unprivileged user. This provides a very limited form of NAT.
If this option is omitted a single NIC will be created in the guest. If there is a bridge device in the host with a physical interface enslaved, that will be used for connectivity. Failing that, the virtual network called "default" will be used. This option can be specified multiple times to setup more than one NIC.
-b BRIDGE, --bridge=BRIDGE
Bridge device to connect the guest NIC to. This parameter is deprecated in favour of the "--network" parameter.
-m MAC, --mac=MAC
Fixed MAC address for the guest; If this parameter is omitted, or the value "RANDOM" is specified a suitable address will be randomly generated. For Xen virtual machines it is required that the first 3 pairs in the MAC address be the sequence ’00:16:3e’, while for QEMU or KVM virtual machines it must be ’54:52:00’.
--nonetworks
Request a virtual machine without any network interfaces.
--vnc
Setup a virtual console in the guest and export it as a VNC server in the host. Unless the "--vncport" parameter is also provided, the VNC server will run on the first free port number at 5900 or above. The actual VNC display allocated can be obtained using the "vncdisplay" command to "virsh" (or virt-viewer(1) can be used which handles this detail for the user).
--vncport=VNCPORT
Request a permanent, statically assigned port number for the guest VNC console. Use of this option is discouraged as other guests may automatically choose to run on this port causing a clash.
--sdl
Setup a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed the guest may be unconditionally terminated.
--nographics
No graphical console will be allocated for the guest. Fully virtualized guests (Xen FV or QEmu/KVM) will need to have a text console configured on the first serial port in the guest (this can be done via the --extra-args option). Xen PV will set this up automatically. The command ’virsh console NAME’ can be used to connect to the serial device.
--noautoconsole
Don’t automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run the "virsh" "console" command to display the text console. Use of this parameter will disable this behaviour.
-k KEYMAP, --keymap=KEYMAP
Request that the virtual VNC console be configured to run with a non- English keyboard layout.
-d, --debug
Print debugging information to the terminal when running the install process. The debugging information is also stored in "$HOME/.virtinst/virt-install.log" even if this parameter is omitted.
--noreboot
Prevent the domain from automatically rebooting after the install has completed.
--wait=WAIT
Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely; a value of 0 triggers the same results as noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
--force
Prevent interactive prompts. If the intended prompt was a yes/no prompt, always say yes. For any other prompts, the application will exit.
--prompt
Specifically enable prompting. Default prompting is off (as of virtinst 0.400.0)
An Example CentOS virt-install Command
With reference to the above command-line argument list, we can now look at an example command-line construct using the virt-install tool.
The following command creates a new KVM virtual machine configured to run Windows 7 using full virtualization. It creates a new, 10GB disk image, assigns 512MB of RAM to the virtual machine, configures a CD device for the installation media and uses VNC to display the console:
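A hedged reconstruction of such a command (the guest name and disk path are illustrative; every flag used appears in the table above):
# virt-install --name=win7guest \
       --ram=512 \
       --disk path=/var/lib/libvirt/images/win7guest.img,size=10 \
       --cdrom=/dev/hda \
       --os-type=windows --os-variant=win7 \
       --hvm --accelerate --vnc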
Note that the above command line assumes the installation media is in a drive corresponding to device file /dev/hda. This may differ on your system, or may be replaced by a path to an ISO image file residing on a file system.
As the creation process runs, the virt-install command will display status updates of the creation progress:
Starting install...
Creating storage file... | 6.0 GB 00:00
Creating domain... | 0 B 00:00
Domain installation still in progress. Waiting for installation to complete.
Install KVM Hypervisor on CentOS 7.x and RHEL 7.x
KVM is open source hardware virtualization software through which we can create and run multiple Linux-based and Windows-based virtual machines simultaneously. KVM stands for Kernel-based Virtual Machine: when we install the KVM package, the KVM module is loaded into the running kernel and turns our Linux machine into a hypervisor.
In this post we will first demonstrate how to install the KVM hypervisor on CentOS 7.x and RHEL 7.x, and then we will try to install virtual machines.
Before proceeding with the KVM installation, let’s check whether your system’s CPU supports hardware virtualization, then install the KVM packages.
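A sketch of the usual check and installation commands (the package list is the typical CentOS 7 set and may vary slightly between releases):
[root@linuxtechi ~]# grep -E '(vmx|svm)' /proc/cpuinfo
[root@linuxtechi ~]# yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils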
Reboot the Server and then try to start virt manager.
Step:2 Start the Virt Manager
Virt Manager is a graphical tool through which we can install and manage virtual machines. To start the virt manager type the ‘virt-manager‘ command from the terminal.
[root@linuxtechi ~]# virt-manager
Step:3 Configure Bridge Interface
Before we start creating VMs, let’s first create the bridge interface. A bridge interface is required if you want to access virtual machines from outside of your hypervisor’s network.
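A minimal sketch of such a bridge (the interface name eth0, the bridge name br0 and the addresses are assumptions; substitute your own):
[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
[root@linuxtechi ~]# cat ifcfg-br0
TYPE=Bridge
DEVICE=br0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.21
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
[root@linuxtechi ~]# cat ifcfg-eth0
TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
[root@linuxtechi ~]# systemctl restart network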
Now create a virtual machine, either from the command line using the ‘virt-install‘ command or from the GUI (virt-manager).
Let’s create a “Windows Server 2012 R2” virtual machine using virt-manager.
Start the “virt-manager”
Go to the File Option, click on “New Virtual Machine”
We will be using an ISO file as the installation media. In the next step, specify the path of the ISO file.
Click on Forward.
Specify the compute resources: RAM and CPU, as per your setup.
Click on Forward to proceed further.
Specify the storage size of the virtual machine; in my case I am using 25G.
In the next step, specify the name of the virtual machine and select the network as ‘Bridge br0’.
Click on Finish to start the installation.
Follow the screen instructions and complete the installation.
Creating a virtual machine from the command line:
Virtual machines can be created from the console as well, using the ‘virt-install’ command. In the following example I am going to create an Ubuntu 16.04 LTS virtual machine, as sketched below.
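A hedged reconstruction of that command (the ISO path, sizes and names are illustrative and correspond to the options explained below):
[root@linuxtechi ~]# virt-install --name Ubuntu-16-04 \
       --file=/var/lib/libvirt/images/ubuntu-16-04.img --file-size=20 --nonsparse \
       --graphics spice --vcpus=2 --ram=2048 \
       --cdrom=/root/ubuntu-16.04-server-amd64.iso \
       --network bridge=br0 --os-type=linux --os-variant=generic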
Follow the instruction now and complete the installation.
In the above ‘virt-install’ command we have used the following options:
--name = <Name of the virtual machine>
--file = <Location where our virtual machine disk file will be stored>
--file-size = <Size of the virtual machine disk, in my case it is 20GB>
--nonsparse = <Allocate the whole storage while creating>
--graphics = <Specify the graphical tool for interactive installation, in the above example I am using spice>
--vcpus = <Number of virtual CPUs for the machine>
--ram = <RAM size for the virtual machine>
--cdrom = <Virtual CD-ROM which specifies the installation media, like an ISO file>
--network = <Used to specify which network we will use for the virtual machine, in this example I am using the bridge interface>
--os-type = <Operating system type, like linux or windows>
--os-variant = <KVM maintains OS variants like 'fedora18', 'rhel6' and 'winxp'; this option is optional, and if you are not sure about the OS variant you can set it to generic>
Once the Installation is completed we can access the Virtual Machine console from ‘virt-manager‘ as shown below.
That’s it, basic installation and configuration of KVM hypervisor is completed.
Virtualization in Linux: Installing KVM on CentOS & RHEL
In this tutorial, we will be installing KVM on CentOS or RHEL machines. KVM, or Kernel-based Virtual Machine (usually used together with QEMU), is hardware-based virtualization software that gives a Linux system the capability to run multiple operating systems in a Linux environment. It can run Linux as well as Windows-family OSes.
By hardware based virtualization, it means that your processor must support hardware virtualization to run KVM on your system. So if your processor is Intel based, it must support Intel VT or if you are using AMD based processor, it must support AMD-V. So before we proceed further with this tutorial we must check if your processor supports hardware virtualization or not. Most of the modern processors do support hardware virtualization but to be sure, please run the following command,
$ egrep '(vmx|svm)' /proc/cpuinfo
If you see 'vmx' or 'svm' in the output, then your processor supports hardware virtualization; otherwise it doesn't, and you can't install KVM/QEMU on your machine.
KVM/QEMU can be managed either graphically or through CLI. We use virt-manager for managing virtual machines, it can create, delete, edit & can also cold/live migrate guest machines between hosts.
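Installing these tools is a single yum transaction; a sketch (package names are the usual CentOS/RHEL ones):
$ sudo yum install qemu-kvm qemu-img virt-manager virt-install libvirt libvirt-client virt-viewer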
Now, let’s have a brief look at what these packages actually are,
qemu-kvm is QEMU emulator, it’s the main package for KVM,
qemu-img is QEMU disk image manager,
virt-install is a command line tool to create virtual machines.
libvirt , it provides daemon to manage virtual machines and controls hypervisor.
libvirt-client , it provides client side API’s for accessing servers and virsh utility which provides command line tool to manage virtual machines.
virt-viewer is the graphical console.
QEMU is now ready; we will now restart our virtualization daemon, libvirtd:
$ systemctl restart libvirtd
We will now create virtual machine with the help of virt-manager. But before we start with creating a virtual machine, we will have to configure a bridge adapter, which is required if we need to access outside network from our VM.
Creating a Bridge adapter
Copy file for your current network interface ‘ifcfg-en0s1’ to another file for bridge interface named ‘ifcfg-br0’
$ cd /etc/sysconfig/network-scripts/
$ cp ifcfg-en0s1 ifcfg-br0
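A minimal sketch of what the two files could end up looking like (the static address is an assumption; use values that match your network):
$ cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
$ cat ifcfg-en0s1
DEVICE=en0s1
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0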
Change network settings as per your own network requirements. Save the file & restart network services.
$ systemctl restart network
Now let’s create our first virtual machine.
Creating a Virtual Machine
We will launch ‘virt-manager’ to create our first virtual machine. You can launch virt-manager either using the CLI or graphically,
For CLI, launch your terminal & type
$ virt-manager
Or open Virtual Machine Manager in your Applications under System Tools. Once it has been launched, go to ‘File’ & click on ‘New Virtual Machine’
We will be using an ISO image for our installation, so select ‘Local Install Media’ for installing OS,
next , select the location for your ISO image & click Forward,
on the next page, select ‘Memory’ & number of ‘CPUs’ & click Forward,
specify the storage size for your VM & click Forward,
On the next page will be the summary for our VM, review all the configurations & in Network selection , select bridged adapter ‘br0’ & hit finish. Now install the OS as you normally do & boot into VM once the installation has been completed. Similarly create as many VMs as you need & as your resources permit.
This concludes our tutorial for installing KVM on CentOS. If you are having any issues or have any suggestions, please feel free to submit them through the comment box down below.
root@k8s-master:~# kubectl apply -f image_update.yaml
deployment.extensions "image-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service ClusterIP 10.254.240.225 <none> 10080/TCP 1m
root@k8s-master:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
image-deployment-58b646ffb6-d4sl7 1/1 Running 0 1m 10.10.169.131 k8s-node2
root@k8s-master:~# sed -i 's/nginx:v1/nginx:v2/g' image_update.yaml
Apply the configuration file:
root@k8s-master:~# kubectl apply -f image_update.yaml
deployment.extensions "image-deployment" configured
service "nginx-service" unchanged
root@k8s-master:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
image-deployment-55cb946d47-7tzp8 0/1 ContainerCreating 0 16s <none> k8s-node1
image-deployment-58b646ffb6-d4sl7 1/1 Terminating 0 11m 10.10.169.131 k8s-node2
After waiting a while, once the v2 pod is ready:
root@k8s-master:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
image-deployment-55cb946d47-7tzp8 1/1 Running 0 1m 10.10.36.119 k8s-node1
root@k8s-master:~# curl http://10.254.240.225:10080
----------
version: v2
hostname: image-deployment-55cb946d47-7tzp8
Successfully updated to v2.
(2) Using the patch command
First, find the deployment:
root@k8s-master:~# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
image-deployment 1 1 1 1 20m
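A sketch of the patch invocation (the container name nginx inside the patch is an assumption; use the name from your Deployment spec):
root@k8s-master:~# kubectl patch deployment image-deployment --patch '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:v2"}]}}}}'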
root@k8s-master:~# kubectl apply -f roll_update.yaml
deployment.extensions "update-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
update-deployment-7db77f7cc6-c4s2v 1/1 Running 0 28s 10.10.235.232 k8s-master
update-deployment-7db77f7cc6-nfgtd 1/1 Running 0 28s 10.10.36.82 k8s-node1
update-deployment-7db77f7cc6-tflfl 1/1 Running 0 28s 10.10.169.158 k8s-node2
root@k8s-master:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service ClusterIP 10.254.254.199 <none> 10080/TCP 1m
cd /data/soft && wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
tar zxvf go1.11.2.linux-amd64.tar.gz -C /usr/local
Edit the /etc/profile file and add the following:
# go setting
export GOROOT=/usr/local/go
export GOPATH=/usr/local/gopath
export PATH=$PATH:$GOROOT/bin
Run source /etc/profile for the changes to take effect.
Verify:
go version
go version go1.11.2 linux/amd64
cd /etc/kubernetes/pki
openssl x509 -in front-proxy-client.crt -noout -text |grep Not
Not Before: Nov 28 09:07:02 2018 GMT
Not After : Nov 25 09:07:03 2028 GMT
openssl x509 -in apiserver.crt -noout -text |grep Not
Not Before: Nov 28 09:07:04 2018 GMT
Not After : Nov 25 09:07:04 2028 GMT
The OpenSSH package contains ssh clients and the sshd daemon. This is useful for encrypting authentication and subsequent traffic over a network. The ssh and scp commands are secure implementations of telnet and rcp respectively.
This package is known to build and work properly using an LFS-9.0 platform.
OpenSSH runs as two processes when connecting to other computers. The first process is a privileged process and controls the issuance of privileges as necessary. The second process communicates with the network. Additional installation steps are necessary to set up the proper environment, which are performed by issuing the following commands as the root user:
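These are the usual BLFS-style commands for the privilege-separation directory and user (the uid/gid of 50 are the book's defaults and can be adapted):
install -v -m700 -d /var/lib/sshd &&
chown -v root:sys /var/lib/sshd &&
groupadd -g 50 sshd &&
useradd -c 'sshd PrivSep' -d /var/lib/sshd -g sshd -s /bin/false -u 50 sshd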
Install OpenSSH by running the following commands:
./configure --prefix=/usr \
--sysconfdir=/etc/ssh \
--with-md5-passwords \
--with-privsep-path=/var/lib/sshd &&
make
The testsuite requires an installed copy of scp to complete the multiplexing tests. To run the test suite, first copy the scp program to /usr/bin, making sure that you backup any existing copy first.
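A sketch of that sequence, followed by the usual installation commands run as the root user (the ssh-copy-id paths follow the BLFS convention and may differ for your version):
[ -e /usr/bin/scp ] && cp -v /usr/bin/scp /usr/bin/scp.bak   # back up any existing copy first (hypothetical backup name)
cp -v scp /usr/bin &&
make tests

make install &&
install -v -m755 contrib/ssh-copy-id /usr/bin &&
install -v -m644 contrib/ssh-copy-id.1 /usr/share/man/man1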
--sysconfdir=/etc/ssh: This prevents the configuration files from being installed in /usr/etc.
--with-md5-passwords: This enables the use of MD5 passwords.
--with-pam: This parameter enables Linux-PAM support in the build.
--with-xauth=/usr/bin/xauth: Set the default location for the xauth binary for X authentication. Change the location if xauth will be installed to a different path. This can also be controlled from sshd_config with the XAuthLocation keyword. You can omit this switch if Xorg is already installed.
--with-kerberos5=/usr: This option is used to include Kerberos 5 support in the build.
--with-libedit: This option enables line editing and history features for sftp.
Configuring OpenSSH
Config Files
~/.ssh/*, /etc/ssh/ssh_config, and /etc/ssh/sshd_config
There are no required changes to any of these files. However, you may wish to view the /etc/ssh/ files and make any changes appropriate for the security of your system. One recommended change is to disable root login via ssh. Execute the following command as the root user to do so:
echo "PermitRootLogin no" >> /etc/ssh/sshd_config
If you want to be able to log in without typing in your password, first create ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub with ssh-keygen and then copy ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the remote computer that you want to log into. You'll need to change REMOTE_USERNAME and REMOTE_HOSTNAME for the username and hostname of the remote computer and you'll also need to enter your password for the ssh-copy-id command to succeed:
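A sketch of that procedure, run as your normal user (REMOTE_USERNAME and REMOTE_HOSTNAME are placeholders, as above):
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub REMOTE_USERNAME@REMOTE_HOSTNAME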
Once you've got passwordless logins working it's actually more secure than logging in with a password (as the private key is much longer than most people's passwords). If you would like to now disable password logins, as the root user:
If you added Linux-PAM support and you want ssh to use it, then you will need to add a configuration file for sshd and enable use of Linux-PAM. Note that ssh only uses PAM to check passwords; if you've disabled password logins these commands are not needed. If you want to use PAM, issue the following commands as the root user:
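A sketch of the usual setup (the sed simply derives /etc/pam.d/sshd from the existing login service file; adjust if your PAM layout differs):
sed 's@d/login@d/sshd@g' /etc/pam.d/login > /etc/pam.d/sshd &&
chmod 644 /etc/pam.d/sshd &&
echo "UsePAM yes" >> /etc/ssh/sshd_config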
2. Edit the configuration file to block the version information completely
[root@localhost ~]# vim /usr/local/apache-2.4.20/conf/extra/httpd-default.conf
Change the following:
ServerTokens Full
ServerSignature Off
to:
ServerTokens Prod       # do not reveal the server operating system type
ServerSignature On      # do not show the web server version number