Simplified OpenStack Ocata installation

Network Time Protocol (NTP)

Controller Node

apt install chrony

Edit /etc/chrony/chrony.conf and add the following:

# Replace 10.0.0.0/24 with your environment's management subnet
server controller iburst
allow 10.0.0.0/24

Comment out the "pool 2.debian.pool.ntp.org offline iburst" line, then restart the NTP service:

service chrony restart

Compute Node

apt install chrony

Edit /etc/chrony/chrony.conf and add the following:

server controller iburst

Comment out the "pool 2.debian.pool.ntp.org offline iburst" line, then restart the NTP service:

service chrony restart
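The chrony edits above can also be scripted. A minimal sketch using sed against a scratch copy (the real target is /etc/chrony/chrony.conf; the file contents below are an assumed excerpt of the stock Debian file):

```shell
# Work on a scratch copy of chrony.conf (assumed excerpt of the stock file)
conf=$(mktemp)
cat > "$conf" <<'EOF'
pool 2.debian.pool.ntp.org offline iburst
keyfile /etc/chrony/chrony.keys
EOF

# Comment out the Debian pool line
sed -i 's/^pool 2\.debian\.pool\.ntp\.org offline iburst/#&/' "$conf"
# Point the node at the controller
echo 'server controller iburst' >> "$conf"

grep '^#pool' "$conf"
grep '^server' "$conf"
```

After applying the same edits to the real file, `chronyc sources` on a compute node should list the controller as its time source.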

OpenStack packages (all nodes)

apt install software-properties-common
add-apt-repository cloud-archive:ocata
apt update && apt dist-upgrade
apt install python-openstackclient

SQL database (controller node)

apt install mariadb-server python-pymysql

Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file with the following content. Replace the bind-address value with the controller node's IP address:

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the database server, then run the secure installation script to initialize it:

service mysql restart
mysql_secure_installation
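The 99-openstack.cnf above can be generated with the controller address substituted in; a sketch where BIND_ADDR stands in for your controller's IP and the output goes to a scratch file rather than the real path:

```shell
# BIND_ADDR is a placeholder for the controller node's management IP
BIND_ADDR=10.0.0.11
out=$(mktemp)
cat > "$out" <<EOF
[mysqld]
bind-address = ${BIND_ADDR}
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
# The real file lives at /etc/mysql/mariadb.conf.d/99-openstack.cnf
grep 'bind-address' "$out"
```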

Message queue (controller node)

apt install rabbitmq-server

# Replace RABBIT_PASS with your own password
rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

Memcached (controller node)

apt install memcached python-memcache

Edit /etc/memcached.conf and replace the existing "-l 127.0.0.1" with the controller node's IP address:

-l 10.0.0.11

service memcached restart

Identity service (controller node)


Prerequisites

mysql

CREATE DATABASE keystone;
# Replace KEYSTONE_DBPASS with your own password
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Install and configure components

apt install keystone

Edit /etc/keystone/keystone.conf, replacing KEYSTONE_DBPASS with the database password registered above:

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet

Comment out or remove any other connection options in the [database] section.

# Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
# Initialize Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# Replace ADMIN_PASS with the password for the admin user
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Edit the /etc/apache2/apache2.conf file and add the following line:

ServerName controller

Finalize the installation

service apache2 restart
rm -f /var/lib/keystone/keystone.db

Configure the administrative account, replacing ADMIN_PASS with the password you set for the admin user:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create a domain, projects, users, and roles

openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
openstack role create user
openstack role add --project demo --user demo user

Verify operation

For security reasons, disable the temporary authentication token mechanism:

Edit the /etc/keystone/keystone-paste.ini file and remove "admin_token_auth" from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections. Then unset the temporary environment variables:

unset OS_AUTH_URL OS_PASSWORD

Creating the scripts

Create an admin-openrc file with the following content:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create a demo-openrc file with the following content:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
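Sourcing one of these files drives the openstack client through environment variables. A quick sanity check that the file exports what you expect (ADMIN_PASS stays a placeholder here; the file is written to the current directory):

```shell
# Write the admin-openrc shown above (ADMIN_PASS left as a placeholder)
cat > admin-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
# Source it and confirm the variables are set
. ./admin-openrc
echo "$OS_USERNAME"
echo "$OS_AUTH_URL"
```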

Image service


Prerequisites

mysql

CREATE DATABASE glance;
# Replace GLANCE_DBPASS with your own password
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

. admin-openrc
openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure components

apt install glance

Edit the /etc/glance/glance-api.conf file, replacing GLANCE_DBPASS and GLANCE_PASS with the passwords you set:

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file, replacing GLANCE_DBPASS and GLANCE_PASS with the passwords you set:

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

# Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
# Restart the Image services
service glance-registry restart
service glance-api restart

Verify operation

. admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

Compute service


Install and configure controller node


Prerequisites

mysql

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
# Replace NOVA_DBPASS with your own password
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

. admin-openrc
# Create the nova user
openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# Create the placement user
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

Install and configure components

apt install nova-api nova-conductor nova-consoleauth \ nova-novncproxy nova-scheduler nova-placement-api

Edit /etc/nova/nova.conf, replacing NOVA_DBPASS with your own password:

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

# Replace NOVA_PASS with your own password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.

# Replace PLACEMENT_PASS with your own password
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
su -s /bin/sh -c "nova-manage db sync" nova
service nova-api restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Install and configure a compute node


apt install nova-compute

Edit /etc/nova/nova.conf, replacing all passwords with your own:

# Replace my_ip with the compute node's management IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# Replace PLACEMENT_PASS with your own password
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

If the compute node itself runs inside a virtual machine, do the following:

Edit the [libvirt] section in /etc/nova/nova-compute.conf:

[libvirt]
virt_type = qemu

service nova-compute restart

Add the compute node to the cell database

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Alternatively, add the following to the /etc/nova/nova.conf file:

[scheduler] discover_hosts_in_cells_interval = 300

Networking service


Install and configure controller node


Prerequisites

mysql

CREATE DATABASE neutron;
# Replace NEUTRON_DBPASS with your own password
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

. admin-openrc
openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

[Install the neutron packages]

apt-get install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent \
  neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent python-neutronclient

Edit the /etc/neutron/neutron.conf file:

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
(Note: comment out any other sqlite connection lines.)

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = stack2015

# Replace the password with your own neutron keystone password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = stack2015

# Replace the password with your own nova password
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = stack2015

[Edit the ML2 configuration file]

Configure the /etc/neutron/plugins/ml2/ml2_conf.ini file:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_flat]
flat_networks = external

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini and add the following in the [ovs] section:

# local_ip is the tunnel VTEP address; it can be the management NIC's IP
# or the IP of a dedicated tunnel NIC
[ovs]
local_ip = TUNNELS_IP
bridge_mappings = external:br-ex

[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[Update the L3 agent configuration]

Configure /etc/neutron/l3_agent.ini:

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

Configure /etc/neutron/dhcp_agent.ini:

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Edit /etc/neutron/dhcp_agent.ini and add the following to the [DEFAULT] section:

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Create the /etc/neutron/dnsmasq-neutron.conf file:

echo 'dhcp-option-force=26,1450' | sudo tee /etc/neutron/dnsmasq-neutron.conf

Edit /etc/neutron/metadata_agent.ini and add the following to the [DEFAULT] section:

nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
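METADATA_SECRET must be the same random string here and in the [neutron] section of nova.conf on the controller. One way to generate it, assuming openssl is installed (the 10-byte length is an arbitrary choice):

```shell
# Generate a 20-hex-character shared secret for the metadata proxy
secret=$(openssl rand -hex 10)
echo "metadata_proxy_shared_secret = $secret"
```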

Edit the [neutron] section of the nova configuration file on the controller node.

Configure /etc/nova/nova.conf, replacing the password with your own:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = stack2015
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

[Populate the neutron database]

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

The sync takes roughly 2-3 minutes.

[Restart the Nova API server]

service nova-api restart

Restart openvswitch:

service openvswitch-switch restart

Add a bridge for the external network:

ovs-vsctl add-br br-ex

Add the physical NIC to the external bridge:

ovs-vsctl add-port br-ex enp3s0    (replace enp3s0 with your external-network NIC)

Disable GRO on the external NIC:

ethtool -K enp3s0 gro off

[Restart the Neutron services]

service neutron-server restart
service openvswitch-switch restart
service neutron-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

Use the Neutron client to verify the external network extensions:

. admin-openrc
neutron ext-list

Use the Neutron client to check the agents' status:

neutron agent-list

Install and configure compute node


[Install the neutron packages on the compute node]

apt-get install neutron-plugin-ml2 neutron-openvswitch-agent

Edit /etc/neutron/neutron.conf and add the following to the [DEFAULT] section:

[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

In the [database] section, comment out all connection and sqlite-related options:

[database]

# connection = sqlite:////var/lib/neutron/neutron.sqlite

Add the following to the [oslo_messaging_rabbit] section:

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = stack2015

Add the following to the [keystone_authtoken] section:

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = stack2015

Edit /etc/neutron/plugins/ml2/ml2_conf.ini as follows:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini as follows:

[agent]
tunnel_types = vxlan
l2_population = False
prevent_arp_spoofing = False
arp_responder = False
vxlan_udp_port = 4789

[ovs]
local_ip = 172.171.4.211
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 1:1000
tenant_network_type = vxlan
enable_tunneling = True

[securitygroup]
enable_ipset = True
enable_security_group = False
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Configure /etc/nova/nova.conf, adding the following to the [neutron] section; replace the password with your own:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = stack2015

[Restart nova-compute]

service nova-compute restart

[Restart the Open vSwitch agent]

service openvswitch-switch restart
service neutron-openvswitch-agent restart

[Verify neutron on the compute node]

. admin-openrc
neutron agent-list

Block Storage service


Install and configure controller node


mysql

CREATE DATABASE cinder;
# Replace CINDER_DBPASS with your own password
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

. admin-openrc
openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Install and configure components

apt install cinder-api cinder-scheduler

Edit the /etc/cinder/cinder.conf file:

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

# Set my_ip to the controller node's IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11

# Replace CINDER_PASS with your own password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Edit the /etc/nova/nova.conf file and add the following to it:

[cinder]
os_region_name = RegionOne

service nova-api restart
su -s /bin/sh -c "cinder-manage db sync" cinder

Finalize installation

service nova-api restart
service cinder-scheduler restart
service apache2 restart

Install and configure a storage node


Prerequisites

apt install lvm2

Use the device name that matches your environment (sdb, sda, sdc, etc.). Before this step, a disk must have been added to the cinder storage node.

pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

In the devices section of /etc/lvm/lvm.conf, add a filter that accepts the /dev/sdb device and rejects all other devices, so that LVM on the cinder node scans only the added disk:

devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
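LVM evaluates these filter entries left to right and stops at the first match: "a/sdb/" accepts any device path matching sdb, and "r/.*/" rejects everything else. A toy shell re-implementation of that first-match-wins order, using hypothetical device paths:

```shell
# Toy model of the LVM filter above: first pattern that matches wins
match_filter() {
  case "$1" in
    *sdb*) echo accept ;;   # a/sdb/
    *)     echo reject ;;   # r/.*/
  esac
}
match_filter /dev/sdb   # -> accept
match_filter /dev/sda   # -> reject
```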

Install and configure components

apt install cinder-volume

Edit /etc/cinder/cinder.conf, replacing all passwords with your own:

# Comment out any other database connection lines
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

# Replace RABBIT_PASS with your own password
# Set my_ip to the storage node's IP address
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller:9292

# Replace CINDER_PASS with your own password
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Finalize installation

service tgt restart
service cinder-volume restart

Object Storage service (swift)


Controller node


Prerequisites

. admin-openrc
openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:
openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

Install and configure components

apt-get install swift swift-proxy python-swiftclient \ python-keystoneclient python-keystonemiddleware memcached

Create the /etc/swift directory.

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

Edit /etc/swift/proxy-server.conf, replacing all passwords with your own:

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211

Storage node


Prerequisites

apt-get install xfsprogs rsync
# Confirm that the extra disks exist and note their device names before formatting
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

Edit the /etc/fstab file and add the following:

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

Mount the devices:

mount /srv/node/sdb
mount /srv/node/sdc

Create or edit /etc/rsyncd.conf, replacing the address with the storage node's IP:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

Edit the /etc/default/rsync file and enable the rsync service:

RSYNC_ENABLE=true

Start the rsync service:

service rsync start

Install and configure components

apt-get install swift swift-account swift-container swift-object
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

Edit /etc/swift/account-server.conf, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/container-server.conf, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/object-server.conf, replacing bind_ip with the storage node's IP address:

[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

Create account ring (controller node)

Change to the /etc/swift directory.

Create the base account.builder file. The three numbers are the partition power (10), the replica count (3, typically matching the number of storage nodes/devices), and the minimum number of hours between moves of a partition (1):

swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

Create the base container.builder file with the same arguments (partition power, replica count, minimum hours):

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

Create the base object.builder file with the same arguments:

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance
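For reference on what "create 10 3 1" implies numerically: a partition power of 10 gives 2^10 = 1024 partitions, and with 3 replicas the ring distributes 3072 partition-replicas across the devices you add. A quick check with shell arithmetic:

```shell
part_power=10
replicas=3
partitions=$((1 << part_power))   # 2^10
echo "$partitions partitions"
echo "$((partitions * replicas)) partition-replicas"
```

Once clusters grow, the partition power cannot be changed without rebuilding the ring, so choose it for the largest size you expect.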

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and on any additional nodes running the proxy service.

Finalize installation(controller node)

Obtain the /etc/swift/swift.conf file from the Object Storage source repository:

curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

Edit the /etc/swift/swift.conf file and complete the following actions

[swift-hash]
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX

[storage-policy:0]
name = Policy-0
default = yes

Copy the swift.conf file to the /etc/swift directory on each storage node and on any additional nodes running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

chown -R root:swift /etc/swift

# Restart services on the controller node
service memcached restart
service swift-proxy restart
# Start the swift services on the storage nodes
swift-init all start

Dashboard


Install and configure

apt install openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions:

OPENSTACK_HOST = "controller"
# Do not edit the ALLOWED_HOSTS parameter under the Ubuntu configuration section
ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

Finalize installation

service apache2 reload


Author: goldsunshine
Original post: https://www.cnblogs.com/goldsunshine/p/7440823.html
Unless otherwise noted, articles on this blog are licensed under CC BY-NC-SA; please credit the source when reposting.