Deploying Cloud Foundry on OpenStack Juno and XenServer (Part II)

Link: http://rabbitstack.github.io/deploying-cloud-foundry-on-openstack-juno-and-xenserver-part-ii/

Let's move on. We should now have our OpenStack instance prepared for Cloud Foundry. The usual way of deploying Cloud Foundry is through BOSH. For those who haven't heard of it yet, BOSH is a platform for automation and lifecycle management of software and distributed services. It is also capable of monitoring and recovering failed processes and virtual machines. There are already a few IT automation platforms on the market, such as Chef or Puppet, so why learn and use BOSH?

One notable difference is that BOSH is able to perform the deployment from a sterile environment, i.e. package source code and dependencies, create the virtual machines (jobs in BOSH terminology) from a so-called stemcell template (a VM image with the BOSH agent installed that is used to generate the jobs), and finally install, start and monitor the required services and VMs. Visit the official BOSH documentation to learn more.

Deploying MicroBOSH

MicroBOSH is a single VM which contains all the necessary components to boot BOSH, including the blobstore, NATS, the Director, the Health Monitor, etc. Once you have an instance of MicroBOSH running, you can use it to deploy a full BOSH if you wish. Install the BOSH CLI gems (Ruby >= 1.9.3 is required).

$ gem install bosh_cli bosh_cli_plugin_micro

You will need to create a keypair in OpenStack and configure a bosh security group with the rules shown in the table below. You can do this from the Horizon dashboard or with the nova CLI (see the example after the table).

Direction | IP Protocol | Port Range | Remote
----------|-------------|------------|------------------
Ingress   | TCP         | 1-65535    | bosh
Ingress   | TCP         | 53 (DNS)   | 0.0.0.0/0 (CIDR)
Ingress   | TCP         | 4222       | 0.0.0.0/0 (CIDR)
Ingress   | TCP         | 6868       | 0.0.0.0/0 (CIDR)
Ingress   | TCP         | 25250      | 0.0.0.0/0 (CIDR)
Ingress   | TCP         | 25555      | 0.0.0.0/0 (CIDR)
Ingress   | TCP         | 25777      | 0.0.0.0/0 (CIDR)
Ingress   | UDP         | 53         | 0.0.0.0/0 (CIDR)
Ingress   | UDP         | 68         | 0.0.0.0/0 (CIDR)
$ nova keypair-add microbosh > microbosh.pem
$ chmod 600 microbosh.pem
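
If you prefer the nova CLI, a minimal sketch of the security group setup could look like the following (only a few of the rules from the table are shown; the remaining ones follow the same pattern):

$ nova secgroup-create bosh "BOSH security group"
# TCP 1-65535 from members of the bosh group itself
$ nova secgroup-add-group-rule bosh bosh tcp 1 65535
# BOSH Director API
$ nova secgroup-add-rule bosh tcp 25555 25555 0.0.0.0/0
# BOSH agent port used during the micro deployment
$ nova secgroup-add-rule bosh tcp 6868 6868 0.0.0.0/0
# DHCP client traffic
$ nova secgroup-add-rule bosh udp 68 68 0.0.0.0/0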

BOSH uses a variety of artifacts in order to complete the deployment life cycle. We can basically distinguish between stemcells, releases and deployments. To deploy MicroBOSH we will only need a stemcell, which can be downloaded using the bosh CLI. First get a list of available stemcells and download bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz.

$ bosh public stemcells
+-----------------------------------------------------------------+
| Name                                                            |
+-----------------------------------------------------------------+
| bosh-stemcell-2427-aws-xen-ubuntu.tgz                           |
| bosh-stemcell-2652-aws-xen-centos.tgz                           |
| bosh-stemcell-2839-aws-xen-centos-go_agent.tgz                  |
| bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz                  |
| bosh-stemcell-2710-aws-xen-ubuntu-lucid-go_agent.tgz            |
| bosh-stemcell-2652-aws-xen-ubuntu-lucid.tgz                     |
| bosh-stemcell-2839-aws-xen-ubuntu-trusty-go_agent.tgz           |
| bosh-stemcell-2690.6-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.1-aws-xen-centos-go_agent.tgz                |
| bosh-stemcell-2719.1-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.2-aws-xen-centos-go_agent.tgz                |
| bosh-stemcell-2719.2-aws-xen-ubuntu-trusty-go_agent.tgz         |
| bosh-stemcell-2719.3-aws-xen-ubuntu-trusty-go_agent.tgz         |
| light-bosh-stemcell-2427-aws-xen-ubuntu.tgz                     |
| light-bosh-stemcell-2652-aws-xen-centos.tgz                     |
| light-bosh-stemcell-2839-aws-xen-centos-go_agent.tgz            |
| light-bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz            |
| light-bosh-stemcell-2710-aws-xen-ubuntu-lucid-go_agent.tgz      |
| light-bosh-stemcell-2652-aws-xen-ubuntu-lucid.tgz               |
| light-bosh-stemcell-2839-aws-xen-ubuntu-trusty-go_agent.tgz     |
| light-bosh-stemcell-2690.6-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.1-aws-xen-centos-go_agent.tgz          |
| light-bosh-stemcell-2719.1-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.2-aws-xen-centos-go_agent.tgz          |
| light-bosh-stemcell-2719.2-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2719.3-aws-xen-ubuntu-trusty-go_agent.tgz   |
| light-bosh-stemcell-2839-aws-xen-hvm-centos-go_agent.tgz        |
| light-bosh-stemcell-2839-aws-xen-hvm-ubuntu-trusty-go_agent.tgz |
| bosh-stemcell-2427-openstack-kvm-ubuntu.tgz                     |
| bosh-stemcell-2624-openstack-kvm-centos.tgz                     |
| bosh-stemcell-2624-openstack-kvm-ubuntu-lucid.tgz               |
| bosh-stemcell-2839-openstack-kvm-centos-go_agent.tgz            |
| bosh-stemcell-2839-openstack-kvm-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2652-openstack-kvm-ubuntu-lucid-go_agent.tgz      |
| bosh-stemcell-2719.1-openstack-kvm-centos-go_agent.tgz          |
| bosh-stemcell-2719.1-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2719.2-openstack-kvm-centos-go_agent.tgz          |
| bosh-stemcell-2719.2-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2719.3-openstack-kvm-ubuntu-trusty-go_agent.tgz   |
| bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz        |
| bosh-stemcell-2839-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz |
| bosh-stemcell-2427-vcloud-esxi-ubuntu.tgz                       |
| bosh-stemcell-2652-vcloud-esxi-ubuntu-lucid.tgz                 |
| bosh-stemcell-2839-vcloud-esxi-ubuntu-trusty-go_agent.tgz       |
| bosh-stemcell-2690.5-vcloud-esxi-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2690.6-vcloud-esxi-ubuntu-trusty-go_agent.tgz     |
| bosh-stemcell-2710-vcloud-esxi-ubuntu-lucid-go_agent.tgz        |
| bosh-stemcell-2427-vsphere-esxi-ubuntu.tgz                      |
| bosh-stemcell-2624-vsphere-esxi-centos.tgz                      |
| bosh-stemcell-2839-vsphere-esxi-centos-go_agent.tgz             |
| bosh-stemcell-2427-vsphere-esxi-ubuntu-go_agent.tgz             |
| bosh-stemcell-2710-vsphere-esxi-ubuntu-lucid-go_agent.tgz       |
| bosh-stemcell-2624-vsphere-esxi-ubuntu-lucid.tgz                |
| bosh-stemcell-2839-vsphere-esxi-ubuntu-trusty-go_agent.tgz      |
| bosh-stemcell-2719.1-vsphere-esxi-centos-go_agent.tgz           |
| bosh-stemcell-2719.1-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2719.2-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2719.2-vsphere-esxi-centos-go_agent.tgz           |
| bosh-stemcell-2719.3-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-2690.6-vsphere-esxi-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-389-warden-boshlite-ubuntu-trusty-go_agent.tgz    |
| bosh-stemcell-53-warden-boshlite-ubuntu.tgz                     |
| bosh-stemcell-389-warden-boshlite-centos-go_agent.tgz           |
| bosh-stemcell-64-warden-boshlite-ubuntu-lucid-go_agent.tgz      |
+-----------------------------------------------------------------+
$ bosh download public stemcell bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz
bosh-stemcell:   4% |ooo                              |  24.4MB 753.0KB/s ETA:  00:11:43

Now we are ready to create the MicroBOSH deployment manifest, microbosh-openstack.yml. You will need to replace net_id with your OpenStack network identifier and ip with an address from the network pool. You can find that information by executing the following commands.

$ nova network-list
+--------------------------------------+----------+----------------+
| ID                                   | Label    | Cidr           |
+--------------------------------------+----------+----------------+
| 3f36d40e-1097-49a0-a023-4606dbf3a1f5 | yuna-net | 192.168.1.0/24 |
+--------------------------------------+----------+----------------+

$ nova network-show 3f36d40e-1097-49a0-a023-4606dbf3a1f5 
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | xenbr0                               |
| bridge_interface    | eth0                                 |
| broadcast           | 192.168.1.255                        |
| cidr                | 192.168.1.0/24                       |
| cidr_v6             | -                                    |
| created_at          | 2014-12-28T17:18:14.000000           |
| deleted             | False                                |
| deleted_at          | -                                    |
| dhcp_server         | 192.168.1.50                         |
| dhcp_start          | 192.168.1.51                         |
| dns1                | 8.8.4.4                              |
| dns2                | -                                    |
| enable_dhcp         | True                                 |
| gateway             | 192.168.1.50                         |
| gateway_v6          | -                                    |
| host                | -                                    |
| id                  | 3f36d40e-1097-49a0-a023-4606dbf3a1f5 |
| injected            | False                                |
| label               | yuna-net                             |
| mtu                 | -                                    |
| multi_host          | True                                 |
| netmask             | 255.255.255.0                        |
| netmask_v6          | -                                    |
| priority            | -                                    |
| project_id          | -                                    |
| rxtx_base           | -                                    |
| share_address       | True                                 |
| updated_at          | -                                    |
| vlan                | -                                    |
| vpn_private_address | -                                    |
| vpn_public_address  | -                                    |
| vpn_public_port     | -                                    |
+---------------------+--------------------------------------+

Under the openstack section change the Identity service endpoint, OpenStack credentials, the private key location, and optionally set the timeout for OpenStack resources.

---
name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: manual
  ip: 192.168.1.55
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://controller:5000/v2.0
      username: admin
      api_key: admin
      tenant: admin
      default_security_groups: ["bosh"]
      default_key_name: microbosh
      private_key: /root/microbosh.pem
      state_timeout: 900

apply_spec:
  properties:
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.europe.pool.ntp.org
      - 1.europe.pool.ntp.org

Finally, set the current deployment manifest file and deploy MicroBOSH.

$ bosh micro deployment microbosh-openstack.yml
$ bosh micro deploy bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz

If everything goes well, you should be able to log in to the MicroBOSH instance (use admin for both the username and password).

$ bosh target 192.168.1.55
Target set to 'microbosh-openstack'
Your username: admin
Enter password: *****
Logged in as 'admin'
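
You will also need the Director UUID later, when writing the Cloud Foundry deployment manifest. With the target already set as above, you can grab it right away:

$ bosh status --uuid

If your bosh CLI version does not support the --uuid flag, plain bosh status prints the UUID as part of its output.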

Deploying Cloud Foundry

Start by cloning the Cloud Foundry repository. Enter the newly created cf-release directory and execute the update script to update all submodules.

$ git clone https://github.com/cloudfoundry/cf-release.git
$ cd cf-release
$ ./update

Upload the stemcell to the BOSH Director.

$ bosh upload stemcell bosh-stemcell-2839-openstack-kvm-centos-go_agent-raw.tgz

In BOSH terminology, a release is a collection of packages and source code, dependencies, configuration properties, and any other components required to perform a deployment. To create a Cloud Foundry release, run this command from the cf-release directory.

$ bosh create release

This will download the required blobs from the S3 storage service and generate a release tarball. You should end up with directory structures similar to these.

$ ls blobs
buildpack_cache    git           haproxy         mysql             php-buildpack     rootfs      ruby-buildpack
cli                go-buildpack  java-buildpack  nginx             postgres          ruby        sqlite
debian_nfs_server  golang        libyaml         nodejs-buildpack  python-buildpack  ruby-2.1.4  uaa

$ ls packages
acceptance-tests        buildpack_python     dea_next             golang     loggregator_trafficcontroller  postgres        warden
buildpack_cache         buildpack_ruby       debian_nfs_server    golang1.3  login                          rootfs_lucid64
buildpack_go            cli                  doppler              gorouter   metron_agent                   ruby
buildpack_java          cloud_controller_ng  etcd                 haproxy    mysqlclient                    ruby-2.1.4
buildpack_java_offline  collector            etcd_metrics_server  hm9000     nats                           smoke-tests
buildpack_nodejs        common               git                  libpq      nginx                          sqlite
buildpack_php           dea_logging_agent    gnatsd               libyaml    nginx_newrelic_plugin          uaa
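
If your cf-release working tree has uncommitted changes (which can easily happen after running the update script or tweaking blobs), bosh may refuse to cut a dev release; in that case the --force flag skips the dirty-state check:

$ bosh create release --force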

Now you can upload the release to the BOSH Director.

$ bosh upload release

The most complex part of a Cloud Foundry BOSH deployment is the manifest file, where all the components are tied together - compute resource specifications, VMs, software releases, and configuration properties. You can use the deployment manifest below, which worked well in my environment. Don't forget to create the cf.small and cf.medium flavors in OpenStack (see the example below).
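
The flavor sizes here are only an assumption of what a small lab environment might use; adjust RAM, disk and vCPUs to whatever your hosts can accommodate:

# nova flavor-create <name> <id> <ram MB> <disk GB> <vcpus> -- sizes below are assumptions
$ nova flavor-create cf.small auto 2048 20 1
$ nova flavor-create cf.medium auto 4096 40 2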

<%
director_uuid = 'YOUR_DIRECTOR_UUID'
static_ip = 'YOUR_FLOATING_IP'
root_domain = "#{static_ip}.xip.io"
deployment_name = 'cf'
cf_release = '194+dev.2'
protocol = 'http'
common_password = 'YOUR_PASSWORD'
%>
---
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>
 
releases:
- name: cf
  version: <%= cf_release %>

compilation:
  workers: 2
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.medium

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
- name: default
  type: dynamic
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5
    security_groups:
    - default
    - bosh
    - cf-private

- name: external
  type: dynamic
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5
    security_groups:
    - default
    - bosh
    - cf-private
    - cf-public

- name: float
  type: vip
  cloud_properties:
    net_id: 3f36d40e-1097-49a0-a023-4606dbf3a1f5

resource_pools:
- name: common
  network: default
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent-ft
    version: latest
  cloud_properties:
    instance_type: cf.small

- name: large
  network: default
  stemcell:
    name: bosh-openstack-kvm-ubuntu-trusty-go_agent-ft
    version: latest
  cloud_properties:
    instance_type: cf.medium

jobs:
- name: nats
  templates:
  - name: nats
  - name: nats_stream_forwarder
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: nfs_server
  templates:
  - name: debian_nfs_server
  instances: 1
  resource_pool: common
  persistent_disk: 65535
  networks:
  - name: default
    default: [dns, gateway]

- name: postgres
  templates:
  - name: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 65536
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    db: databases

- name: uaa
  templates:
  - name: uaa
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: trafficcontroller
  templates:
  - name: loggregator_trafficcontroller
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: cloud_controller
  templates:
  - name: nfs_mounter
  - name: cloud_controller_ng
  instances: 1
  resource_pool: large
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    db: ccdb

- name: health_manager
  templates:
  - name: hm9000
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 2
  resource_pool: large
  networks:
  - name: default
    default: [dns, gateway]

- name: router
  templates:
  - name: gorouter
  instances: 1
  resource_pool: common
  networks:
  - name: external
    default: [dns, gateway]
  - name: float
    static_ips:
    - <%= static_ip %>
  properties:
    networks:
      apps: external
 
properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
  - <%= root_domain %>

  haproxy: {}

  networks:
    apps: default

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.default.<%= deployment_name %>.microbosh
    port: 4222
    machines:
    - 0.nats.default.<%= deployment_name %>.microbosh

  nfs_server:
    address: 0.nfs-server.default.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    allow_from_entries:
    - 192.168.1.0/24 # change according to your subnet

  debian_nfs_server:
    no_root_squash: true

  metron_agent:
    zone: z1
  metron_endpoint:
    zone: z1
    shared_secret: <%= common_password %>

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.default.<%= deployment_name %>.microbosh

  loggregator:
    zone: z1
    servers:
      zone:
      - 0.loggregator.default.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'zone'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
      - 0.router.default.<%= deployment_name %>.microbosh
      z2: []

  etcd:
    machines:
    - 0.etcd.default.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
    - 169.254.0.0/16 # Google Metadata endpoint
    advertise_interval_in_seconds: 10
    heartbeat_interval_in_seconds: 10

  dea_next: *dea

  disk_quota_enabled: false

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
    - tag: uaa
      name: uaadb
      citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: uaa
      name: uaadb
      citext: true
 
  cc: &cc
    internal_api_password: <%= common_password %>
    security_group_definitions:
    - name: public_networks
      rules:
      - protocol: all
        destination: 0.0.0.0-9.255.255.255
      - protocol: all
        destination: 11.0.0.0-169.253.255.255
      - protocol: all
        destination: 169.255.0.0-172.15.255.255
      - protocol: all
        destination: 172.32.0.0-192.167.255.255
      - protocol: all
        destination: 192.169.0.0-255.255.255.255
    - name: internal_network
      rules:
      - protocol: all
        destination: 10.0.0.0-10.255.255.255
    - name: dns
      rules:
      - destination: 0.0.0.0/0
        ports: '53'
        protocol: tcp
      - destination: 0.0.0.0/0
        ports: '53'
        protocol: udp
    default_running_security_groups:
    - public_networks
    - internal_network
    - dns
    default_staging_security_groups:
    - public_networks
    - internal_network
    - dns
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
    - name: java_buildpack
      package: buildpack_java
    - name: ruby_buildpack
      package: buildpack_ruby
    - name: nodejs_buildpack
      package: buildpack_nodejs
    - name: go_buildpack
      package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego:
      staging: disabled
      running: disabled
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>
 
  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    login:
      client_secret: <%= common_password %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
      doppler:
        secret: <%= common_password %>
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

Set the deployment manifest and initiate the deploy. This process can take a few hours. Relax.

$ bosh deployment cf-deployment.yml
$ bosh deploy
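
Once the deploy finishes, you can verify that all jobs came up properly:

$ bosh vms

Every job should report a running state; if something is failing, bosh logs <job> <index> is a good place to start digging.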

Pushing an application

Download the cf CLI from https://github.com/cloudfoundry/cli/releases. Make sure you can access the API endpoint of the Cloud Foundry instance. If so, use cf login with your username, organization and space.

$ curl http://api.192.168.1.249.xip.io/info
$ cf login -a api.192.168.1.249.xip.io -u user -o rabbitstack -s qa
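
If the rabbitstack organization and qa space do not exist yet, you can create them after logging in as the admin user defined in the manifest (a sketch using the standard cf commands; the names match the login above):

$ cf login -a http://api.192.168.1.249.xip.io -u admin --skip-ssl-validation
$ cf create-org rabbitstack
$ cf target -o rabbitstack
$ cf create-space qa
$ cf target -o rabbitstack -s qa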

To test our instance we are going to push a very simple Node.js app. Create a new directory and place server.js and the application manifest.yml file in it.

// server.js - echoes the DEA-assigned port so we can see routing across instances
var http = require("http");

var server = http.createServer(function (req, res) {
    res.writeHead(200, {
        "Content-Type": "text/html"
    });
    res.end("Bunnies on Cloud Foundry. Port is " + process.env.VCAP_APP_PORT);
}).listen(process.env.VCAP_APP_PORT);

manifest.yml:

---
applications:
- name: rabbitstack
  path: .
  memory: 256M
  instances: 1

From within the directory run cf push and access http://rabbitstack.192.168.1.249.xip.io from the browser. Play with cf scale (see the example below) and see how the port number changes on every request.
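
For example (the app name comes from manifest.yml, and the instance count is just an illustration):

$ cf push
$ cf scale rabbitstack -i 3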


Congratulations! You now have a fully functional private Cloud Foundry.
