Sharing

Tuesday, October 30, 2012

National Health Insurance rules and money-saving tips


With our little strawberry about to be born and my dad about to retire, I had to figure out where grandma's NHI coverage should now be registered, so I did some research.
It started with this article, which gives a few examples of how to save money:
http://www.moneynet.com.tw/e_news.php?id=1508

Citizens over 65 are eligible for subsidies

This link lists the subsidy programs of the various local governments; essentially all of them have a wealth-exclusion clause

http://www.nhi.gov.tw/webdata/webdata.aspx?menu=18&menu_id=682&webdata_id=2393&WD_ID=745

Taking Taipei City as an example:
(1) Persons aged 65 or over (or indigenous persons aged 55 or over) who have had household registration in, and actually lived in, Taipei City for at least one year.
(2) Whose total household income for the most recent year, as assessed by the tax authority, is below the filing threshold, or whose income tax rate is below 20%; the same applies to seniors declared as dependents on a taxpayer's return.
(3) Who are not already receiving a full government subsidy of their NHI premium.

For more details, see here:
http://www.bosa.tcg.gov.tw/i/i0300.asp?fix_code=0425002&l1_code=04&l2_code=25

The subsidy is at most NT$749 per person per month (the self-paid premium of an NHI category-6 insured); premiums below NT$749 are subsidized at the actual amount.

Basically this subsidy is applied automatically, with no separate application needed. The exception is people whose subsidy was once stopped for not meeting the criteria: when they qualify again, they must re-apply.

<The NHI Bureau and the Department of Social Welfare say different things: one says it resumes automatically, the other says you must apply. I went with the Department of Social Welfare, since they are the ones who actually process the subsidy, and submitted the paperwork anyway.>

Dependents' premiums are charged for at most three people; any beyond that are free

http://dohlaw.doh.gov.tw/Chi/FLAW/FLAWDAT0201.asp?lsid=FL014028

Article 18
The premiums of category 1 through category 3 insureds and their dependents are calculated from the insured's insured salary and the premium rate; the premium rate is capped at 6%.
The dependents' premiums under the preceding paragraph are paid by the insured; if there are more than three dependents, they are counted as three.

If you have more than three dependents (not counting yourself), only three are counted. Below is the premium-share table, which likewise lists at most three dependents:
http://www.nhi.gov.tw/webdata/webdata.aspx?menu=1&menu_id=5&webdata_id=2389&WD_ID=
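To make the three-dependent cap concrete, here is a tiny arithmetic sketch (the NT$600 monthly share and the five dependents are made-up numbers):

```shell
# Hypothetical: your own monthly share is NT$600 and you have 5 dependents;
# Article 18 counts at most three of them
own=600
dependents=5
counted=$(( dependents < 3 ? dependents : 3 ))
echo "monthly total: NT$ $(( own * (1 + counted) ))"
```

So with five dependents you pay for yourself plus three, not yourself plus five.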


Attach dependents under whichever spouse earns less

Everyone probably knows this one already, but a few extra things to note:
1. "Lineal blood relatives" does not include in-laws, i.e. your own parents cannot be attached to your wife's or husband's NHI.
2. A spouse who has a job cannot be attached under you either.
3. A grandmother cannot be attached under a grandchild unless all of her children have retired and become someone else's dependents.
    <Honestly, I don't know which article this last rule comes from; it's what the NHI Bureau told me over the phone.>

Article 2
2. Dependents:
(1) The insured's spouse, if without occupation.
(2) The insured's lineal blood ascendants, if without occupation.
(3) The insured's lineal blood descendants within the second degree who are under 20 and without occupation, or who are 20 or older but unable to make a living, or still in school, and without occupation.

Insure parents as category-6 insureds

If your own NHI premium is more than NT$749, have your parents or grandmother insure through the district office instead of attaching them under you:
http://www.nhi.gov.tw/webdata/webdata.aspx?menu=1&menu_id=5&webdata_id=3328&WD_ID=
http://www.nhi.gov.tw/webdata/webdata.aspx?menu=18&menu_id=678&webdata_id=3436&WD_ID=722
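A quick back-of-the-envelope comparison (the NT$932 personal share is a made-up number; NT$749 is the category-6 self-paid premium mentioned above):

```shell
# Hypothetical: your salary-based share is NT$932/month, and each dependent
# attached under you pays the same; category 6 via the district office is a
# flat NT$749/month per person (before any senior subsidy)
own=932
cat6=749
echo "two parents as dependents:  NT$ $(( own * 2 ))"
echo "two parents as category 6:  NT$ $(( cat6 * 2 ))"
```

Whenever your own share exceeds NT$749, category 6 comes out cheaper, and the senior subsidy can reduce it further.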


Related links:
Premium DIY calculator
http://www.nhi.gov.tw/webdata/webdata.aspx?menu=18&menu_id=679&webdata_id=3444&WD_ID=679

Sunday, October 28, 2012

OpenStack Folsom - Boot from Volume (Rados Block Device)


This is a hidden new feature of Folsom. Previously, you had to pull sleight-of-hand tricks to swap out a volume's contents; that is no longer necessary, since the feature is now integrated into Folsom. The setup and operations below follow this guide:

http://ceph.com/docs/master/rbd/rbd-openstack/

Cinder Configuration

For the base installation and configuration, see openstack-folsom-installation-of-cinder

/etc/cinder/cinder.conf

Because cinder needs to fetch the image template from glance, add the glance host's IP to the config file:
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://cinder:password@localhost:3306/cinder
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
volume_driver=cinder.volume.driver.RBDDriver
rabbit_host=rabbitmq
rabbit_password = password
my_ip = 172.17.123.12
glance_host = 172.17.123.16

Glance Configuration

For the base installation and configuration, see openstack-folsom-installation-of-glance

/etc/glance/glance-api.conf

glance needs to hand out the rbd URL, so turn on this hidden option:
show_image_direct_url = True

Upload Image

To boot directly from a block device, the source image currently has to be a raw image; qcow2 won't work. So we first convert the qcow2 image to raw, then upload it to glance:
root@glance:~$ wget http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
root@glance:~$ kvm-img convert -f qcow2 -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1.raw
root@glance:~$ glance image-create --name Ubuntu-Precise-Raw --is-public true --container-format bare --disk-format raw < ./precise-server-cloudimg-amd64-disk1.raw
root@glance:~$ glance image-list
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size       | Status |
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| cad779fc-c851-4581-ac4d-474c3773bf89 | Ubuntu-Precise-Raw  | raw         | bare             | 2147483648 | active |
+--------------------------------------+---------------------+-------------+------------------+------------+--------+

Create Volume from Image Template

Next, create a new volume with cinder, adding one extra parameter so it is generated from the image template:
root@cinder:~# cinder create --image-id cad779fc-c851-4581-ac4d-474c3773bf89 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|      created_at     |      2012-10-29T03:12:59.504616      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|          id         | 4e8527a9-eb01-44f1-8fed-fc831c4134f4 |
|       image_id      | cad779fc-c851-4581-ac4d-474c3773bf89 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

root@cinder:~# cinder list
+--------------------------------------+-----------+----------------+------+-------------+--------------------------------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type |             Attached to              |
+--------------------------------------+-----------+----------------+------+-------------+--------------------------------------+
| 4e8527a9-eb01-44f1-8fed-fc831c4134f4 | available |      None      |  10  |     None    |                                      |
+--------------------------------------+-----------+----------------+------+-------------+--------------------------------------+

root@cinder:~# rbd info volume-4e8527a9-eb01-44f1-8fed-fc831c4134f4
rbd image 'volume-4e8527a9-eb01-44f1-8fed-fc831c4134f4':
        size 10240 MB in 1280 objects
        order 23 (8192 KB objects)
        block_name_prefix: rbd_data.2f8e5262f5ff
        format: 2
        features: layering
        parent: images/cad779fc-c851-4581-ac4d-474c3773bf89@snap
        overlap: 2048 MB

Watch /var/log/cinder/cinder-volume.log
# fetch the image location
2012-10-29 11:12:59 DEBUG cinder.volume.manager [req-eed17d93-f2da-479b-b04c-4418ca4948b3 fafd0583de8a4a1b93b924a6b2cb7e
b5 eefa301a6a424e7da3d582649ad0e59e] image_location: rbd://77e083f7-de88-4f9e-b654-8ce6949a3039/images/cad779fc-c851-458
1-ac4d-474c3773bf89/snap create_volume /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:151
2012-10-29 11:12:59 DEBUG cinder.utils [req-eed17d93-f2da-479b-b04c-4418ca4948b3 fafd0583de8a4a1b93b924a6b2cb7eb5 eefa30
1a6a424e7da3d582649ad0e59e] Running cmd (subprocess): ceph fsid execute /usr/lib/python2.7/dist-packages/cinder/utils.py
:163

# check the image's snapshot
2012-10-29 11:12:59 DEBUG cinder.utils [req-eed17d93-f2da-479b-b04c-4418ca4948b3 fafd0583de8a4a1b93b924a6b2cb7eb5 eefa30
1a6a424e7da3d582649ad0e59e] Running cmd (subprocess): rbd info --pool images --image cad779fc-c851-4581-ac4d-474c3773bf8
9 --snap snap execute /usr/lib/python2.7/dist-packages/cinder/utils.py:163

# use ceph's clone feature (copy-on-write)
2012-10-29 11:13:00 DEBUG cinder.utils [req-eed17d93-f2da-479b-b04c-4418ca4948b3 fafd0583de8a4a1b93b924a6b2cb7eb5 eefa30
1a6a424e7da3d582649ad0e59e] Running cmd (subprocess): rbd clone --pool images --image cad779fc-c851-4581-ac4d-474c3773bf
89 --snap snap --dest-pool rbd --dest volume-4e8527a9-eb01-44f1-8fed-fc831c4134f4 execute /usr/lib/python2.7/dist-packag
es/cinder/utils.py:163

# finally, resize the volume
2012-10-29 11:13:00 DEBUG cinder.utils [req-eed17d93-f2da-479b-b04c-4418ca4948b3 
fafd0583de8a4a1b93b924a6b2cb7eb5 eefa301a6a424e7da3d582649ad0e59e] Running cmd (subprocess): rbd resize --pool rbd --image volume-4e8527a9-eb01-44f1-8fed-fc831c4134f4 --size 10240 execute /usr/lib/python2.7/dist-packages/cinder/utils.py:163

Create VM

Select the raw image we just uploaded

Select "Boot from volume", then pick the volume just created from the image template with the cinder command


Looking at the VM on the compute node, its block device connects straight to Ceph over the rbd protocol.
root@nova:~$ virsh list
 Id Name                 State
----------------------------------
  1 instance-00000023    running
  4 instance-0000002b    running

root@nova:~$ virsh domblklist 4
Target     Source
------------------------------------------------
vda        rbd/volume-4e8527a9-eb01-44f1-8fed-fc831c4134f4

Addendum

If glance runs into trouble, the problem may be in the glue code, but the fix is small: mainly converting unicode to str.

/usr/lib/python2.7/dist-packages/glance/store/rbd.py

        with rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
            with conn.open_ioctx(self.pool) as ioctx:
                if loc.snapshot:
                    # changed this line
                    with rbd.Image(ioctx, str(loc.image)) as image:
                        try:
                            # changed this line
                            image.unprotect_snap(str(loc.snapshot))
                        except rbd.ImageBusy:
                            log_msg = _("snapshot %s@%s could not be "
                                        "unprotected because it is in use")
                            LOG.error(log_msg % (loc.image, loc.snapshot))
                            raise exception.InUseByStore()
                        # changed this line
                        image.remove_snap(str(loc.snapshot))
                try:
                    # changed this line
                    rbd.RBD().remove(ioctx, str(loc.image))
                except rbd.ImageNotFound:
                    raise exception.NotFound(
                        _('RBD image %s does not exist') % loc.image)
                except rbd.ImageBusy:
                    log_msg = _("image %s could not be removed"
                                " because it is in use")
                    LOG.error(log_msg % loc.image)
                    raise exception.InUseByStore()



Wednesday, October 24, 2012

vsftpd server with virtual accounts


If it's your first time setting up an ftp server, these two articles will get you running quickly:
https://help.ubuntu.com/12.04/serverguide/ftp-server.html
http://manpages.ubuntu.com/manpages/precise/en/man5/vsftpd.conf.5.html

If you want the ftp server to use virtual users instead of real accounts on the server, read this:

http://sigerr.org/linux/setup-vsftpd-custom-multiple-directories-users-accounts-ubuntu-step-by-step

Package Installation

Three packages are needed: vsftpd itself; libpam-pwdfile (PAM, Pluggable Authentication Modules), used to create and authenticate the virtual accounts; and a small utility from apache that we'll use when creating accounts.
root@ubuntu:~$ apt-get install vsftpd libpam-pwdfile apache2-utils

Configuration

/etc/pam.d/vsftpd-virtual

Set up the authentication mechanism; the passwords are stored in /nfsroot/ftp/ftpd.passwd:
# Customized login using htpasswd file
auth required pam_pwdfile.so pwdfile /nfsroot/ftp/ftpd.passwd
account required pam_permit.so

/etc/vsftpd.conf

The first few options come from the stock config file and I left them alone. Since I only want an ftp site people can download from, with no upload support, I did not turn on write_enable.
listen=YES
# changed to NO
anonymous_enable=NO
# changed to YES
local_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
# changed to YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
# use a separate PAM config file
pam_service_name=vsftpd-virtual
rsa_cert_file=/etc/ssl/private/vsftpd.pem
# log every virtual account in as the guest user
guest_enable=YES
# give each virtual account its own directory, isolated from the others
user_sub_token=$USER
local_root=/nfsroot/ftp/$USER
# show file owners as ftp
hide_ids=YES
# honor the local directory permissions
virtual_use_local_privs=YES

Register User

Here is how to create a virtual account. Because local_root is set to /nfsroot/ftp/$USER in vsftpd.conf, we have to create the directory ourselves and change its owner to ftp (the default guest account). The permissions must be set to non-writable, otherwise login will fail.
root@ubuntu:~$ touch /nfsroot/ftp/ftpd.passwd
root@ubuntu:~$ htpasswd -bd /nfsroot/ftp/ftpd.passwd <username> <password>
root@ubuntu:~$ mkdir /nfsroot/ftp/<username>
root@ubuntu:~$ chown ftp:nogroup /nfsroot/ftp/<username>
root@ubuntu:~$ chmod -w /nfsroot/ftp/<username>

Once all of the above is done, remember to restart vsftpd, and you're all set.

Other similar write-ups with slightly different configurations, worth a look:

http://www.onaxer.com/2010/12/01/virtual-users-and-directories-in-vsftpd/
http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/




Addendum, 2012/11/13:

Writable Root Folder

If a virtual account's root folder is not read-only, login fails and you will see the following message:

500 OOPS: vsftpd: refusing to run with writable root inside chroot()

Searching around, I found that this restriction first appeared in vsftpd 2.3.5; earlier versions did not have it. Plenty of people complain that it is awkward, because you have to create a writable subdirectory before users can upload anything to the ftp server. There are a few workarounds:
http://www.benscobie.com/fixing-500-oops-vsftpd-refusing-to-run-with-writable-root-inside-chroot/#comment-2051

Workaround 1: upgrade to vsftpd 3.0.0 and add allow_writeable_chroot=YES to the config file

Workaround 2: downgrade to vsftpd 2.3.2

Workaround 3: patch the 2.3.5 code to lift the restriction; someone has already done this:

root@ubuntu:~$ apt-get install python-software-properties
root@ubuntu:~$ add-apt-repository ppa:thefrontiergroup/vsftpd
root@ubuntu:~$ apt-get update
root@ubuntu:~$ apt-get install vsftpd

I went with workaround 3. Under this setup, if you want a read-only account, just strip the write permission from its root folder as before; if you want the user to be able to upload files, remember to add write permission back to the root folder.

/etc/vsftpd.conf

listen=YES
# changed to NO
anonymous_enable=NO
# changed to YES
local_enable=YES
# changed to YES
write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
# changed to YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
# use a separate PAM config file
pam_service_name=vsftpd-virtual
rsa_cert_file=/etc/ssl/private/vsftpd.pem
# log every virtual account in as the guest user
guest_enable=YES
# give each virtual account its own directory, isolated from the others
user_sub_token=$USER
local_root=/nfsroot/ftp/$USER
# show file owners as ftp
hide_ids=YES
# honor the local directory permissions
virtual_use_local_privs=YES
allow_writeable_chroot=YES



Sunday, October 21, 2012

Using Git filter-branch


I recently helped reorganize a git repository, which gave me a chance to work with git filter-branch. Because no policy had been defined, everyone had been committing binary files into the repository, bloating it and making checkouts painfully slow. My task was to strip out those binaries and replace them with md5 or sha1 stub files; when a binary is actually needed, it gets downloaded from a separate http server instead.
http://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html

Suppose the repository has 1000 revisions. filter-branch works by checking out each revision in order, running the command you specify to modify file contents or even add/delete files, then committing the modified files into the new repository. After it finishes, the SHA ID of every entry in the repository has changed. It is a brute-force approach, but precisely because it is so brute-force, you can accomplish pretty much anything with it.
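As a throwaway demonstration of that rewrite loop (the repository and file names here are all invented), the following builds a scratch repository and strips one file out of every revision:

```shell
# Scratch repo with a file we later regret committing (all names hypothetical)
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.name me && git config user.email me@example.com
echo hunter2 > secret.txt && git add . && git commit -qm "add secret"
echo hello > README && git add . && git commit -qm "add readme"

# Check out each revision in turn, run the command, and re-commit the result;
# afterwards every SHA in the rewritten history is different
git filter-branch --tree-filter 'rm -f secret.txt' HEAD

# secret.txt no longer exists anywhere in the rewritten history
git ls-tree -r HEAD --name-only
```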

Filter

filter-branch provides a number of filters that let you run commands at the right point of the rewrite

--env-filter

Lets you rewrite the author name or author e-mail; see the script at:
https://help.github.com/articles/changing-author-info
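The linked script is a fuller version of the same idea; here is a minimal, self-contained sketch (addresses invented) that rewrites the author e-mail on every commit:

```shell
# Scratch repo with one commit under the "wrong" address (values hypothetical)
cd "$(mktemp -d)"
git init -q && git config user.name me && git config user.email old@example.com
echo hi > file.txt && git add . && git commit -qm init

# --env-filter runs before each commit is re-created; exported GIT_* variables
# override the author/committer recorded on the rewritten commit
git filter-branch --env-filter '
    if [ "$GIT_AUTHOR_EMAIL" = "old@example.com" ]; then
        export GIT_AUTHOR_EMAIL=new@example.com
    fi
' HEAD

git log --format='%ae'   # now shows new@example.com
```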

--tree-filter

Lets you modify file contents and add or delete files in every revision; this is probably the most used filter. In my case, I wrote a script like the following to replace large files with sha1 stubs:
#!/bin/bash
function transform {

    file=$1
    sha1_name=$file.sha1

    if [ -f /tmp/git/$file ]; then
        rm -f $file
        cp /tmp/git/$sha1_name $sha1_name
    else
        sha1sum $file > $sha1_name
        mv $file /tmp/git
        cp $sha1_name /tmp/git
    fi

}

pushd ./bigfile
for file in *.tgz *.zip; do
    if [ ! -f $file ]; then
        continue;
    fi

    transform $file
done
popd

--index-filter

This is essentially a fast version of --tree-filter. If you only need to change the repository's history, not the file contents, use this one: it never actually checks the files out, so it is much faster. Both links below have walkthroughs; the most common use is permanently deleting one specific file from the repository.
https://help.github.com/articles/remove-sensitive-data
http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository
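A self-contained sketch of the usual purge-one-file recipe (file names invented); because only the index is rewritten, nothing gets checked out:

```shell
# Scratch repo: one unwanted binary plus a normal file (names hypothetical)
cd "$(mktemp -d)"
git init -q && git config user.name me && git config user.email me@example.com
printf 'blob' > big.bin && git add . && git commit -qm "add big.bin"
echo text > keep.txt && git add . && git commit -qm "add keep.txt"

# Drop big.bin from the index of every revision; --prune-empty discards
# commits that end up making no change at all
git filter-branch --prune-empty --index-filter \
    'git rm --cached --ignore-unmatch big.bin' HEAD

git ls-tree -r HEAD --name-only   # keep.txt remains, big.bin is gone
```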

--msg-filter

This rewrites commit messages. The original message comes in on standard input, and whatever you write to standard output becomes the new commit message.
# for example, cat passes its input through unchanged, so the messages stay the same
$ git filter-branch --msg-filter cat
# tac reverses the lines of each message
$ git filter-branch --msg-filter tac

--tag-name-filter

Used to rename tags. When a rewritten revision carries a tag, the command you specify is invoked with the original tag name on standard input, and whatever you write to standard output becomes the new tag name
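A small self-contained sketch (tag and file names invented) that prefixes every tag with release-:

```shell
# Scratch repo with one tagged commit (names hypothetical)
cd "$(mktemp -d)"
git init -q && git config user.name me && git config user.email me@example.com
echo hi > file.txt && git add . && git commit -qm init
git tag v1.0

# The old tag name arrives on stdin; whatever the command prints becomes the
# new tag name (the original tag is left in place alongside the new one)
git filter-branch --tag-name-filter 'sed s/^v/release-/' -- --all

git tag
```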

--subdirectory-filter

Splits the commits under one folder out into a new repository. As a project grows, you may want to spin a directory off into its own project, and this filter is perfect for that
http://gugod.org/2012/07/split-a-git-repository/
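A compact self-contained sketch (directory and file names invented):

```shell
# Scratch repo: a lib/ subdirectory plus a top-level file (names hypothetical)
cd "$(mktemp -d)"
git init -q && git config user.name me && git config user.email me@example.com
mkdir lib && echo code > lib/core.sh && echo top > main.sh
git add . && git commit -qm init

# Keep only the history touching lib/ and promote its contents to the root;
# commits that never touched lib/ are discarded
git filter-branch --subdirectory-filter lib -- --all

git ls-tree -r HEAD --name-only   # core.sh, now at the repository root
```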

Other Options

--prune-empty

Automatically removes any empty commits left behind by a filter

-f / --force

Forces filter-branch to run even when temporary state from a previous rewrite (e.g. an existing refs/original backup) is still around, clearing it first

Thursday, October 11, 2012

OpenStack Folsom - Installation of Horizon

Before Installation


Add repository

root@horizon:~$ apt-get install -y python-software-properties
root@horizon:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
root@horizon:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-deps-staging
root@horizon:~$ apt-get update && apt-get -y dist-upgrade


Hostname Setting

The simplest approach is to list the hostnames you'll need in /etc/hosts

172.17.123.12   rabbitmq
172.17.123.12   mysql
172.17.123.12   cinder
172.17.123.13   keystone
172.17.123.14   swift-proxy
172.17.123.16   glance
172.17.123.17   nova
172.17.123.18   horizon

Environment Variable Setting

Create a file, novarc, containing some environment variables we'll need shortly, and source it from .bashrc so it doesn't have to be set up again next time.
root@horizon:~$ cat novarc
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://keystone:5000/v2.0/"
export SERVICE_ENDPOINT="http://keystone:35357/v2.0"
export SERVICE_TOKEN=password
root@horizon:~$ source novarc
root@horizon:~$ echo "source novarc">>.bashrc


Horizon Installation

Install Package


root@horizon:~$ apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard

Configuration


Point the dashboard at keystone's address
/etc/openstack-dashboard/local_settings.py
...
OPENSTACK_HOST = "keystone"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
...

This release of the dashboard adds a ubuntu_theme.py setting that lets you apply a custom theme. The default Ubuntu theme looks a bit odd; if you can't get used to it, delete /etc/openstack-dashboard/ubuntu_theme.py and the UI becomes much easier on the eyes.


One more reminder: the official docs mention the SWIFT_ENABLED and QUANTUM_ENABLED settings, but they no longer do anything; the dashboard simply asks keystone which services are available. So if you installed keystone from a script, the dashboard may show a "Network" panel even though Quantum is not installed. Remember to delete the quantum endpoint service from keystone, then restart apache2.
https://answers.launchpad.net/horizon/+question/210437
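The cleanup can be done with the Folsom-era keystone CLI along these lines (the IDs are placeholders to be read off your own listings):

```
# find the quantum service and endpoint IDs, then delete both
keystone service-list
keystone endpoint-list
keystone endpoint-delete <quantum-endpoint-id>
keystone service-delete <quantum-service-id>
service apache2 restart
```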

Login

Once configured, log in at http://<horizon's ip>/horizon. The account and password are whatever was set up in keystone; mine is admin/password



Create VM

Choose the Ubuntu-Precise image uploaded earlier

Inject mykey

After it boots you can also operate it through the VNC console. If VNC doesn't come up, nova-novncproxy may have failed to start; check whether you hit the same problem I did, described at the end of the previous post:
OpenStack Folsom - Installation of Nova



Tuesday, October 9, 2012

OpenStack Folsom - Installation of Nova

Before Installation

Add repository

root@nova:~$ apt-get install -y python-software-properties
root@nova:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
root@nova:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-deps-staging
root@nova:~$ apt-get update && apt-get -y dist-upgrade

Hostname Setting

The simplest approach is to list the hostnames you'll need in /etc/hosts
172.17.123.12   rabbitmq
172.17.123.12   mysql
172.17.123.12   cinder
172.17.123.13   keystone
172.17.123.14   swift-proxy
172.17.123.16   glance
172.17.123.17   nova

MySQL Setting

Create a new database in MySQL. If MySQL and Nova are on the same server, remember to also grant login permission and a password for localhost
mysql> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

On the nova host, remember to install mysql-client as well, then check that the connection works
root@nova:~$ apt-get install mysql-client python-mysqldb
root@nova:~$ mysql -h 172.17.123.12 -u nova -ppassword
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 236
Server version: 5.5.24-0ubuntu0.12.04.1 (Ubuntu)

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Environment Variable Setting

Create a file, novarc, containing some environment variables we'll need shortly, and source it from .bashrc so it doesn't have to be set up again next time.
root@nova:~$ cat novarc
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://keystone:5000/v2.0/"
export SERVICE_ENDPOINT="http://keystone:35357/v2.0"
export SERVICE_TOKEN=password
root@nova:~$ source novarc
root@nova:~$ echo "source novarc">>.bashrc

Keystone Setting

Make sure the endpoint URLs are all registered
root@nova:~$ apt-get install python-keystoneclient
root@nova:~$ keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+--------------------------------------+
|                id                |   region  |                   publicurl                   |                  internalurl                  |               adminurl               |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+--------------------------------------+
| 580b71d126804c5197b91c79fd74a330 | RegionOne |           http://keystone:5000/v2.0           |           http://keystone:5000/v2.0           |      http://keystone:35357/v2.0      |
| 5ef55e38e5c54477bd659d4185d0a776 | RegionOne |             http://glance:9292/v2             |             http://glance:9292/v2             |        http://glance:9292/v2         |
| 6c788747593d475f831b6ff128bde995 | RegionOne |      http://cinder:8776/v1/$(tenant_id)s      |      http://cinder:8776/v1/$(tenant_id)s      | http://cinder:8776/v1/$(tenant_id)s  |
| 95e16e71a8f04ac68ae401df5284ce3e | RegionOne | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s |      http://swift-proxy:8080/v1      |
| c9659fab79454ee38bd926a2b78fa351 | RegionOne |       http://nova:8774/v2/$(tenant_id)s       |       http://nova:8774/v2/$(tenant_id)s       |  http://nova:8774/v2/$(tenant_id)s   |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+--------------------------------------+

CEPH Installation

Optional: if Cinder is configured to use LVM, skip this. For a deeper introduction, see this link:
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
root@nova:~$ wget -q -O - https://raw.github.com/ceph/ceph/master/keys/release.asc | apt-key add -
OK
# manually add a ceph.list under /etc/apt/sources.list.d
root@nova:/etc/apt/sources.list.d$ cat ceph.list
deb http://ceph.newdream.net/debian/ precise main
deb-src http://ceph.newdream.net/debian/ precise main
root@nova:~$ apt-get update
root@nova:~$ apt-get install -y ceph python-ceph
root@nova:~$ dpkg -l | grep ceph
ii  ceph                                            0.48.2argonaut-1precise                     distributed storage and file system
ii  ceph-common                                     0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common                                  0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph file system
ii  ceph-fuse                                       0.48.2argonaut-1precise                     FUSE-based client for the Ceph distributed file system
ii  ceph-mds                                        0.48.2argonaut-1precise                     metadata server for the ceph distributed file system
ii  libcephfs1                                      0.48.2argonaut-1precise                     Ceph distributed file system client library
ii  python-ceph                                     0.48.2argonaut-1precise                     Python libraries for the Ceph distributed filesystem

# once installed, copy your ceph cluster's config file into /etc/ceph and it should just work
# for how to set up the ceph cluster itself, see the official ceph site
root@nova:~$ ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {wistor-003=172.17.123.92:6789/0,wistor-006=172.17.123.94:6789/0,wistor-007=172.17.123.95:6789/0}, election epoch 10, quorum 0,1,2 wistor-003,wistor-006,wistor-007
   osdmap e24: 23 osds: 23 up, 23 in
    pgmap v2242: 4416 pgs: 4416 active+clean; 8362 MB data, 156 GB used, 19850 GB / 21077 GB avail
   mdsmap e1: 0/0/1 up

Nova Installation

Nova Package

Since we're integrating with Cinder, nova-volume is not installed
root@nova:~$ apt-get install nova-compute nova-api nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler nova-network nova-novncproxy novnc python-novnc
root@nova:~$ dpkg -l | grep nova
ii  nova-ajax-console-proxy          2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - AJAX console proxy - transitional package
ii  nova-api                         2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - API frontend
ii  nova-cert                        2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - certificate management
ii  nova-common                      2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - common files
ii  nova-compute                     2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - compute node
ii  nova-compute-kvm                 2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - compute node (KVM)
ii  nova-consoleauth                 2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - Console Authenticator
ii  nova-doc                         2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - documentation
ii  nova-network                     2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - Network manager
ii  nova-novncproxy                  2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - NoVNC proxy
ii  nova-scheduler                   2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute - virtual machine scheduler
ii  novnc                            2012.2~20120906+dfsg-0ubuntu2~precise              HTML5 VNC client
ii  python-nova                      2012.2+git201210091907~precise-0ubuntu1            OpenStack Compute Python libraries
ii  python-novaclient                1:2.9.0.10+git201210101300~precise-0ubuntu1        client library for OpenStack Compute API
ii  python-novnc                     2012.2~20120906+dfsg-0ubuntu2~precise              HTML5 VNC client - libraries

/etc/nova/nova.conf

The settings below integrate with Keystone, Cinder (Ceph), and Glance (Ceph); no iSCSI server is involved.
[DEFAULT]
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
ec2_private_dns_show_ip=True

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova

# AUTHENTICATION
auth_strategy=keystone
keystone_ec2_url=http://keystone:5000/v2.0/ec2tokens

# VOLUMES
#volume_driver=nova.volume.driver.ISCSIDriver
#volume_group=nova-volumes
#volume_name_template=volume-%08x
#iscsi_helper=tgtadm
volume_driver=nova.volume.driver.RBDDriver
volume_api_class=nova.volume.cinder.API
volumes_path=/var/lib/nova/volumes

# DATABASE
sql_connection=mysql://nova:password@mysql/nova

# COMPUTE
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
libvirt_use_virtio_for_bridges=True

# RABBITMQ
rabbit_host=rabbitmq
rabbit_password=password

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=glance:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
my_ip=172.17.123.17
public_interface=br100
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
fixed_range=192.168.100.0/27

# NOVNC CONSOLE
novnc_enable=true
novncproxy_base_url=http://172.17.123.17:6080/vnc_auto.html
vncserver_proxyclient_address=127.0.0.1
vncserver_listen=0.0.0.0

/etc/nova/api-paste.ini

Fill in the keystone-related information
...
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = keystone
auth_port = 35357
auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dirname = /tmp/keystone-signing-nova

/etc/nova/nova_restart.sh

Also prepare a nova_restart.sh to make the next steps easier
#!/bin/bash
for a in nova-network nova-compute nova-api nova-scheduler; do sudo service $a stop; done
for a in nova-consoleauth nova-cert novnc libvirt-bin; do sudo service $a stop; done
for a in nova-network nova-compute nova-api nova-scheduler; do sudo service $a start; done
for a in nova-consoleauth nova-cert novnc libvirt-bin; do sudo service $a start; done

After installing nova, remember to sync the database; once that's done, start the services
root@nova:~$ chown -R nova:nova *
root@nova:~$ nova-manage db sync
root@nova:~$ /etc/nova/nova_restart.sh
root@nova:~$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        nova                                 nova             enabled    :-)   2012-10-09 02:38:29
nova-scheduler   nova                                 nova             enabled    :-)   2012-10-09 02:38:29
nova-consoleauth nova                                 nova             enabled    :-)   2012-10-09 02:38:29
nova-compute     nova                                 nova             enabled    :-)   2012-10-09 02:38:31
nova-network     nova                                 nova             enabled    :-)   2012-10-09 02:38:30

root@nova:~$ ps aux | grep nova | grep python
nova      3611  0.3  0.7 209044 58800 ?        S    15:05   0:07 /usr/bin/python /usr/bin/nova-network --config-file=/etc/nova/nova.conf
nova      3623  0.4  0.8 1367636 67448 ?       Sl   15:05   0:10 /usr/bin/python /usr/bin/nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf
nova      3634  0.0  0.7 136864 58780 ?        S    15:05   0:01 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova      3645  0.2  0.7 276524 61796 ?        S    15:05   0:05 /usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf
nova      3656  0.2  0.6 202344 52512 ?        S    15:05   0:04 /usr/bin/python /usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf
nova      3667  0.2  0.6 202216 52248 ?        S    15:05   0:05 /usr/bin/python /usr/bin/nova-cert --config-file=/etc/nova/nova.conf
nova      3755  0.0  0.2  95772 22824 ?        S    15:05   0:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova      4164  0.0  0.9 250300 79596 ?        S    15:05   0:01 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova      4171  0.0  0.6 135852 54420 ?        S    15:05   0:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova      4173  0.0  0.6 137112 55960 ?        S    15:05   0:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova      7092  0.0  0.2 122972 24216 ?        S    15:13   0:00 /usr/bin/python /usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
nova      7137  0.0  0.3 137492 25832 ?        S    15:13   0:00 /usr/bin/python /usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf

Test and Verification

Verify that all the components are wired up correctly. First, Cinder:
root@nova:~$ nova volume-create --display-name test 1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| created_at          | 2012-10-09T08:35:06.518633           |
| display_description | None                                 |
| display_name        | test                                 |
| id                  | 728832a1-32d8-44f3-ba1a-8944adbeca11 |
| metadata            | {}                                   |
| size                | 1                                    |
| snapshot_id         | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+

root@nova:~$ nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 728832a1-32d8-44f3-ba1a-8944adbeca11 | available | test         | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+

root@nova:~$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 728832a1-32d8-44f3-ba1a-8944adbeca11 | available |     test     |  1   |     None    |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+

root@nova:~$ nova volume-delete test
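Volume creation is asynchronous: the status starts at creating and later flips to available, so scripts usually poll before attaching. A minimal sketch of pulling the Status field out of the table, run here against a captured row of the cinder list output above (in practice you would pipe the live command into awk):

```shell
# Parse the Status column out of cinder's ASCII table.
# "sample" is a captured row from the listing above; in a real script
# you would use: cinder list | awk -F'|' ...
sample='| 728832a1-32d8-44f3-ba1a-8944adbeca11 | available |     test     |  1   |     None    |             |'
status=$(printf '%s\n' "$sample" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
echo "$status"   # -> available
```

Looping on this with a short sleep until the value is available (or error) makes the later volume-attach step safe to automate.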

Next, verify that the link to Glance works:
root@nova:~$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu-Precise      | ACTIVE |        |
| ad46b050-a03e-4d31-bc60-84f81806853b | tty-linux           | ACTIVE |        |
| e504fcf2-fdbd-4d15-be1c-49e24dd28458 | tty-linux-kernel    | ACTIVE |        |
| 5897d530-b625-4b7c-91eb-56313cf2741c | tty-linux-ramdisk   | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

root@nova:~$ wget -c https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O stackimages/cirros.img

root@nova:~$ glance add name=cirros-0.3.0-x86_64 disk_format=qcow2 container_format=bare < cirros.img
Added new image with ID: 1e4a8f0c-235f-46ce-9aef-fc7fa143f141

root@nova:~$ glance image-list
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 1e4a8f0c-235f-46ce-9aef-fc7fa143f141 | cirros-0.3.0-x86_64 | qcow2       | bare             | 9761280   | active |
| 5897d530-b625-4b7c-91eb-56313cf2741c | tty-linux-ramdisk   | ari         | ari              | 96629     | active |
| ad46b050-a03e-4d31-bc60-84f81806853b | tty-linux           | ami         | ami              | 25165824  | active |
| e504fcf2-fdbd-4d15-be1c-49e24dd28458 | tty-linux-kernel    | aki         | aki              | 4404752   | active |
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu-Precise      | qcow2       | ovf              | 232718336 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+

root@nova:~$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu-Precise      | ACTIVE |        |
| 1e4a8f0c-235f-46ce-9aef-fc7fa143f141 | cirros-0.3.0-x86_64 | ACTIVE |        |
| ad46b050-a03e-4d31-bc60-84f81806853b | tty-linux           | ACTIVE |        |
| e504fcf2-fdbd-4d15-be1c-49e24dd28458 | tty-linux-kernel    | ACTIVE |        |
| 5897d530-b625-4b7c-91eb-56313cf2741c | tty-linux-ramdisk   | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

Network Creation

First create two network ranges: one for internal private IPs and one for external floating IPs:
root@nova:~$ nova-manage network create private --multi_host=T --fixed_range_v4=192.168.100.0/27 --bridge=br100 --bridge_interface=eth0 --num_networks=1 --network_size=32
root@nova:~$ nova-manage floating create --ip_range=172.17.123.192/28
root@nova:~$ nova network-list
+--------------------------------------+---------+------------------+
| ID                                   | Label   | Cidr             |
+--------------------------------------+---------+------------------+
| 471a3258-1f30-458d-8476-262521597fbf | private | 192.168.100.0/27 |
+--------------------------------------+---------+------------------+
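The prefix lengths chosen above determine the pool sizes: a /27 holds 32 addresses (which is why --network_size=32 matches it) and a /28 holds 16. Quick shell arithmetic to double-check:

```shell
# Number of addresses in each CIDR block used above.
echo $(( 1 << (32 - 27) ))   # private fixed range 192.168.100.0/27 -> 32
echo $(( 1 << (32 - 28) ))   # floating range 172.17.123.192/28     -> 16
```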

root@nova:~$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

root@nova:~$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Add Keypair

root@nova:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8b:12:db:df:24:9e:31:05:da:8d:ed:7e:37:46:5f:9b root@nova
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|        .        |
|       o =       |
|    . . S +      |
|     + . +    . .|
|    o o = o  . .+|
|     . o O  . +E.|
|        + o. o . |
+-----------------+
root@nova:~$ nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
root@nova:~$ nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 8b:12:db:df:24:9e:31:05:da:8d:ed:7e:37:46:5f:9b |
+-------+-------------------------------------------------+

Boot a Virtual Machine

We use cirros-0.3.0-x86_64 to test whether a new virtual machine can be booted successfully:
root@nova:~$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
root@nova:~$ nova boot --flavor 1 --image 1e4a8f0c-235f-46ce-9aef-fc7fa143f141 --key_name mykey --security_group default vm1
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-SRV-ATTR:host                | nova                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname | nova                                                     |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000004                                        |
| OS-EXT-STS:power_state              | 0                                                        |
| OS-EXT-STS:task_state               | scheduling                                               |
| OS-EXT-STS:vm_state                 | building                                                 |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| adminPass                           | qmmhTYWn5N8K                                             |
| config_drive                        |                                                          |
| created                             | 2012-10-09T08:06:53Z                                     |
| flavor                              | m1.tiny                                                  |
| hostId                              | 09574c18e8c0a491179c061b91f64d31726f3d0c19ea4cee36ee0cc7 |
| id                                  | 5c983f6f-9d94-4f97-a6fb-1bf4a3aaa487                     |
| image                               | cirros-0.3.0-x86_64                                      |
| key_name                            | mykey                                                    |
| metadata                            | {}                                                       |
| name                                | vm1                                                      |
| progress                            | 0                                                        |
| security_groups                     | [{u'name': u'default'}]                                  |
| status                              | BUILD                                                    |
| tenant_id                           | eefa301a6a424e7da3d582649ad0e59e                         |
| updated                             | 2012-10-09T08:06:54Z                                     |
| user_id                             | fafd0583de8a4a1b93b924a6b2cb7eb5                         |
+-------------------------------------+----------------------------------------------------------+

root@nova:~$ nova list
+--------------------------------------+------+--------+-----------------------+
| ID                                   | Name | Status | Networks              |
+--------------------------------------+------+--------+-----------------------+
| 5c983f6f-9d94-4f97-a6fb-1bf4a3aaa487 | vm1  | ACTIVE | private=192.168.100.2 |
+--------------------------------------+------+--------+-----------------------+

# The network is correctly attached to br100
root@nova:~$ brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.00505682d12a       no              eth0
virbr0          8000.000000000000       yes

# The VM also shows up in libvirt
root@nova:~$ virsh list
 Id Name                 State
----------------------------------
  2 instance-00000004    running

# The console can be viewed over VNC
root@nova:~$ virsh vncdisplay 2
:0

# SSH works, too
root@nova:~$ ssh cirros@192.168.100.2
The authenticity of host '192.168.100.2 (192.168.100.2)' can't be established.
RSA key fingerprint is 36:4c:6f:9c:40:a7:9f:07:13:6a:28:67:e2:1d:08:1c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.2' (RSA) to the list of known hosts.

# Outbound connectivity also works
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=10.089 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=8.558 ms
64 bytes from 8.8.8.8: seq=2 ttl=52 time=11.982 ms
64 bytes from 8.8.8.8: seq=3 ttl=52 time=11.889 ms


# Delete the VM once verification is done!
root@nova:~$ nova delete 5c983f6f-9d94-4f97-a6fb-1bf4a3aaa487

Attach Volume

Here we test whether a volume created through Cinder can be attached to a VM:
root@nova:~$ nova list
+--------------------------------------+------+--------+-----------------------+
| ID                                   | Name | Status | Networks              |
+--------------------------------------+------+--------+-----------------------+
| 321b2521-b144-4ec4-88ac-1916ae9d8427 | vm1  | ACTIVE | private=192.168.100.2 |
+--------------------------------------+------+--------+-----------------------+

root@nova:~$ nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 6051f6a4-c507-4d39-91f7-be7214b8d326 | available | test         | 30   | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+

root@nova:~$ nova volume-attach 321b2521-b144-4ec4-88ac-1916ae9d8427 6051f6a4-c507-4d39-91f7-be7214b8d326 auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 6051f6a4-c507-4d39-91f7-be7214b8d326 |
| serverId | 321b2521-b144-4ec4-88ac-1916ae9d8427 |
| volumeId | 6051f6a4-c507-4d39-91f7-be7214b8d326 |
+----------+--------------------------------------+

root@nova:~$ nova volume-list
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| 6051f6a4-c507-4d39-91f7-be7214b8d326 | in-use | test         | 30   | None        | 321b2521-b144-4ec4-88ac-1916ae9d8427 |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+

root@nova:~$ virsh list
 Id Name                 State
----------------------------------
  4 instance-00000006    running

root@nova:~$ virsh domblklist instance-00000006
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/instance-00000006/disk
vdb        rbd/volume-6051f6a4-c507-4d39-91f7-be7214b8d326

root@nova:~$ ssh cirros@192.168.100.2
$ sudo fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065    83875364    41929650   83  Linux

Disk /dev/vdb: 32.2 GB, 32212254720 bytes
16 heads, 63 sectors/track, 62415 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

It looks like it worked. For a closer look at this VM's XML definition, the command virsh dumpxml instance-00000006 shows that a new disk was added, accessed over the rbd protocol:
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/instance-00000006/disk'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='rbd/volume-6051f6a4-c507-4d39-91f7-be7214b8d326'/>
      <target dev='vdb' bus='virtio'/>
      <serial>6051f6a4-c507-4d39-91f7-be7214b8d326</serial>
      <alias name='virtio-disk1'/>       
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>     
    </disk> 
...  
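To script this check rather than eyeball the XML, grep the dump for the rbd source line. The sketch below runs against the captured fragment above; live, you would pipe virsh dumpxml instance-00000006 into grep:

```shell
# Confirm the attached disk is served over the rbd protocol.
# "xml" is a captured line from the dump above.
xml="<source protocol='rbd' name='rbd/volume-6051f6a4-c507-4d39-91f7-be7214b8d326'/>"
printf '%s\n' "$xml" | grep -o "protocol='rbd'"   # prints: protocol='rbd'
```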


Create Ubuntu (Precise) Virtual Machine

The Ubuntu image can be downloaded here:
http://uec-images.ubuntu.com/precise/current/
Two variants are offered: one splits the kernel image from the machine image, the other packs everything into a single qcow2 file.

Split: precise-server-cloudimg-amd64-root.tar.gz
Packed: precise-server-cloudimg-amd64-disk1.img
Openstack Folsom - Installation of Glance with Ceph describes how to upload the second kind; for the first, extract the archive, upload the aki and ami separately, and then link them together. Here we assume the images have already been uploaded.
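When scripting the split upload, the kernel image's ID has to be captured so it can be referenced when registering the machine image. A sketch of extracting the ID from glance's table output, run here against a captured row of the listings in this post (live, you would pipe the real glance command into awk):

```shell
# Pull the image ID out of glance's output table.
# "sample" is a captured row; in practice: glance image-create ... | awk -F'|' ...
sample='| id               | fdc49609-6047-426c-a382-75928c0deb17 |'
img_id=$(printf '%s\n' "$sample" | awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "$img_id"
```

The captured ID can then be handed to the second upload step to link the machine image to its kernel.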

root@nova:~$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu-Precise      | ACTIVE |        |
| 1e4a8f0c-235f-46ce-9aef-fc7fa143f141 | cirros-0.3.0-x86_64 | ACTIVE |        |
| ad46b050-a03e-4d31-bc60-84f81806853b | tty-linux           | ACTIVE |        |
| e504fcf2-fdbd-4d15-be1c-49e24dd28458 | tty-linux-kernel    | ACTIVE |        |
| 5897d530-b625-4b7c-91eb-56313cf2741c | tty-linux-ramdisk   | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
root@nova:~$ nova boot --flavor 2 --image Ubuntu-Precise --key_name mykey --security_group default vm
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-SRV-ATTR:host                | nova                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname | nova                                                     |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000c                                        |
| OS-EXT-STS:power_state              | 0                                                        |
| OS-EXT-STS:task_state               | scheduling                                               |
| OS-EXT-STS:vm_state                 | building                                                 |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| adminPass                           | 5q4sWjHENyvP                                             |
| config_drive                        |                                                          |
| created                             | 2012-10-10T02:52:40Z                                     |
| flavor                              | m1.small                                                 |
| hostId                              | 09574c18e8c0a491179c061b91f64d31726f3d0c19ea4cee36ee0cc7 |
| id                                  | 019e2db9-cabe-4711-9b95-ceaefd97f22e                     |
| image                               | Ubuntu-Precise                                           |
| key_name                            | mykey                                                    |
| metadata                            | {}                                                       |
| name                                | vm                                                       |
| progress                            | 0                                                        |
| security_groups                     | [{u'name': u'default'}]                                  |
| status                              | BUILD                                                    |
| tenant_id                           | eefa301a6a424e7da3d582649ad0e59e                         |
| updated                             | 2012-10-10T02:52:40Z                                     |
| user_id                             | fafd0583de8a4a1b93b924a6b2cb7eb5                         |
+-------------------------------------+----------------------------------------------------------+
root@nova:~$ nova list
+--------------------------------------+------+--------+-----------------------+
| ID                                   | Name | Status | Networks              |
+--------------------------------------+------+--------+-----------------------+
| 019e2db9-cabe-4711-9b95-ceaefd97f22e | vm   | ACTIVE | private=192.168.100.4 |
+--------------------------------------+------+--------+-----------------------+

root@nova:~$ ssh ubuntu@192.168.100.4
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-31-virtual x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Oct 10 03:17:49 UTC 2012

  System load:  0.0               Processes:           60
  Usage of /:   3.3% of 19.67GB   Users logged in:     0
  Memory usage: 2%                IP address for eth0: 192.168.100.4
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

0 packages can be updated.
0 updates are security updates.

Get cloud support with Ubuntu Advantage Cloud Guest
  http://www.ubuntu.com/business/services/cloud
Last login: Wed Oct 10 02:55:30 2012 from 192.168.100.3
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@vm:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=52 time=9.96 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=52 time=7.63 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=52 time=5.18 ms


Note 1:

When running nova commands, you are repeatedly prompted for the keyring password; this has been reported as a bug:
https://bugs.launchpad.net/python-novaclient/+bug/1020238
http://wiki.openstack.org/KeyringSupport
If you don't want to keep entering the password, add this line to .bashrc:
alias nova='nova --no-cache'

Note 2:

After installing horizon, VNC would not come up; it turned out the vncproxy had failed to start. In /var/log/upstart/nova-novncproxy.log:
Traceback (most recent call last):
  File "/usr/bin/nova-novncproxy", line 29, in <module>
    import websockify
A quick search showed this is a known issue; simply downloading the latest package resolves the problem:
https://bugs.launchpad.net/ubuntu/+source/websockify/+bug/1060374

root@nova:~$ wget https://launchpad.net/ubuntu/+archive/primary/+files/websockify_0.2~20121002-0ubuntu1_amd64.deb
root@nova:~$ dpkg -i websockify_0.2~20121002-0ubuntu1_amd64.deb
root@nova:~$ ./nova_restart.sh

Sunday, October 7, 2012

Openstack Folsom - Installation of Glance with Ceph


Before Installation


Add repository

root@glance:~$ apt-get install -y python-software-properties
root@glance:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
root@glance:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-deps-staging
root@glance:~$ apt-get update && apt-get -y dist-upgrade


Hostname Setting

The simplest approach is to define every hostname that will be used in /etc/hosts:

172.17.123.12   rabbitmq
172.17.123.12   mysql
172.17.123.12   cinder
172.17.123.13   keystone
172.17.123.14   swift-proxy
172.17.123.16   glance
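Since every service below is addressed by these names, a quick sanity check that each one has an entry saves debugging later. The sketch below runs against a copy of the snippet above (point the grep at the real /etc/hosts in practice):

```shell
# Check that every hostname the configs reference appears in the hosts file.
# /tmp/hosts.sample is a copy of the snippet above, used here for illustration.
cat > /tmp/hosts.sample <<'EOF'
172.17.123.12   rabbitmq
172.17.123.12   mysql
172.17.123.12   cinder
172.17.123.13   keystone
172.17.123.14   swift-proxy
172.17.123.16   glance
EOF
for h in rabbitmq mysql cinder keystone swift-proxy glance; do
    grep -qw "$h" /tmp/hosts.sample && echo "$h ok" || echo "$h MISSING"
done
```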

MySQL Setting

Add a new database in MySQL. If MySQL and Glance run on the same server, remember to also grant login privileges (with a password) for localhost:
mysql> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

Remember to install mysql-client on the Glance host too, then test that the connection works:
root@glance:~$ sudo apt-get install mysql-client python-mysqldb
root@glance:~$ mysql -h 172.17.123.12 -u glance -ppassword
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 236
Server version: 5.5.24-0ubuntu0.12.04.1 (Ubuntu)

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>


Environment Variable Setting

Create a configuration file called novarc with the environment variables needed below, and source it from .bashrc so they are set automatically on the next login:
root@glance:~$ cat novarc
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://keystone:5000/v2.0/"
export SERVICE_ENDPOINT="http://keystone:35357/v2.0"
export SERVICE_TOKEN=password
root@glance:~$ source novarc
root@glance:~$ echo "source novarc">>.bashrc
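A small guard at the top of later scripts catches a forgotten source novarc before any OpenStack client call fails with an opaque auth error. A sketch, run after sourcing the file (variable names mirror novarc above):

```shell
# Fail loudly if any variable novarc should have exported is missing.
missing=0
for v in OS_TENANT_NAME OS_USERNAME OS_PASSWORD OS_AUTH_URL SERVICE_ENDPOINT SERVICE_TOKEN; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
        echo "$v is not set - did you source novarc?" >&2
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then echo "environment OK"; fi
```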


Keystone Setting

Make sure the endpoint URLs are configured correctly:
root@glance:~$ apt-get install python-keystoneclient
root@glance:~$ keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+
|                id                |   region  |                   publicurl                   |                  internalurl                  |                adminurl                |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+
| 580b71d126804c5197b91c79fd74a330 | RegionOne |           http://keystone:5000/v2.0           |           http://keystone:5000/v2.0           |       http://keystone:35357/v2.0       |
| 5ef55e38e5c54477bd659d4185d0a776 | RegionOne |             http://glance:9292/v2             |             http://glance:9292/v2             |         http://glance:9292/v2          |
| 6c788747593d475f831b6ff128bde995 | RegionOne |      http://cinder:8776/v1/$(tenant_id)s      |      http://cinder:8776/v1/$(tenant_id)s      |  http://cinder:8776/v1/$(tenant_id)s   |
| 95e16e71a8f04ac68ae401df5284ce3e | RegionOne | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s |       http://swift-proxy:8080/v1       |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+


CEPH Installation

Optional: if images will only be stored on the local filesystem or on a Swift server, there is no need to install this.
root@glance:~$ wget -q -O - https://raw.github.com/ceph/ceph/master/keys/release.asc | apt-key add -
OK
# Manually add a ceph.list file under /etc/apt/sources.list.d
root@glance:/etc/apt/sources.list.d$ cat ceph.list
deb http://ceph.newdream.net/debian/ precise main
deb-src http://ceph.newdream.net/debian/ precise main
root@glance:~$ apt-get update
root@glance:~$ apt-get install -y ceph python-ceph
root@glance:~$ dpkg -l | grep ceph
ii  ceph                                            0.48.2argonaut-1precise                     distributed storage and file system
ii  ceph-common                                     0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common                                  0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph file system
ii  ceph-fuse                                       0.48.2argonaut-1precise                     FUSE-based client for the Ceph distributed file system
ii  ceph-mds                                        0.48.2argonaut-1precise                     metadata server for the ceph distributed file system
ii  libcephfs1                                      0.48.2argonaut-1precise                     Ceph distributed file system client library
ii  python-ceph                                     0.48.2argonaut-1precise                     Python libraries for the Ceph distributed filesystem

# Once installed, copy your ceph cluster's configuration file into /etc/ceph and it should work.
# For how to set up the ceph cluster itself, see the official ceph documentation.
root@glance:~$ ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {wistor-003=172.17.123.92:6789/0,wistor-006=172.17.123.94:6789/0,wistor-007=172.17.123.95:6789/0}, election epoch 10, quorum 0,1,2 wistor-003,wistor-006,wistor-007
   osdmap e24: 23 osds: 23 up, 23 in
    pgmap v2242: 4416 pgs: 4416 active+clean; 8362 MB data, 156 GB used, 19850 GB / 21077 GB avail
   mdsmap e1: 0/0/1 up

# Remember to create a pool named images
root@glance:~$ rados mkpool images
successfully created pool images
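Glance will fail at upload time if the pool named in its config doesn't exist, so it's worth confirming the pool is visible. Live, that check is rados lspools; the sketch below greps a captured listing instead (data/metadata/rbd as the stock argonaut pools is an assumption here):

```shell
# Exact-match the pool name in a pool listing; live:
#   rados lspools | grep -x images
# "pools" simulates the listing for illustration.
pools='data
metadata
rbd
images'
printf '%s\n' "$pools" | grep -x images   # prints: images
```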

Glance Installation

Install Package

root@glance:~$ apt-get install glance
The following NEW packages will be installed:
  glance glance-api glance-common glance-registry libjs-sphinxdoc libjs-underscore libxslt1.1 libyaml-0-2
  python-amqplib python-anyjson python-boto python-dateutil python-decorator python-eventlet python-formencode
  python-gevent python-glance python-glanceclient python-greenlet python-iso8601 python-jsonschema python-keystone
  python-keystoneclient python-kombu python-lxml python-migrate python-openid python-passlib python-paste
  python-pastedeploy python-pastescript python-prettytable python-requests python-routes python-scgi python-setuptools
  python-sqlalchemy python-sqlalchemy-ext python-support python-swiftclient python-tempita python-warlock python-webob
  python-xattr python-yaml

root@glance:~$ dpkg -l | grep glance
ii  glance                                          2012.2+git201209250330~precise-0ubuntu1            OpenStack Image Registry and Delivery Service - Daemons
ii  glance-api                                      2012.2+git201209250330~precise-0ubuntu1            OpenStack Image Registry and Delivery Service - API
ii  glance-common                                   2012.2+git201209250330~precise-0ubuntu1            OpenStack Image Registry and Delivery Service - Common
ii  glance-registry                                 2012.2+git201209250330~precise-0ubuntu1            OpenStack Image Registry and Delivery Service - Registry
ii  python-glance                                   2012.2+git201209250330~precise-0ubuntu1            OpenStack Image Registry and Delivery Service - Python library
ii  python-glanceclient                             1:0.5.1.8.cdc06d9+git201210051430~precise-0ubuntu1 Client library for Openstack glance server.

root@glance:~$ rm /var/lib/glance/glance.sqlite

Configuration

glance-api.conf

  1. default_store defaults to filesystem; here we change it to rbd
  2. Change the SQL connection to MySQL
  3. Configure keystone_authtoken
  4. Set the flavor option in paste_deploy
...

# Which backend scheme should Glance use by default is not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
#default_store = filesystem
default_store = rbd

# List of which store classes and store class locations are
# currently known to glance at startup.
#known_stores = glance.store.filesystem.Store,
#               glance.store.http.Store,
#               glance.store.rbd.Store,
#               glance.store.s3.Store,
#               glance.store.swift.Store,

...

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
#sql_connection = sqlite:////var/lib/glance/glance.sqlite
sql_connection = mysql://glance:password@mysql/glance

...

[keystone_authtoken]
auth_host = keystone
auth_port = 35357
auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_tenant_name = service
admin_user = glance
admin_password = password

[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file = glance-api-paste.ini

# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor = keystone

glance-registry.conf

  1. Change the SQL connection to MySQL
  2. Configure keystone_authtoken
  3. Set the flavor option in paste_deploy
...

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
# sql_connection = sqlite:////var/lib/glance/glance.sqlite
sql_connection = mysql://glance:password@mysql/glance

...

[keystone_authtoken]
auth_host = keystone
auth_port = 35357
auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_tenant_name = service
admin_user = glance
admin_password = password

[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file = glance-registry-paste.ini

# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
flavor = keystone

Restart glance
root@glance:~$ glance-manage version_control 0
root@glance:~$ glance-manage db_sync
root@glance:~$ service glance-api restart && service glance-registry restart


Upload Image

root@glance:~$ wget http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

root@glance:~$ glance image-create --name Ubuntu-Precise --is-public true --container-format ovf --disk-format qcow2 < precise-server-cloudimg-amd64-disk1.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | c09f658c9f07f7dacdf75dd9e6610b29     |
| container_format | ovf                                  |
| created_at       | 2012-10-10T02:52:13                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | fdc49609-6047-426c-a382-75928c0deb17 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | Ubuntu-Precise                       |
| owner            | eefa301a6a424e7da3d582649ad0e59e     |
| protected        | False                                |
| size             | 232718336                            |
| status           | active                               |
| updated_at       | 2012-10-10T02:52:20                  |
+------------------+--------------------------------------+



root@glance:~$ glance image-list
+--------------------------------------+--------+-------------+------------------+------------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size       | Status |
+--------------------------------------+--------+-------------+------------------+------------+--------+
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu | qcow2       | ovf              | 232718336  | active |
+--------------------------------------+--------+-------------+------------------+------------+--------+

root@glance:~$ rbd -p images list
fdc49609-6047-426c-a382-75928c0deb17

root@glance:~$ rbd -p images info fdc49609-6047-426c-a382-75928c0deb17
rbd image 'fdc49609-6047-426c-a382-75928c0deb17':
        size 221 MB in 27 objects
        order 23 (8192 KB objects)
        block_name_prefix: rb.0.1a05.4885c754
        parent:  (pool -1)

Next, add the demo images from the OpenStack documentation: http://docs.openstack.org/trunk/openstack-compute/install/apt/content/images-verifying-install.html
root@glance:~$ wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz
root@glance:~$ tar -zxvf ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz
ttylinux-uec-amd64-12.1_2.6.35-22_1-floppy
ttylinux-uec-amd64-12.1_2.6.35-22_1.img
ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
ttylinux-uec-amd64-12.1_2.6.35-22_1-loader
ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
root@glance:~$ glance add name="tty-linux-kernel" disk_format=aki container_format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
Added new image with ID: e504fcf2-fdbd-4d15-be1c-49e24dd28458

root@glance:~$ glance add name="tty-linux-ramdisk" disk_format=ari container_format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader
Added new image with ID: 5897d530-b625-4b7c-91eb-56313cf2741c

# Remember to substitute your own kernel_id and ramdisk_id
root@glance:~$ glance add name="tty-linux" disk_format=ami container_format=ami kernel_id=e504fcf2-fdbd-4d15-be1c-49e24dd28458 ramdisk_id=5897d530-b625-4b7c-91eb-56313cf2741c < ttylinux-uec-amd64-12.1_2.6.35-22_1.img
Added new image with ID: ad46b050-a03e-4d31-bc60-84f81806853b
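Since the ami registration needs the kernel and ramdisk IDs, it can help to capture them from the CLI output instead of copy-pasting. A small sketch of parsing the "Added new image with ID:" line, shown here on a canned string (in real use you would pipe the actual glance command through it):

```shell
# Extract the image ID from the old glance CLI's output line.
# Real usage would look something like:
#   KERNEL_ID=$(glance add name="tty-linux-kernel" ... < vmlinuz | extract_id)
extract_id() {
    awk -F': ' '/Added new image with ID/ {print $2}'
}

# canned output for illustration
KERNEL_ID=$(echo "Added new image with ID: e504fcf2-fdbd-4d15-be1c-49e24dd28458" | extract_id)
echo "$KERNEL_ID"
```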

root@glance:~$ glance image-list
+--------------------------------------+-------------------+-------------+------------------+------------+--------+
| ID                                   | Name              | Disk Format | Container Format | Size       | Status |
+--------------------------------------+-------------------+-------------+------------------+------------+--------+
| 5897d530-b625-4b7c-91eb-56313cf2741c | tty-linux-ramdisk | ari         | ari              | 96629      | active |
| ad46b050-a03e-4d31-bc60-84f81806853b | tty-linux         | ami         | ami              | 25165824   | active |
| e504fcf2-fdbd-4d15-be1c-49e24dd28458 | tty-linux-kernel  | aki         | aki              | 4404752    | active |
| fdc49609-6047-426c-a382-75928c0deb17 | Ubuntu            | qcow2       | ovf              | 232718336  | active |
+--------------------------------------+-------------------+-------------+------------------+------------+--------+

Addendum, 2012/10/26:
Upload settings for other image formats
http://docs.openstack.org/developer/glance/formats.html

root@nova:~$ kvm-img convert -f qcow2 -O raw precise-cloudimg.img1 precise-cloudimg.raw
root@nova:~$ glance image-create --name Ubuntu-Precise-Raw --is-public true --container-format bare --disk-format raw < ./precise-cloudimg.raw
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | db5504e488d761ac0cf7c0e490aba85f     |
| container_format | bare                                 |
| created_at       | 2012-10-26T02:46:36                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 80f477f8-af7b-4261-9d8e-c4cdb27260e7 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | Ubuntu-Precise-Raw                   |
| owner            | eefa301a6a424e7da3d582649ad0e59e     |
| protected        | False                                |
| size             | 2147483648                           |
| status           | active                               |
| updated_at       | 2012-10-26T02:47:40                  |
+------------------+--------------------------------------+

Friday, October 5, 2012

Openstack Folsom - Swift Installation

Swift Installation


http://docs.openstack.org/developer/swift/howto_installmultinode.html#config-proxy
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/ch_installing-openstack-object-storage.html

Add Repository


First add the PPA repositories for Openstack Folsom

root@swift-proxy:~$ apt-get install -y python-software-properties
root@swift-proxy:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
root@swift-proxy:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-deps-staging
root@swift-proxy:~$ apt-get update && apt-get -y dist-upgrade

Basic Package Installation


These packages must be installed on the proxy server and on every storage server, and the hash value in swift.conf must be identical on all of them.
root@swift-proxy:~$ apt-get install swift openssh-server rsync memcached python-netifaces python-xattr python-memcache python-swiftclient
root@swift-proxy:~$ mkdir -p /etc/swift
root@swift-proxy:~$ chown -R swift:swift /etc/swift/
root@swift-proxy:~$ cat swift.conf
[swift-hash]
# random unique string that can never change (DO NOT LOSE)
# od -t x8 -N 8 -A n < /dev/random
# The above command can be used to generate a random string.
swift_hash_path_suffix = 34c486c41efd7f62
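The suffix can be generated exactly as the comment suggests. A small sketch, reading /dev/urandom instead of /dev/random so it never blocks waiting for entropy:

```shell
# Generate a 16-hex-character suffix for swift.conf.
# /dev/urandom is used here rather than /dev/random to avoid blocking.
SUFFIX=$(od -t x8 -N 8 -A n < /dev/urandom | tr -d ' \n')
echo "swift_hash_path_suffix = $SUFFIX"
```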
root@swift-storage:/etc/swift$ dpkg -l | grep swift
ii  python-swift                                    1.7.1+git201209042100~precise-0ubuntu1             distributed virtual object store - Python libraries
ii  python-swiftclient                              1:1.2.0.6.a99a37f+git201210020230~precise-0ubuntu1 Client libary for Openstack Swift API.
ii  swift                                           1.7.1+git201209042100~precise-0ubuntu1             distributed virtual object store - common files

Storage Server Installation

root@swift-storage:~$ apt-get install swift-account swift-container swift-object xfsprogs
root@swift-storage:~$ dpkg -l | grep swift
ii  swift-account                                   1.7.1+git201209042100~precise-0ubuntu1             distributed virtual object store - account server
ii  swift-container                                 1.7.1+git201209042100~precise-0ubuntu1             distributed virtual object store - container server
ii  swift-object                                    1.7.1+git201209042100~precise-0ubuntu1             distributed virtual object store - object server
Set up the partition
root@swift-storage:~$ mkfs.xfs -i size=1024 /dev/vda3
meta-data=/dev/vda3              isize=1024   agcount=4, agsize=2949120 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=11796480, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=5760, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@swift-storage:~$ echo "/dev/vda3 /srv/node/vda3 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
root@swift-storage:~$ mkdir -p /srv/node/vda3
root@swift-storage:~$ mount /srv/node/vda3
root@swift-storage:~$ chown -R swift:swift /srv/node
root@swift-storage:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vda2        3094288 1592280   1344824  55% /
udev             4080360       4   4080356   1% /dev
tmpfs            1635780     252   1635528   1% /run
none                5120       0      5120   0% /run/lock
none             4089444       0   4089444   0% /run/shm
/dev/vda3       47162880   32976  47129904   1% /srv/node/vda3
Configure rsync
# Edit /etc/rsyncd.conf; remember to change the address
root@swift:~$ cat /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.17.123.15

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

# Change RSYNC_ENABLE to true
root@swift:~$ head /etc/default/rsync
# defaults file for rsync daemon mode
# start rsync in daemon mode from init.d script?
#  only allowed values are "true", "false", and "inetd"
#  Use "inetd" if you want to start the rsyncd from inetd,
#  all this does is prevent the init.d script from printing a message
#  about not starting rsyncd (you still need to modify inetd's config yourself).
RSYNC_ENABLE=true

root@swift:~$ service rsync restart
 * Restarting rsync daemon rsync
 * rsync daemon not running, attempting to start.
   ...done.
After the install you will find three .conf files under /etc/swift: account-server.conf, container-server.conf, and object-server.conf. During setup I found it also wants an object-expirer.conf; I am not sure that file is strictly required, but adding it caused no problems. The default container-server.conf is also missing some sections; I only found out by reading container-server.conf-sample in git.

Contents of account-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 2

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

Contents of container-server.conf

Remember to add the extra container-sync section
[DEFAULT]
bind_ip = 0.0.0.0
workers = 2

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

Contents of object-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 2

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

Contents of object-expirer.conf

[DEFAULT]

[object-expirer]
interval = 300

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:cache]
use = egg:swift#memcache

[filter:catch_errors]
use = egg:swift#catch_errors

Proxy Server Installation

root@swift-proxy:~$ apt-get install swift-proxy memcached python-keystoneclient keystone
root@swift-proxy:~$ dpkg -l | grep swift
ii  python-swift                                    1.7.1+git201209042100~precise-0ubuntu1 distributed virtual object store - Python libraries
ii  swift                                           1.7.1+git201209042100~precise-0ubuntu1 distributed virtual object store - common files
ii  swift-proxy                                     1.7.1+git201209042100~precise-0ubuntu1 distributed virtual object store - proxy server
root@swift-proxy:~$ dpkg -l | grep keystone
ii  python-keystone                                 2012.2+git201209252030~precise-0ubuntu1            OpenStack identity service - Python library
ii  python-keystoneclient                           1:0.1.3.19+git201210011900~precise-0ubuntu1        Client libary for Openstack Keystone API
Generate a certificate under /etc/swift
root@swift-proxy:~$ cd /etc/swift/
root@swift-proxy:/etc/swift$ openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
Generating a 1024 bit RSA private key
.....++++++
..........................++++++
writing new private key to 'cert.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:TW
State or Province Name (full name) [Some-State]:Taiwan
Locality Name (eg, city) []:Taipei
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
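The same certificate can be generated non-interactively with -subj, which is handy when scripting the proxy setup. A sketch, using /tmp as a stand-in for /etc/swift and the same example subject fields as above:

```shell
# Non-interactive variant of the certificate generation above.
cd /tmp   # the real target directory is /etc/swift
openssl req -new -x509 -nodes -days 365 \
    -subj "/C=TW/ST=Taiwan/L=Taipei" \
    -out cert.crt -keyout cert.key
ls -l cert.crt cert.key
```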
Configure memcached
root@swift:~$ cat /etc/memcached.conf | grep -B 3 0.0.0.0
# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 172.17.123.14
root@swift:~$ service memcached restart
Next, set up the proxy-server config. In the previous post I registered swift in keystone on port 8080, so I use 8080 here as well. For the [filter:keystoneauth] and [filter:authtoken] sections I followed the sample in git; configuring them the way the official site describes did not seem to work. I also add keystone's IP to /etc/hosts first, so service_host in the config is a hostname rather than an IP.
[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
#paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_port = 5000
service_host = keystone
auth_port = 35357
auth_host = keystone
auth_protocol = http
admin_tenant_name = service
admin_user = swift
admin_password = password
signing_dir = /etc/swift

[filter:cache]
use = egg:swift#memcache
set log_name = cache
#memcache_servers = 172.17.123.14:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:ratelimit]
use = egg:swift#ratelimit
Set up the rings. My storage server's IP is 172.17.123.15 and the partition added earlier is vda3, which I assign to zone z1. To add more partitions, run the add step once per partition and plan which zone each one goes into; the following two documents are useful references.
http://cssoss.files.wordpress.com/2012/05/openstackbookv3-0_csscorp2.pdf
http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin/appendix-c-setting-op-1
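For reference, the three numbers passed to create are the partition power, the replica count, and min_part_hours. A quick sanity check that a partition power of 18 gives the 262144 partitions reported by the builder:

```shell
# swift-ring-builder <builder> create <part_power> <replicas> <min_part_hours>
PART_POWER=18
PARTITIONS=$((2 ** PART_POWER))
echo "$PARTITIONS partitions"   # matches the 262144 in the builder output
```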
root@swift:~$ cd /etc/swift/
root@swift:/etc/swift$ swift-ring-builder account.builder create 18 3 1
root@swift:/etc/swift$ swift-ring-builder container.builder create 18 3 1
root@swift:/etc/swift$ swift-ring-builder object.builder create 18 3 1
root@swift:/etc/swift$ swift-ring-builder account.builder add z1-172.17.123.15:6002/vda3 100
Device z1-172.17.123.15:6002/vda3_"" with 100.0 weight got id 0
root@swift:/etc/swift$ swift-ring-builder container.builder add z1-172.17.123.15:6001/vda3 100
Device z1-172.17.123.15:6001/vda3_"" with 100.0 weight got id 0
root@swift:/etc/swift$ swift-ring-builder object.builder add z1-172.17.123.15:6000/vda3 100
Device z1-172.17.123.15:6000/vda3_"" with 100.0 weight got id 0
root@swift:/etc/swift$ swift-ring-builder account.builder
account.builder, build version 1
262144 partitions, 3 replicas, 1 zones, 1 devices, 100.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:    id  zone      ip address  port      name weight partitions balance meta
             0     1   172.17.123.15  6002      vda3 100.00          0 -100.00
root@swift:/etc/swift$ swift-ring-builder container.builder
container.builder, build version 1
262144 partitions, 3 replicas, 1 zones, 1 devices, 100.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:    id  zone      ip address  port      name weight partitions balance meta
             0     1   172.17.123.15  6001      vda3 100.00          0 -100.00
root@swift:/etc/swift$ swift-ring-builder object.builder
object.builder, build version 1
262144 partitions, 3 replicas, 1 zones, 1 devices, 100.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:    id  zone      ip address  port      name weight partitions balance meta
             0     1   172.17.123.15  6000      vda3 100.00          0 -100.00
root@swift-proxy:/etc/swift$ swift-ring-builder account.builder rebalance
Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
root@swift-proxy:/etc/swift$ swift-ring-builder container.builder rebalance
Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
root@swift-proxy:/etc/swift$ swift-ring-builder object.builder rebalance
Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
root@swift-proxy:/etc/swift$ chown -R swift:swift /etc/swift
After the rebalance, three ring.gz files are generated. Remember to copy them to /etc/swift on every storage server, and after copying, change their owner to swift:swift.
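Distributing the rings can be scripted. A dry-run sketch that only prints the commands (the node list is hypothetical, replace it with your own storage servers, and drop the echo to actually run it):

```shell
# Dry run: print the copy/chown commands for each ring file and node.
STORAGE_NODES="172.17.123.15"   # hypothetical: list all your storage servers here
for node in $STORAGE_NODES; do
    for ring in account.ring.gz container.ring.gz object.ring.gz; do
        echo scp /etc/swift/$ring root@$node:/etc/swift/
    done
    echo ssh root@$node chown -R swift:swift /etc/swift
done
```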

Startup Service

On the proxy server:
root@swift-proxy:~$ swift-init proxy start
Starting proxy-server...(/etc/swift/proxy-server.conf)
On the storage server:
root@swift-storage:~$ swift-init all start
Starting container-updater...(/etc/swift/container-server.conf)
Starting account-auditor...(/etc/swift/account-server.conf)
Starting object-replicator...(/etc/swift/object-server.conf)
Unable to locate config for proxy-server
Starting container-replicator...(/etc/swift/container-server.conf)
Starting object-auditor...(/etc/swift/object-server.conf)
Starting object-expirer...(/etc/swift/object-expirer.conf)
Starting container-auditor...(/etc/swift/container-server.conf)
Starting container-server...(/etc/swift/container-server.conf)
Starting account-server...(/etc/swift/account-server.conf)
Starting account-reaper...(/etc/swift/account-server.conf)
Starting container-sync...(/etc/swift/container-server.conf)
Starting account-replicator...(/etc/swift/account-server.conf)
Starting object-updater...(/etc/swift/object-server.conf)
Starting object-server...(/etc/swift/object-server.conf)

Test and Verify

Create a file that sets some environment variables we will need shortly, and source it from .bashrc so it does not have to be set up again next time:
root@swift-proxy:~$ cat novarc
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://keystone:5000/v2.0/"
export SERVICE_ENDPOINT="http://keystone:35357/v2.0"
export SERVICE_TOKEN=password
root@swift-proxy:~$ source novarc
root@swift-proxy:~$ echo "source novarc">>.bashrc
Also remember to update the IPs in keystone's endpoint-list to the proxy server's IP; do not leave them as 127.0.0.1 unless everything runs on the same host.
root@swift-proxy:~$ keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+
|                id                |   region  |                   publicurl                   |                  internalurl                  |                adminurl                |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+
| 580b71d126804c5197b91c79fd74a330 | RegionOne |           http://keystone:5000/v2.0           |           http://keystone:5000/v2.0           |       http://keystone:35357/v2.0       |
| 6c788747593d475f831b6ff128bde995 | RegionOne |      http://cinder:8776/v1/$(tenant_id)s      |      http://cinder:8776/v1/$(tenant_id)s      |  http://cinder:8776/v1/$(tenant_id)s   |
| 95e16e71a8f04ac68ae401df5284ce3e | RegionOne | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s | http://swift-proxy:8080/v1/AUTH_$(tenant_id)s |       http://swift-proxy:8080/v1       |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+----------------------------------------+
Time to test:
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/verify-swift-installation.html
root@swift-proxy:~$ swift list
root@swift-proxy:~$ swift post test
root@swift-proxy:~$ swift list
test
root@swift-proxy:~$ swift upload test /etc/motd
etc/motd
root@swift-proxy:~$ swift list test
etc/motd
root@swift-proxy:~$ swift stat
   Account: AUTH_eefa301a6a424e7da3d582649ad0e59e
Containers: 1
   Objects: 1
     Bytes: 451
Accept-Ranges: bytes
X-Timestamp: 1349422137.92607
X-Trans-Id: tx251024dd19464f55b2945092b6f3678a

What is Swift?

Mirantis has an article introducing both Swift and Ceph; its overview of Swift is clear and easy to follow, making it a great first read for anyone new to the project.
http://www.mirantis.com/blog/object-storage-openstack-cloud-swift-ceph/

For something deeper, these articles explain the Ring concept and also point out Swift's weak spots:
http://julien.danjou.info/blog/2012/openstack-swift-consistency-analysis
http://www.mirantis.com/blog/under-the-hood-of-swift-the-ring/

This one is more text-heavy, so it is best read after the articles above:
http://programmerthoughts.com/openstack/swift-tech-overview/
It also looks at Swift through the lens of the CAP theorem: Swift satisfies AP (Availability + Partition tolerance) and relaxes Consistency somewhat, so you may occasionally read stale data:
Swift achieves high scalability by relaxing constraints on consistency. While swift provides read-your-writes consistency for new objects, listings and aggregate metadata (like usage information) may not be immediately accurate. Similarly, reading an object that has been overwritten with new data may return an older version of the object data. However, swift provides the ability for the client to request the most up-to-date version at the cost of request latency. 


Tuesday, October 2, 2012

Openstack Folsom - Installation of Cinder with Ceph

Openstack Folsom Release

Openstack Folsom was officially released a few days ago.
Release Software Site
http://www.openstack.org/software/folsom/
Release Note
http://wiki.openstack.org/ReleaseNotes/Folsom
Architecture
http://ken.pepple.info/openstack/2012/09/25/openstack-folsom-architecture/

This release adds two new projects, Quantum and Cinder. Quantum integrates Open vSwitch into Openstack, strengthening what used to be a weak spot in network virtualization. Cinder splits the old nova-volume out into a standalone module; I suspect the reason is that many storage vendors (e.g. Nexenta, NetApp, ...) want to integrate with Openstack, so carving this API out into its own project makes those integrations go more smoothly.

PPA of openstack testing
root@ubuntu12:~$ apt-get install -y python-software-properties
root@ubuntu12:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
root@ubuntu12:~$ add-apt-repository ppa:openstack-ubuntu-testing/folsom-deps-staging
root@ubuntu12:~$ apt-get update && apt-get -y dist-upgrade


PPA of Ubuntu Cloud
root@ubuntu12:~$ add-apt-repository ppa:ubuntu-cloud-archive/folsom-staging
You are about to add the following PPA to your system:

 More info: https://launchpad.net/~ubuntu-cloud-archive/+archive/folsom-staging
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpdzKHU_/secring.gpg' created
gpg: keyring `/tmp/tmpdzKHU_/pubring.gpg' created
gpg: requesting key 9F68104E from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpdzKHU_/trustdb.gpg: trustdb created
gpg: key 9F68104E: public key "Launchpad PPA for Ubuntu Cloud Archive Team" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
OK

Openstack's own PPA updates faster than Ubuntu Cloud's; the steps below use the Openstack PPA.

Installation

This time I start with Cinder, with the goal of integrating it with Ceph. I originally wanted to install Cinder alone, but after studying the Cinder source code I found it currently does not support noauth mode, so Keystone is required. All in all, the important components are: Cinder + Keystone + MySQL + Ceph.

First install the basic packages, MySQL and RabbitMQ.

MySQL Installation

root@ubuntu12:~$ apt-get -y install mysql-server python-mysqldb
root@ubuntu12:~$ sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
root@ubuntu12:~$ service mysql restart
root@ubuntu12:~$ mysql -u root -ppassword
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 36
Server version: 5.5.24-0ubuntu0.12.04.1 (Ubuntu)

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE cinder;
Query OK, 1 row affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
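The same three statements repeat for every service database, so they are easy to generate. A sketch that emits the SQL for a list of services (the service names and the "password" password are the example values from this post):

```shell
# Emit CREATE/GRANT statements for each service database.
make_db_sql() {
    local name=$1 pass=$2
    cat <<EOF
CREATE DATABASE $name;
GRANT ALL PRIVILEGES ON $name.* TO '$name'@'%' IDENTIFIED BY '$pass';
GRANT ALL PRIVILEGES ON $name.* TO '$name'@'localhost' IDENTIFIED BY '$pass';
EOF
}

for svc in cinder keystone; do
    make_db_sql "$svc" password
done
echo "FLUSH PRIVILEGES;"
```

To apply it, pipe the output into the client, e.g. `{ for svc in cinder keystone; do make_db_sql $svc password; done; echo "FLUSH PRIVILEGES;"; } | mysql -u root -ppassword`.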


RabbitMQ Installation

root@ubuntu12:~$ sudo apt-get -y install rabbitmq-server
root@ubuntu12:~$ rabbitmqctl change_password guest password
Changing password for user "guest" ...
...done.

Keystone Installation

root@ubuntu12:~$ apt-get -y install keystone python-keystone python-keystoneclient
root@ubuntu12:~$ dpkg -l | grep keystone
ii  keystone                                        2012.2+git201209252030~precise-0ubuntu1     OpenStack identity service - Daemons
ii  python-keystone                                 2012.2+git201209252030~precise-0ubuntu1     OpenStack identity service - Python library
ii  python-keystoneclient                           1:0.1.3.19+git201210011900~precise-0ubuntu1 Client libary for Openstack Keystone API

# Edit keystone.conf and pick a random admin_token
root@ubuntu12:~$ cat /etc/keystone/keystone.conf
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = password

# The IP address of the network interface to listen on
bind_host = 0.0.0.0

# The port number which the public service listens on
public_port = 5000

# The port number which the public admin listens on
admin_port = 35357

# The port number which the OpenStack Compute service listens on
compute_port = 8774

# === Logging Options ===
# Print debugging output
verbose = True

# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
debug = True

...

[sql]
# The SQLAlchemy connection string used to connect to the database
# connection = sqlite:////var/lib/keystone/keystone.db
connection = mysql://keystone:password@localhost:3306/keystone

# the timeout before idle sql connections are reaped
idle_timeout = 200
...

root@ubuntu12:~$ service keystone restart
root@ubuntu12:~$ keystone-manage db_sync

# Create a file that sets some environment variables we will need shortly, and source it from .bashrc so it does not have to be set up again next time
root@ubuntu12:~# cat novarc
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=password
root@ubuntu12:~$ source novarc
root@ubuntu12:~$ echo "source novarc">>.bashrc

Next, download two ready-made scripts from the web; their default token is also "password", so if you use a different token, remember to edit the scripts.
root@ubuntu12:~$ wget https://raw.github.com/EmilienM/openstack-folsom-guide/master/scripts/keystone-data.sh
root@ubuntu12:~$ wget https://raw.github.com/EmilienM/openstack-folsom-guide/master/scripts/keystone-endpoints.sh
root@ubuntu12:~$ chmod a+x *.sh
# Remember to change MASTER in keystone-endpoints.sh to your own IP
root@ubuntu12:~$ less keystone-endpoints.sh
....
# other definitions
MASTER="172.17.123.13"
....

root@ubuntu12:~$ ./keystone-data.sh
root@ubuntu12:~$ ./keystone-endpoints.sh
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Compute Service     |
|      id     | 597fff05550043efb530ab05fa85d818 |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Volume Service     |
|      id     | 27e3539fea104d159fcc7ec9766ac8b3 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|      id     | b7047156ef81464b8c6754fc7994ecea |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Storage Service     |
|      id     | 86e08436f65944be8ab0e23657a9d3e2 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|      id     | 4ec3ab138f2a4bd2800b5a4a5d407ef1 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |      OpenStack EC2 service       |
|      id     | 15bb5a5a0c3d446e8b3bfa293df3b10e |
|     name    |               ec2                |
|     type    |               ec2                |
+-------------+----------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |   OpenStack Networking service   |
|      id     | cacfce49cb2141a1ac48b3e31cef5c01 |
|     name    |             quantum              |
|     type    |             network              |
+-------------+----------------------------------+
+-------------+--------------------------------------------+
|   Property  |                   Value                    |
+-------------+--------------------------------------------+
|   adminurl  | http://172.17.123.13:8774/v2/$(tenant_id)s |
|      id     |      090f849da5a94dcb817aa340b39eb83c      |
| internalurl | http://172.17.123.13:8774/v2/$(tenant_id)s |
|  publicurl  | http://172.17.123.13:8774/v2/$(tenant_id)s |
|    region   |                 RegionOne                  |
|  service_id |      597fff05550043efb530ab05fa85d818      |
+-------------+--------------------------------------------+
+-------------+--------------------------------------------+
|   Property  |                   Value                    |
+-------------+--------------------------------------------+
|   adminurl  | http://172.17.123.13:8776/v1/$(tenant_id)s |
|      id     |      6363131b18974040a3dd1276ddc2c72e      |
| internalurl | http://172.17.123.13:8776/v1/$(tenant_id)s |
|  publicurl  | http://172.17.123.13:8776/v1/$(tenant_id)s |
|    region   |                 RegionOne                  |
|  service_id |      27e3539fea104d159fcc7ec9766ac8b3      |
+-------------+--------------------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://172.17.123.13:9292/v2   |
|      id     | 1db94d180acb4d64adab45c3116812cb |
| internalurl |   http://172.17.123.13:9292/v2   |
|  publicurl  |   http://172.17.123.13:9292/v2   |
|    region   |            RegionOne             |
|  service_id | b7047156ef81464b8c6754fc7994ecea |
+-------------+----------------------------------+
+-------------+-------------------------------------------------+
|   Property  |                      Value                      |
+-------------+-------------------------------------------------+
|   adminurl  |           http://172.17.123.13:8080/v1          |
|      id     |         b9006b5f4fc64e4d854c171ea157b7b4        |
| internalurl | http://172.17.123.13:8080/v1/AUTH_$(tenant_id)s |
|  publicurl  | http://172.17.123.13:8080/v1/AUTH_$(tenant_id)s |
|    region   |                    RegionOne                    |
|  service_id |         86e08436f65944be8ab0e23657a9d3e2        |
+-------------+-------------------------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  | http://172.17.123.13:35357/v2.0  |
|      id     | faf6ca350f4842799b3801d0b7571a59 |
| internalurl |  http://172.17.123.13:5000/v2.0  |
|  publicurl  |  http://172.17.123.13:5000/v2.0  |
|    region   |            RegionOne             |
|  service_id | 4ec3ab138f2a4bd2800b5a4a5d407ef1 |
+-------------+----------------------------------+
+-------------+------------------------------------------+
|   Property  |                  Value                   |
+-------------+------------------------------------------+
|   adminurl  | http://172.17.123.13:8773/services/Admin |
|      id     |     da745295e7ef419682d45516456047c5     |
| internalurl | http://172.17.123.13:8773/services/Cloud |
|  publicurl  | http://172.17.123.13:8773/services/Cloud |
|    region   |                RegionOne                 |
|  service_id |     15bb5a5a0c3d446e8b3bfa293df3b10e     |
+-------------+------------------------------------------+
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |    http://172.17.123.13:9696/    |
|      id     | 8b570092f8614cba98fe227de6e65e27 |
| internalurl |    http://172.17.123.13:9696/    |
|  publicurl  |    http://172.17.123.13:9696/    |
|    region   |            RegionOne             |
|  service_id | cacfce49cb2141a1ac48b3e31cef5c01 |
+-------------+----------------------------------+

Once everything is installed, verify it:

root@ubuntu12:~$ keystone endpoint-list
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
|                id                |   region  |                    publicurl                    |                   internalurl                   |                  adminurl                  |
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
| 090f849da5a94dcb817aa340b39eb83c | RegionOne |    http://172.17.123.13:8774/v2/$(tenant_id)s   |    http://172.17.123.13:8774/v2/$(tenant_id)s   | http://172.17.123.13:8774/v2/$(tenant_id)s |
| 1db94d180acb4d64adab45c3116812cb | RegionOne |           http://172.17.123.13:9292/v2          |           http://172.17.123.13:9292/v2          |        http://172.17.123.13:9292/v2        |
| 6363131b18974040a3dd1276ddc2c72e | RegionOne |    http://172.17.123.13:8776/v1/$(tenant_id)s   |    http://172.17.123.13:8776/v1/$(tenant_id)s   | http://172.17.123.13:8776/v1/$(tenant_id)s |
| 8b570092f8614cba98fe227de6e65e27 | RegionOne |            http://172.17.123.13:9696/           |            http://172.17.123.13:9696/           |         http://172.17.123.13:9696/         |
| b9006b5f4fc64e4d854c171ea157b7b4 | RegionOne | http://172.17.123.13:8080/v1/AUTH_$(tenant_id)s | http://172.17.123.13:8080/v1/AUTH_$(tenant_id)s |        http://172.17.123.13:8080/v1        |
| da745295e7ef419682d45516456047c5 | RegionOne |     http://172.17.123.13:8773/services/Cloud    |     http://172.17.123.13:8773/services/Cloud    |  http://172.17.123.13:8773/services/Admin  |
| faf6ca350f4842799b3801d0b7571a59 | RegionOne |          http://172.17.123.13:5000/v2.0         |          http://172.17.123.13:5000/v2.0         |      http://172.17.123.13:35357/v2.0       |
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
root@ubuntu12:~$ sudo apt-get install -y curl openssl
root@ubuntu12:~$ curl -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}' -H "Content-type: application/json" http://172.17.123.13:35357/v2.0/tokens | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2946    0  2844  100   102  18698    670 --:--:-- --:--:-- --:--:-- 18834
{
    "access": {
        "metadata": {
            "is_admin": 0,
            "roles": [
                "5660593decab401bbc8f13aa8b19dc23",
                "833b1617cb404466ba8546c8194f8ad6",
                "7ec43588e3a14b69ae2278334bee1423"
            ]
        },
        "serviceCatalog": [
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:8774/v2/4b310104ee3345fd988fe16dd2f1f79d",
                        "id": "090f849da5a94dcb817aa340b39eb83c",
                        "internalURL": "http://172.17.123.13:8774/v2/4b310104ee3345fd988fe16dd2f1f79d",
                        "publicURL": "http://172.17.123.13:8774/v2/4b310104ee3345fd988fe16dd2f1f79d",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "nova",
                "type": "compute"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:9696/",
                        "id": "8b570092f8614cba98fe227de6e65e27",
                        "internalURL": "http://172.17.123.13:9696/",
                        "publicURL": "http://172.17.123.13:9696/",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "quantum",
                "type": "network"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:9292/v2",
                        "id": "1db94d180acb4d64adab45c3116812cb",
                        "internalURL": "http://172.17.123.13:9292/v2",
                        "publicURL": "http://172.17.123.13:9292/v2",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "glance",
                "type": "image"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:8776/v1/4b310104ee3345fd988fe16dd2f1f79d",
                        "id": "6363131b18974040a3dd1276ddc2c72e",
                        "internalURL": "http://172.17.123.13:8776/v1/4b310104ee3345fd988fe16dd2f1f79d",
                        "publicURL": "http://172.17.123.13:8776/v1/4b310104ee3345fd988fe16dd2f1f79d",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "cinder",
                "type": "volume"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:8773/services/Admin",
                        "id": "da745295e7ef419682d45516456047c5",
                        "internalURL": "http://172.17.123.13:8773/services/Cloud",
                        "publicURL": "http://172.17.123.13:8773/services/Cloud",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "ec2",
                "type": "ec2"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:8080/v1",
                        "id": "b9006b5f4fc64e4d854c171ea157b7b4",
                        "internalURL": "http://172.17.123.13:8080/v1/AUTH_4b310104ee3345fd988fe16dd2f1f79d",
                        "publicURL": "http://172.17.123.13:8080/v1/AUTH_4b310104ee3345fd988fe16dd2f1f79d",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "swift",
                "type": "object-store"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://172.17.123.13:35357/v2.0",
                        "id": "faf6ca350f4842799b3801d0b7571a59",
                        "internalURL": "http://172.17.123.13:5000/v2.0",
                        "publicURL": "http://172.17.123.13:5000/v2.0",
                        "region": "RegionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "keystone",
                "type": "identity"
            }
        ],
        "token": {
            "expires": "2012-10-03T06:55:20Z",
            "id": "5f2797bdf8fd4380bfe919a05b01772e",
            "tenant": {
                "description": null,
                "enabled": true,
                "id": "4b310104ee3345fd988fe16dd2f1f79d",
                "name": "admin"
            }
        },
        "user": {
            "id": "c7f17dfd798242fc9065afd2ea251a6d",
            "name": "admin",
            "roles": [
                {
                    "name": "admin"
                },
                {
                    "name": "KeystoneAdmin"
                },
                {
                    "name": "KeystoneServiceAdmin"
                }
            ],
            "roles_links": [],
            "username": "admin"
        }
    }
}
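For scripting, the token id can be pulled out of the response instead of copying it by eye. A minimal sketch, assuming the curl output above was saved to a file named token.json (the filename, and the use of python3 rather than the python2 used elsewhere in this post, are my own choices):

```shell
# Pull access.token.id out of a Keystone v2.0 token response so it can
# be reused as the X-Auth-Token header in later API calls.
# token.json is assumed to hold the JSON printed by the curl call above.
TOKEN=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["access"]["token"]["id"])' < token.json)
echo "$TOKEN"
```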


CEPH Installation

root@ubuntu12:~$ wget -q -O - https://raw.github.com/ceph/ceph/master/keys/release.asc | apt-key add -
OK
# Manually add a ceph.list under /etc/apt/sources.list.d
root@ubuntu12:/etc/apt/sources.list.d$ cat ceph.list
deb http://ceph.newdream.net/debian/ precise main
deb-src http://ceph.newdream.net/debian/ precise main
root@ubuntu12:~$ apt-get update
root@ubuntu12:~$ apt-get install -y ceph python-ceph
root@ubuntu12:~$ dpkg -l | grep ceph
ii  ceph                                            0.48.2argonaut-1precise                     distributed storage and file system
ii  ceph-common                                     0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common                                  0.48.2argonaut-1precise                     common utilities to mount and interact with a ceph file system
ii  ceph-fuse                                       0.48.2argonaut-1precise                     FUSE-based client for the Ceph distributed file system
ii  ceph-mds                                        0.48.2argonaut-1precise                     metadata server for the ceph distributed file system
ii  libcephfs1                                      0.48.2argonaut-1precise                     Ceph distributed file system client library
ii  python-ceph                                     0.48.2argonaut-1precise                     Python libraries for the Ceph distributed filesystem

# Once installed, copy your ceph cluster's config files into /etc/ceph and it should work as-is.
# For how to install a ceph cluster itself, see the official ceph documentation.
root@ubuntu12:~$ ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {wistor-003=172.17.123.92:6789/0,wistor-006=172.17.123.94:6789/0,wistor-007=172.17.123.95:6789/0}, election epoch 10, quorum 0,1,2 wistor-003,wistor-006,wistor-007
   osdmap e24: 23 osds: 23 up, 23 in
    pgmap v2242: 4416 pgs: 4416 active+clean; 8362 MB data, 156 GB used, 19850 GB / 21077 GB avail
   mdsmap e1: 0/0/1 up

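Besides copying ceph.conf, cinder needs a pool to put volumes in and, if cephx auth is enabled, a key it is allowed to use. A hedged sketch of the usual steps: the pool name "volumes", client name "client.volumes", pg count 128 and capability string follow the ceph/OpenStack guide linked above, not anything shown in this deployment.

```shell
# Create a dedicated pool for cinder volumes.
ceph osd pool create volumes 128
# Create a cephx key limited to that pool and save it where the
# cinder host can read it.
ceph auth get-or-create client.volumes \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes' \
    -o /etc/ceph/ceph.client.volumes.keyring
```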

If you only want to integrate with a plain iSCSI server, see the official docs below; roughly, you hand a block device (e.g. /dev/sdb) over to LVM, and remember to create a volume group named cinder-volumes.
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html
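The LVM route in one sketch (/dev/sdb is a placeholder for whatever spare disk you have; the volume group must be named cinder-volumes to match the volume_group setting in cinder.conf):

```shell
# Register the raw disk with LVM and create the volume group that
# cinder's default LVM/iSCSI driver expects.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vgs cinder-volumes    # confirm the group exists
```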

Cinder Installation

root@ubuntu12:~$ apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms python-cinderclient tgt
root@ubuntu12:~$ dpkg -l | grep cinder
ii  cinder-api                                      2012.2+git201209252100~precise-0ubuntu1            Cinder storage service - api server
ii  cinder-common                                   2012.2+git201209252100~precise-0ubuntu1            Cinder starage service - common files
ii  cinder-scheduler                                2012.2+git201209252100~precise-0ubuntu1            Cinder storage service - api server
ii  cinder-volume                                   2012.2+git201209252100~precise-0ubuntu1            Cinder storage service - api server
ii  python-cinder                                   2012.2+git201209252100~precise-0ubuntu1            Cinder python libraries
ii  python-cinderclient                             1:0.2.26+git201209201100~precise-0ubuntu1          python bindings to the OpenStack Volume API

Edit /etc/cinder/api_paste.ini:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
# modify these three lines
admin_tenant_name = service
admin_user = cinder
admin_password = password

Edit /etc/cinder/cinder.conf to use MySQL and the RBD driver, and update the rabbitmq password as well:
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
rabbit_password = password
sql_connection = mysql://cinder:password@localhost:3306/cinder
volume_driver=cinder.volume.driver.RBDDriver
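The intro notes that cinder needs the glance host's IP so it can fetch image templates; in Folsom that is the glance_host option. For RBD, the pool and cephx options below are also commonly set in the same [DEFAULT] section. The values shown (pool "volumes", user "volumes") are assumptions following the ceph/OpenStack guide, not taken from this deployment; glance_host reuses the controller IP used throughout this post.

```
glance_host = 172.17.123.13
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <your-libvirt-secret-uuid>
```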

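Before cinder-manage db sync can do anything, the database that sql_connection points at has to exist. A minimal sketch; the user and password must match the sql_connection line above:

```shell
# Create the cinder database and grant access to the cinder user.
mysql -u root -p <<'EOF'
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EOF
```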

root@ubuntu12:~$ cinder-manage db sync
root@ubuntu12:~$ service cinder-api restart
root@ubuntu12:~$ service cinder-scheduler restart
root@ubuntu12:~$ service cinder-volume restart
root@ubuntu12:~$ cinder create --display_name test 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|      created_at     |      2012-10-02T07:14:34.815546      |
| display_description |                 None                 |
|     display_name    |                 test                 |
|          id         | e7e83b13-761e-40e3-8b4c-415126404e40 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@ubuntu12:~$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| e7e83b13-761e-40e3-8b4c-415126404e40 | available |     test     |  1   |     None    |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
root@ubuntu12:~$ rbd list
volume-e7e83b13-761e-40e3-8b4c-415126404e40
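With the volume visible in both cinder and rbd, booting from it is a single nova boot call. A hedged sketch only: the flavor and instance name are placeholders, some novaclient releases spell the flag --block-device-mapping with dashes instead of underscores, and depending on the build you may still need to pass --image as well. The volume id is the one created above.

```shell
# Boot an instance whose root disk (vda) is the RBD-backed volume.
# "m1.tiny" and "test-bfv" are placeholders for your own flavor/name.
nova boot --flavor m1.tiny \
    --block_device_mapping vda=e7e83b13-761e-40e3-8b4c-415126404e40:::0 \
    test-bfv
```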


Reference

Folsom Announcement
http://lists.openstack.org/pipermail/openstack-announce/2012-September/000035.html
Folsom: How it was made
http://blog.bitergia.com/2012/09/27/how-the-new-release-of-openstack-was-built/

Cinder Installation Document
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html
https://github.com/EmilienM/openstack-folsom-guide/blob/master/doc/out/pdf/openstack-folsom-guide.pdf
Cinder Developer document
http://docs.openstack.org/developer/cinder/
Cinder Source Code
https://github.com/openstack/cinder.git
https://github.com/openstack/python-cinderclient

The Top 3 New Swift Features in OpenStack Folsom
http://swiftstack.com/blog/2012/09/27/top-three-swift-features-in-openstack-folsom/