This topology works fine with bzr 984.

Separate nova-network onto its own box

I'll write up instructions for this one later.

It is similar to running the compute boxes on a different network than the corporate network, letting nova-network (with 2 NICs) do the routing between the instances and the corporate network.





----------  Specs  ----------
Nova_Host Network>>>>>>192.168.1.0/24 gw 192.168.1.2
Nova_instance Network>>192.168.2.0/24 gw 192.168.2.1
FlatDHCP mode
OpenStack trunk version 2011.2~gamma2~bzr984-0ubu
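
If you are reproducing this setup, the 192.168.2.0/24 fixed range also has to exist in the nova database. A hedged sketch of how it is typically created with nova-manage (the positional arguments here are fixed_range, num_networks and network_size; the exact syntax has changed between Nova releases, so check the nova-manage documentation for your version):

<pre>
# Run on the host that can reach the nova database (Ubuntu1)
nova-manage network create 192.168.2.0/24 1 256
</pre>
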
----------Nova.conf----------
Ubuntu1
--lock_path=/var/lock/nova
--sql_connection=mysql://root:nova@192.168.1.1/nova
--s3_host=192.168.1.1
--rabbit_host=192.168.1.1
--cc_host=192.168.1.1
--ec2_url=http://192.168.1.1:8773/services/Cloud
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--FAKE_subdomain=ec2
--ca_path=/var/lib/nova/CA
--keys_path=/var/lib/nova/keys
--networks_path=/var/lib/nova/networks
--instances_path=/var/lib/nova/instances
--images_path=/var/lib/nova/images
--buckets_path=/var/lib/nova/buckets
--libvirt_type=kvm
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--logdir=/var/log/nova
--verbose
--fixed_range=192.168.2.0/24
--network_size=256

Ubuntu2
--lock_path=/var/lock/nova
--sql_connection=mysql://root:nova@192.168.1.1/nova
--s3_host=192.168.1.1
--rabbit_host=192.168.1.1
--cc_host=192.168.1.1
--ec2_url=http://192.168.1.1:8773/services/Cloud
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--FAKE_subdomain=ec2
--ca_path=/var/lib/nova/CA
--keys_path=/var/lib/nova/keys
--networks_path=/var/lib/nova/networks
--instances_path=/var/lib/nova/instances
--images_path=/var/lib/nova/images
--buckets_path=/var/lib/nova/buckets
--libvirt_type=kvm
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--logdir=/var/log/nova
--verbose
--fixed_range=192.168.2.0/24
--network_size=256
--flat_network_dhcp_start=192.168.2.2
--my_ip=192.168.1.1


Ubuntu3, Ubuntu4

--lock_path=/var/lock/nova
--sql_connection=mysql://root:nova@192.168.1.1/nova
--s3_host=192.168.1.1
--rabbit_host=192.168.1.1
--cc_host=192.168.1.1
--ec2_url=http://192.168.1.1:8773/services/Cloud
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--FAKE_subdomain=ec2
--ca_path=/var/lib/nova/CA
--keys_path=/var/lib/nova/keys
--networks_path=/var/lib/nova/networks
--instances_path=/var/lib/nova/instances
--images_path=/var/lib/nova/images
--buckets_path=/var/lib/nova/buckets
--libvirt_type=kvm
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--logdir=/var/log/nova
--verbose
--fixed_range=192.168.2.0/24
--network_size=256


----Ubuntu2 iptables----

# Generated by iptables-save v1.4.4 on Thu Apr 14 15:02:32 2011
*nat
:PREROUTING ACCEPT [121884:11255605]
:OUTPUT ACCEPT [4269:809308]
:POSTROUTING ACCEPT [2296:138248]
:nova-network-OUTPUT - [0:0]
:nova-network-POSTROUTING - [0:0]
:nova-network-PREROUTING - [0:0]
:nova-network-floating-snat - [0:0]
:nova-network-snat - [0:0]
:nova-postrouting-bottom - [0:0]
-A PREROUTING -j nova-network-PREROUTING
-A OUTPUT -j nova-network-OUTPUT
-A POSTROUTING -j nova-network-POSTROUTING
-A POSTROUTING -j nova-postrouting-bottom
-A POSTROUTING -o eth0 -j MASQUERADE
-A nova-network-POSTROUTING -s 192.168.2.0/24 -d 10.128.0.0/24 -j ACCEPT
-A nova-network-POSTROUTING -s 192.168.2.0/24 -d 192.168.2.0/24 -j ACCEPT
-A nova-network-POSTROUTING -s 192.168.2.0/24 -d 192.168.1.0/24 -j ACCEPT
-A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.1:8773
-A nova-network-snat -j nova-network-floating-snat
-A nova-network-snat -s 192.168.2.0/24 -j SNAT --to-source 172.16.3.130
-A nova-postrouting-bottom -j nova-network-snat
COMMIT
# Completed on Thu Apr 14 15:02:32 2011
# Generated by iptables-save v1.4.4 on Thu Apr 14 15:02:32 2011
*filter
:INPUT ACCEPT [4273059:373682815]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8174584:661722353]
:nova-filter-top - [0:0]
:nova-network-FORWARD - [0:0]
:nova-network-INPUT - [0:0]
:nova-network-OUTPUT - [0:0]
:nova-network-local - [0:0]
-A INPUT -j nova-network-INPUT
-A FORWARD -j nova-filter-top
-A FORWARD -j nova-network-FORWARD
-A OUTPUT -j nova-filter-top
-A OUTPUT -j nova-network-OUTPUT
-A nova-filter-top -j nova-network-local
-A nova-network-FORWARD -i br100 -j ACCEPT
-A nova-network-FORWARD -o br100 -j ACCEPT
COMMIT
# Completed on Thu Apr 14 15:02:32 2011


------Ubuntu2 route------

<pre>
Kernel IP routing table
Destination   Gateway      Genmask          Flags Metric Ref Use Iface
192.168.2.0   0.0.0.0      255.255.255.0    U     0      0   0   br100
172.16.3.0    0.0.0.0      255.255.255.0    U     0      0   0   eth0
192.168.1.0   0.0.0.0      255.255.255.0    U     0      0   0   br100
0.0.0.0       172.16.3.1   0.0.0.0          UG    100    0   0   eth0
</pre>

-----Ubuntu2 ifconfig-----
<pre>
br100     Link encap:Ethernet  HWaddr 20:cf:30:e7:45:0b
          inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::e025:73ff:fe06:e91f/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:4341303 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8377274 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:380988249 (380.9 MB)  TX bytes:980900540 (980.9 MB)

eth0      Link encap:Ethernet  HWaddr 00:80:c8:4e:d0:13
          inet addr:172.16.3.130  Bcast:172.16.3.255  Mask:255.255.255.0
          inet6 addr: fe80::280:c8ff:fe4e:d013/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2826328 errors:0 dropped:0 overruns:0 frame:0
          TX packets:139844 errors:2 dropped:0 overruns:2 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:366428402 (366.4 MB)  TX bytes:10468948 (10.4 MB)
          Interrupt:17 Base address:0xec00

eth1      Link encap:Ethernet  HWaddr 20:cf:30:e7:45:0b
          inet6 addr: fe80::22cf:30ff:fee7:450b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4343014 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8383004 errors:0 dropped:0 overruns:0 carrier:1
          collisions:0 txqueuelen:1000
          RX bytes:442027225 (442.0 MB)  TX bytes:981335684 (981.3 MB)
          Interrupt:44

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:336 (336.0 B)  TX bytes:336 (336.0 B)
</pre>

-----Ubuntu2 ip addr show-----
<pre>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:80:c8:4e:d0:13 brd ff:ff:ff:ff:ff:ff
    inet 172.16.3.130/24 brd 172.16.3.255 scope global eth0
    inet6 fe80::280:c8ff:fe4e:d013/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 20:cf:30:e7:45:0b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::22cf:30ff:fee7:450b/64 scope link
       valid_lft forever preferred_lft forever
4: br100: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 20:cf:30:e7:45:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global br100
    inet 192.168.1.2/24 brd 192.168.1.255 scope global br100
    inet6 fe80::e025:73ff:fe06:e91f/64 scope link
       valid_lft forever preferred_lft forever
</pre>

-----Result-----

Two instances, one from a TTY image and one from a UEC image:
<pre>
RESERVATION     r-s4goud8f      hugopro default
INSTANCE        i-00000014      ami-508455f7    192.168.2.5     192.168.2.5     running hugo (hugopro, ubuntu3) 0               m1.tiny 2011-04-14T02:49:54Z    nova
RESERVATION     r-bzxyus3n      hugopro default
INSTANCE        i-00000015      ami-18b939f8    192.168.2.2     192.168.2.2     running hugo (hugopro, ubuntu3) 0               m1.tiny 2011-04-14T03:15:44Z    nova
</pre>

Ping test
<pre>
root@ubuntu1:~# ping 192.168.2.5
PING 192.168.2.5 (192.168.2.5) 56(84) bytes of data.
64 bytes from 192.168.2.5: icmp_req=1 ttl=63 time=0.403 ms
From 192.168.1.2: icmp_seq=2 Redirect Host(New nexthop: 192.168.2.5)
64 bytes from 192.168.2.5: icmp_req=2 ttl=63 time=0.316 ms
^C
--- 192.168.2.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.316/0.359/0.403/0.047 ms
root@ubuntu1:~# ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_req=1 ttl=63 time=0.337 ms
From 192.168.1.2: icmp_seq=2 Redirect Host(New nexthop: 192.168.2.2)
64 bytes from 192.168.2.2: icmp_req=2 ttl=63 time=0.272 ms
^C
--- 192.168.2.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.272/0.304/0.337/0.036 ms
</pre>

h3. UEC image test

<pre>
#euca-get-console-output
init: plymouth-splash main process (276) terminated with status 2
init: plymouth main process (49) killed by SEGV signal
cloud-init start-local running: Thu, 14 Apr 2011 03:15:53 +0000. up 3.08 seconds
no instance data found in start-local
init: cloud-init-local main process (253) terminated with status 1
cloud-init start running: Thu, 14 Apr 2011 03:15:53 +0000. up 3.55 seconds
found data source: DataSourceEc2
Generating locales...
  en_US.UTF-8... done
Generation complete.
mountall: Event failed
init: plymouth-log main process (441) terminated with status 1
2011-04-14 03:16:02,153 - cc_mounts.py[WARNING]: Failed to enable swap
 * Starting AppArmor profiles       2011-04-14 03:16:02,171 - cc_mounts.py[WARNING]: 'mount -a' failed
Generating public/private rsa key pair.
                                                                         [ OK ]
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
d5:1b:76:e0:34:ae:5e:a0:33:64:05:25:f1:f1:e2:34 root@i-00000015
The key's randomart image is:
+--[ RSA 2048]----+
|        +++ +    |
|         + B o   |
|        o E B .  |
|       o = * +   |
|        S o o    |
|         + .     |
|          .      |
|                 |
|                 |
+-----------------+
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
77:57:b3:ea:d4:ad:4e:0d:dc:a8:7e:de:1e:68:a7:a5 root@i-00000015
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|               ..|
|             . +o|
|        S . . =..|
|         . . o+o.|
|            .=.=o|
|           .+.*o.|
|            .E+.o|
+-----------------+
ec2:
ec2: #############################################################
ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
ec2: 2048 d5:1b:76:e0:34:ae:5e:a0:33:64:05:25:f1:f1:e2:34 /etc/ssh/ssh_host_rsa_key.pub (RSA)
ec2: 1024 77:57:b3:ea:d4:ad:4e:0d:dc:a8:7e:de:1e:68:a7:a5 /etc/ssh/ssh_host_dsa_key.pub (DSA)
ec2: -----END SSH HOST KEY FINGERPRINTS-----
ec2: #############################################################
landscape-client is not configured, please run landscape-config.
</pre>
<pre>
#SSH
root@ubuntu1:~# ssh -i hugo.priv root@192.168.2.2
Linux i-00000015 2.6.35-28-virtual #49-Ubuntu SMP Tue Mar 1 15:12:28 UTC 2011 x86_64 GNU/Linux
Ubuntu 10.10

Welcome to Ubuntu!
 * Documentation:  https://help.ubuntu.com/

  System information as of Thu Apr 14 07:10:27 UTC 2011

  System load:  0.0               Processes:           64
  Usage of /:   39.4% of 1.35GB   Users logged in:     0
  Memory usage: 12%               IP address for eth0: 192.168.2.2
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/
---------------------------------------------------------------------
At the moment, only the core of the system is installed. To tune the
system to your needs, you can choose to install one or more
predefined collections of software by running the following
command:

   sudo tasksel --section server
---------------------------------------------------------------------

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@i-00000015:~#
</pre>
<pre>
#ping google.com
root@i-00000015:~# ping google.com
PING google.com (72.14.203.104) 56(84) bytes of data.
64 bytes from tx-in-f104.1e100.net (72.14.203.104): icmp_req=1 ttl=53 time=30.4 ms
64 bytes from tx-in-f104.1e100.net (72.14.203.104): icmp_req=2 ttl=53 time=71.3 ms
64 bytes from tx-in-f104.1e100.net (72.14.203.104): icmp_req=3 ttl=53 time=21.8 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 21.834/41.194/71.345/21.605 ms
</pre>

h3. TTY image test

<pre>
#euca-get-console-output
</pre>

h3. Comments

  1. Hi, Hugo.
    Can I ask a question here?

    My settings are very similar to yours, except that the network node is on the same box as the cloud controller (the nova API services).

    I added PREROUTING rules on the compute node, and I can get metadata from the compute node with the following command.

    # GET http://169.254.169.254/
    1.0
    2007-01-19
    2007-03-01
    2007-08-29
    2007-10-10
    2007-12-15
    2008-02-01
    2008-09-01
    2009-04-04

    Inside the instance, however, the metadata server cannot be reached; I can see the following errors in the instance console log.

    cloud-init running: Tue, 19 Apr 2011 07:09:37 +0000. up 82.22 seconds
    waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
    07:09:37 [ 1/100]: url error [[Errno 101] Network is unreachable]
    07:09:38 [ 2/100]: url error [[Errno 101] Network is unreachable]
    07:09:39 [ 3/100]: url error [[Errno 101] Network is unreachable]
    07:09:40 [ 4/100]: url error [[Errno 101] Network is unreachable]
    07:09:41 [ 5/100]: url error [[Errno 101] Network is unreachable]


    Did I miss something?
    Looking forward to your help.

  2. Is your instance network in the same segment as the controller?

    Could you please post your console output on pastebin?

    1. Make sure the instance has already got its IP.
    2. Make sure the instance's network segment can reach the host's network.

    We need to check the iptables and the console output.

  3. By the way, which network mode are you using?

  4. I'm using VlanManager.
    Here's the iptables-save result on the compute node.

    ---------------------------------------------------
    # Generated by iptables-save v1.4.4 on Tue Apr 19 20:27:13 2011
    *nat
    :PREROUTING ACCEPT [229:43562]
    :POSTROUTING ACCEPT [10:622]
    :OUTPUT ACCEPT [11:693]
    :nova-compute-OUTPUT - [0:0]
    :nova-compute-POSTROUTING - [0:0]
    :nova-compute-PREROUTING - [0:0]
    :nova-compute-floating-snat - [0:0]
    :nova-compute-snat - [0:0]
    :nova-postrouting-bottom - [0:0]
    -A PREROUTING -j nova-compute-PREROUTING
    -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 121.166.195.54:8773
    -A POSTROUTING -j nova-compute-POSTROUTING
    -A POSTROUTING -j nova-postrouting-bottom
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
    -A OUTPUT -j nova-compute-OUTPUT
    -A nova-compute-snat -j nova-compute-floating-snat
    -A nova-postrouting-bottom -j nova-compute-snat
    COMMIT
    # Completed on Tue Apr 19 20:27:13 2011
    # Generated by iptables-save v1.4.4 on Tue Apr 19 20:27:13 2011
    *filter
    :INPUT ACCEPT [475311:42792648]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [928238:74704384]
    :nova-compute-FORWARD - [0:0]
    :nova-compute-INPUT - [0:0]
    :nova-compute-OUTPUT - [0:0]
    :nova-compute-inst-18 - [0:0]
    :nova-compute-local - [0:0]
    :nova-compute-sg-fallback - [0:0]
    :nova-filter-top - [0:0]
    -A INPUT -j nova-compute-INPUT
    -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
    -A FORWARD -j nova-filter-top
    -A FORWARD -j nova-compute-FORWARD
    -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
    -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
    -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A OUTPUT -j nova-filter-top
    -A OUTPUT -j nova-compute-OUTPUT
    -A nova-compute-FORWARD -i br100 -j ACCEPT
    -A nova-compute-FORWARD -o br100 -j ACCEPT
    -A nova-compute-inst-18 -m state --state INVALID -j DROP
    -A nova-compute-inst-18 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A nova-compute-inst-18 -s 10.0.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
    -A nova-compute-inst-18 -s 10.0.0.0/26 -j ACCEPT
    -A nova-compute-inst-18 -j nova-compute-sg-fallback
    -A nova-compute-local -d 10.0.0.3/32 -j nova-compute-inst-18
    -A nova-compute-sg-fallback -j DROP
    -A nova-filter-top -j nova-compute-local
    COMMIT
    # Completed on Tue Apr 19 20:27:13 2011

    -----------------------------------------------------

  5. And here's iptables-save on the node running the other services:


    ----------------------------------------------------

    # Generated by iptables-save v1.4.4 on Tue Apr 19 20:34:29 2011
    *nat
    :PREROUTING ACCEPT [292:51082]
    :POSTROUTING ACCEPT [28:1727]
    :OUTPUT ACCEPT [28:1727]
    :nova-network-OUTPUT - [0:0]
    :nova-network-POSTROUTING - [0:0]
    :nova-network-PREROUTING - [0:0]
    :nova-network-floating-snat - [0:0]
    :nova-network-snat - [0:0]
    :nova-postrouting-bottom - [0:0]
    -A PREROUTING -j nova-network-PREROUTING
    -A POSTROUTING -j nova-network-POSTROUTING
    -A POSTROUTING -j nova-postrouting-bottom
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
    -A OUTPUT -j nova-network-OUTPUT
    -A nova-network-POSTROUTING -s 10.0.0.0/8 -d 10.128.0.0/24 -j ACCEPT
    -A nova-network-POSTROUTING -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
    -A nova-network-PREROUTING -d 121.166.195.54/32 -p udp -m udp --dport 1000 -j DNAT --to-destination 10.0.0.2:1194
    -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 121.166.195.54:8773
    -A nova-network-snat -j nova-network-floating-snat
    -A nova-network-snat -s 10.0.0.0/8 -j SNAT --to-source 121.166.195.54
    -A nova-postrouting-bottom -j nova-network-snat
    COMMIT
    # Completed on Tue Apr 19 20:34:29 2011
    # Generated by iptables-save v1.4.4 on Tue Apr 19 20:34:29 2011
    *filter
    :INPUT ACCEPT [4007025:290609944]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [3517486:254895457]
    :nova-filter-top - [0:0]
    :nova-network-FORWARD - [0:0]
    :nova-network-INPUT - [0:0]
    :nova-network-OUTPUT - [0:0]
    :nova-network-local - [0:0]
    -A INPUT -j nova-network-INPUT
    -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
    -A FORWARD -j nova-filter-top
    -A FORWARD -j nova-network-FORWARD
    -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
    -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
    -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A OUTPUT -j nova-filter-top
    -A OUTPUT -j nova-network-OUTPUT
    -A nova-filter-top -j nova-network-local
    -A nova-network-FORWARD -i br100 -j ACCEPT
    -A nova-network-FORWARD -o br100 -j ACCEPT
    -A nova-network-FORWARD -d 10.0.0.2/32 -p udp -m udp --dport 1194 -j ACCEPT
    COMMIT
    # Completed on Tue Apr 19 20:34:29 2011

  6. The console output is too long.
    The error starts here. If you want me to check something else from the console output, let me know.

    ----------------------------------------------


    init: plymouth main process (48) killed by SEGV signal
    init: plymouth-splash main process (296) terminated with status 2
    cloud-init running: Tue, 19 Apr 2011 07:09:37 +0000. up 82.22 seconds
    waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
    07:09:37 [ 1/100]: url error [[Errno 101] Network is unreachable]
    07:09:38 [ 2/100]: url error [[Errno 101] Network is unreachable]
    07:09:39 [ 3/100]: url error [[Errno 101] Network is unreachable]
    07:09:40 [ 4/100]: url error [[Errno 101] Network is unreachable]
    07:09:41 [ 5/100]: url error [[Errno 101] Network is unreachable]
    07:09:42 [ 6/100]: url error [[Errno 101] Network is unreachable]
    07:09:44 [ 7/100]: url error [[Errno 101] Network is unreachable]
    07:09:47 [ 8/100]: url error [[Errno 101] Network is unreachable]
    07:09:49 [ 9/100]: url error [[Errno 101] Network is unreachable]
    07:09:51 [10/100]: url error [[Errno 101] Network is unreachable]
    07:09:53 [11/100]: url error [[Errno 101] Network is unreachable]
    07:09:56 [12/100]: url error [[Errno 101] Network is unreachable]
    07:09:59 [13/100]: url error [[Errno 101] Network is unreachable]
    07:10:02 [14/100]: url error [[Errno 101] Network is unreachable]
    07:10:05 [15/100]: url error [[Errno 101] Network is unreachable]
    07:10:08 [16/100]: url error [[Errno 101] Network is unreachable]
    07:10:12 [17/100]: url error [[Errno 101] Network is unreachable]
    07:10:16 [18/100]: url error [[Errno 101] Network is unreachable]

    giving up on md after 1051 seconds
    Could not find data source
    Failed to get instance data
    init: cloud-init main process (351) terminated with status 1
    mountall: Event failed
    mountall: Plymouth command failed
    mountall: Plymouth command failed
    mountall: Plymouth command failed
    mountall: Plymouth command failed
    mountall: Disconnected from Plymouth
    init: plymouth-log main process (394) terminated with status 1
    * Starting AppArmor profiles [ OK ]
    Traceback (most recent call last):
    Traceback (most recent call last):
    File "/usr/bin/cloud-init-cfg", line 56, in
    main()
    File "/usr/bin/cloud-init-cfg", line 43, in main
    cc = cloudinit.CloudConfig.CloudConfig(cfg_path)
    File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig.py", line 42, in __init__
    File "/usr/bin/cloud-init-cfg", line 56, in
    main()
    File "/usr/bin/cloud-init-cfg", line 43, in main
    cc = cloudinit.CloudConfig.CloudConfig(cfg_path)
    File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig.py", line 42, in __init__
    self.cfg = self.get_config_obj(cfgfile)
    File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig.py", line 53, in get_config_obj
    f=file(cfgfile)
    IOError: [Errno 2] No such file or directory: '/var/lib/cloud/data/cloud-config.txt'
    self.cfg = self.get_config_obj(cfgfile)
    File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig.py", line 53, in get_config_obj
    f=file(cfgfile)
    IOError: [Errno 2] No such file or directory: '/var/lib/cloud/data/cloud-config.txt'

  7. Here's the console output.

    http://pastebin.com/QGmjQhaz

  8. I got "network unreachable" before, because my dnsmasq service could not assign an IP to the instance.

    But yours is VLAN mode; I have not dug into that.

    I am sure, though, that the problem is in the network.

    You can check from the 10.0.0.x network whether curl against 169.254.169.254 works (see the sketch after this list).

    My guesses:
    1. The instance did not get an IP from nova-network.
    2. The instance network cannot reach 121.166.195.54.
    3. Would you like to try FlatDHCP?
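
    A quick check, assuming you can get a shell on a machine (or instance) in the 10.0.0.x network; 169.254.169.254 is the standard EC2 metadata address that cloud-init polls:

    <pre>
    # Should return the list of metadata API versions if the DNAT to nova-api works
    curl -s http://169.254.169.254/

    # Should return this instance's id
    curl -s http://169.254.169.254/2009-04-04/meta-data/instance-id
    </pre>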

  9. Hello 선현문,
    My recommendation is to jump to the nova Q&A and the IRC channel #openstack on freenode.

    I'd like to know the solution too; I believe VLAN mode will be used in my lab.

    https://answers.launchpad.net/nova/+questions

  10. Hi Hugo,
    Thanks for the quick response.

    I found that other people have run into the same problem, so I have reported a bug for this issue.
    https://bugs.launchpad.net/nova/+bug/766697
    This may be the fastest way to resolve the problem.

  11. Quick question.
    From which node did you ping or ssh to the instance?

    On the compute node there is no route to the instance IP.
    Here's the route result on the compute node.

    ------------------------------------------------------
    Destination  Gateway          Genmask        Flags Metric Ref Use Iface
    localnet     *                255.255.255.0  U     0      0   0   eth0
    link-local   *                255.255.0.0    U     1000   0   0   eth0
    default      121.166.195.254  0.0.0.0        UG    100    0   0   eth0

  12. Usually from the controller node.
    But you can ssh to an instance from any other host that can route to the instance.

    You can check the interfaces on the controller node:
    #ip addr show
    You'll see that the first IP of the instance network is already bound to br100.

    If you want to ssh to an instance from the compute node, add a route on the compute node to the instance network, with the "nova-network host" as the gateway.

    For example, on the controller node (see the sketch below):
    1. Enable IPv4 forwarding.
    2. Add the iptables POSTROUTING rules between the two networks.
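
    A minimal sketch, using the addresses from my own FlatDHCP topology above (192.168.1.2 is the nova-network host, 192.168.2.0/24 is the instance network); adapt the addresses to your VLAN ranges:

    <pre>
    # On the controller / nova-network host: let it forward between the two networks
    sysctl -w net.ipv4.ip_forward=1

    # On the compute node (or any other host that should reach the instances):
    # send the instance network through the nova-network host
    route add -net 192.168.2.0/24 gw 192.168.1.2
    </pre>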

  13. Hi, Hugo.
    It was because of one missing flag:

    --flat_interface=eth0

    After adding the flag, it works fine in FlatDHCP mode.
    I'll move on to VLAN.
    Thanks for all your help.

  14. Hugo,

    I have nearly the same setup (see https://answers.launchpad.net/nova/+question/154362). I can SSH to VMs only from the nova-network machine.

    I noticed one difference in our iptables-save output; yours includes the following lines that mine does not:

    -A POSTROUTING -o eth0 -j MASQUERADE

    and

    -A nova-network-POSTROUTING -s 192.168.2.0/24 -d 192.168.1.0/24 -j ACCEPT

    I am new to iptables. Would these make the difference?
    Thanks for a great posting.
    Cheers,
    Graham

  15. GrahamH,

    Yes, that's the key point.
    iptables is very important here for routing and network management.

    Explanation:
    1. -A POSTROUTING -o eth0 -j MASQUERADE
    eth0 is the external NIC on my nova-network host, and all outbound traffic must have a way out; this rule masquerades traffic from every interface so it can leave through eth0.

    2. -A nova-network-POSTROUTING -s 192.168.2.0/24 -d 192.168.1.0/24 -j ACCEPT
    (I think this is where your problem is; you can focus there.)
    My Nova management network is 192.168.1.0/24 and the instance network is 192.168.2.0/24.
    Both networks sit on the nova-network host, which is why no further rule is needed for the instances to talk to the management network.

    For example, assume an instance with IP 192.168.2.10. nova-network binds 192.168.2.1 to its host as the DHCP listen address, so the nova-network host is part of the instance's network segment; you can verify this with #ip addr show.
    But the other hosts are not on 192.168.2.0/24, so how can they talk to the instance? We need a little extra work. The key point is to let the other hosts reach 192.168.2.0/24:
    * Route the other hosts through the nova-network host.
    * Let 192.168.2.0/24 and 192.168.1.0/24 route to each other through the nova-network host, which is the only place that sits on both networks.
    A sketch of the two rules is shown below.
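
    A minimal sketch of the two rules, assuming the same addresses as in my topology above; note that nova-network normally creates the nova-network-POSTROUTING chain itself, so these commands only illustrate the intent:

    <pre>
    # On the nova-network host: masquerade everything leaving the external NIC
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    # On the nova-network host: do not rewrite traffic between the instance
    # network and the management network
    iptables -t nat -I nova-network-POSTROUTING -s 192.168.2.0/24 -d 192.168.1.0/24 -j ACCEPT
    </pre>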


    Cheers
    Hugo Kuo

  16. Hi, Hugo.

    I have a question about the top illustration.
    In the layout of the illustration, which range is assumed to be the DMZ (demilitarized zone)?
    Does the firewall only need to sit between Ubuntu2 and the Internet?

    Best Regards,
    Nakamiti

  17. Hello, Nakamiti.

    The DMZ is 192.168.1.0/24.
    "Internet" here is really my corporate network, 172.16.0.0/16.
    The firewall sits on Ubuntu2; to be clear, Ubuntu2 is the firewall between the corporate network and OpenStack Nova.
    It's for internal testing only for now.

    Hope that helps.

  18. Hi, Hugo.

    Thanks for the answer.
    I want to know the assumptions that the illustration makes.
    For example, are Ubuntu1 (API, MQ, DB...) and Ubuntu3/Ubuntu4 (compute nodes) placed on the local network only, with no access admitted from the external network, and so on?
    Please tell me if you have an idea.

    Best Regards,
    Nakamiti

  19. And do you want your instances to be reachable from the external network?
    In this illustration, it's possible.
    How would you access Ubuntu3 or Ubuntu4?
    You would access Ubuntu2 first and then jump to Ubuntu3 or Ubuntu4 from Ubuntu2.
    Does that match what you want?

    Cheers
    Hugo Kuo

  20. Hi, Hugo.

    Your answer matches exactly what I wanted.
    The intent of my question was about security.
    I understand that this system would be managed roughly as follows:
    A general user who is not a cloud administrator can access only Ubuntu2, and reaches the VM instances on Ubuntu3 or Ubuntu4 through it; on the other hand, only a cloud administrator can access the physical servers (Ubuntu1, Ubuntu3 and Ubuntu4).

    Thanks!
    Nakamiti

  21. Hello, Nakamiti.
    1. How would a normal user access Ubuntu2?
    There is no OS account for an end user; we don't need to create an OS user for a cloud consumer. Nova's account info lives in the MySQL DB.

    2. Nova user management is based on RBAC. You can assign a role to an account; search for RBAC on openstack.org.

    Conclusion:
    The OS system administrator can access any physical machine.
    An OS admin account != a cloud admin account.
    An end user cannot access any physical machine unless that user has an OS account, so the setup stays secure.
    I can understand why the user management feels confusing. Read more and test more, and you'll get to know the policy. It's important, too.

    Cheers
    Hugo Kuo

  22. Hi, Hugo.

    > 1. How would a normal user access Ubuntu2?
    That is not what I meant by "the general user can access only Ubuntu2...".
    I wanted to say that, at a minimum, a general user must be able to send packets to Ubuntu2 (ping accepted, but ssh not accepted) in order to reach the VM instances launched on the compute nodes (Ubuntu3 or Ubuntu4).

    By the way, I am indeed confused about the OpenStack roles versus OS accounts in the user management...

    Thanks,
    Nakamiti

  23. In the illustration, Ubuntu2 and the instances are pingable.

    You need to set an additional iptables rule on Ubuntu2 to redirect a general user's nova-api requests to Ubuntu1 (see the sketch below).
    Once you do that, and the user downloads their personal OpenStack credentials, the account can use the cloud IaaS from anywhere.
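
    A possible rule, as a sketch only: it assumes Ubuntu2's corporate-facing address 172.16.3.130 from the ifconfig output above and the default EC2 API port 8773; adjust both to your own setup.

    <pre>
    # On Ubuntu2: forward incoming EC2 API requests to the API server on Ubuntu1
    iptables -t nat -A PREROUTING -d 172.16.3.130 -p tcp --dport 8773 \
      -j DNAT --to-destination 192.168.1.1:8773
    </pre>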

    RBAC
    http://nova.openstack.org/runnova/managing.users.html

