Atomic vagrant box doesn't work with private_network #258

Open
gtirloni opened this issue Mar 20, 2017 · 2 comments

@gtirloni

Vagrantfile created by vagrant init centos/atomic-host, with a private network enabled:

Vagrant.configure("2") do |config|
  config.vm.box = "centos/atomic-host"
  config.vm.network "private_network", ip: "192.168.33.10"
end

Result:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'centos/atomic-host'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'centos/atomic-host' is up to date...
==> default: Setting the name of the VM: centos-atomic_default_1490027763982_65539
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: No guest additions were detected on the base box for this VM! Guest
    default: additions are required for forwarded ports, shared folders, host only
    default: networking, and more. If SSH fails on this machine, please install
    default: the guest additions and repackage the box to continue.
    default: 
    default: This is not an error message; everything may continue to work properly,
    default: in which case you may ignore this message.
==> default: Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'enp0s8'
# Move new config into place
mv -f '/tmp/vagrant-network-entry-enp0s8-1490027783-0' '/etc/sysconfig/network-scripts/ifcfg-enp0s8'
# attempt to force network manager to reload configurations
nmcli c reload || true

# Restart network
service network restart


Stdout from the command:

Restarting network (via systemctl):  [FAILED]


Stderr from the command:

usage: ifdown <configuration>
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

Network interfaces:

$ vagrant ssh -c "ifconfig -a"
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:34:82:4d:79  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::2390:d464:7556:7596  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:bd:c2:7a  txqueuelen 1000  (Ethernet)
        RX packets 420  bytes 53125 (51.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 297  bytes 50859 (49.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.33.10  netmask 255.255.255.0  broadcast 192.168.33.255
        inet6 fe80::a00:27ff:feab:9c44  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:ab:9c:44  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2196 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 74  bytes 6324 (6.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 74  bytes 6324 (6.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Error messages:

Mar 20 16:37:59 localhost.localdomain systemd[1]: Starting LSB: Bring up/down networking...
-- Subject: Unit network.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit network.service has begun starting up.
Mar 20 16:37:59 localhost.localdomain network[12956]: Bringing up loopback interface:  [  OK  ]
Mar 20 16:37:59 localhost.localdomain network[12956]: Bringing up interface enp0s8:  [  OK  ]
Mar 20 16:38:00 localhost.localdomain NetworkManager[712]: <info>  [1490027880.1176] audit: op="connection-activate" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" result="fail" reason="No suitable device found for thi
Mar 20 16:38:00 localhost.localdomain network[12956]: Bringing up interface eth0:  Error: Connection activation failed: No suitable device found for this connection.
Mar 20 16:38:00 localhost.localdomain network[12956]: [FAILED]
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain network[12956]: RTNETLINK answers: File exists
Mar 20 16:38:00 localhost.localdomain systemd[1]: network.service: control process exited, code=exited status=1
Mar 20 16:38:00 localhost.localdomain systemd[1]: Failed to start LSB: Bring up/down networking.

Vagrant Box

$ vagrant box update
==> default: Checking for updates to 'centos/atomic-host'
    default: Latest installed version: 7.20170131
    default: Version constraints: 
    default: Provider: virtualbox
==> default: Box 'centos/atomic-host' (v7.20170131) is running the latest version.

@gtirloni (Author)

I'm creating this here, but if there's a chance this is actually a bug in Vagrant itself, feel free to close it.

@alexsorkin

This issue has gone unattended by the RedHat/CentOS teams for a long time.
The stock CentOS Vagrant box ships with the following ifcfg file, which prevents network.service from being restarted even WITHOUT a private network configured:

-bash-4.2# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"

-bash-4.2# systemctl restart network
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
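
One quick way to confirm the mismatch inside the guest (a minimal check, not from the original report; the interface names come from the ifconfig output above) is to compare the DEVICE named in the ifcfg file with the NICs that actually exist:

grep ^DEVICE /etc/sysconfig/network-scripts/ifcfg-eth0   # ifcfg-eth0 names DEVICE="eth0"
ip -o link show                                          # but no eth0 exists, only enp0s3, enp0s8 (plus lo and docker0)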

Workaround (not very flexible): rebuild and repackage the stock box, adding:

config.vm.provision "shell", privileged: true,
  inline: "[[ -s /etc/sysconfig/network-scripts/ifcfg-eth0 ]] && rm -rf /etc/sysconfig/network-scripts/ifcfg-eth0"

  • Follow the usual box packaging steps (e.g. deploying the insecure Vagrant SSH key, etc.), then reference your repackaged box as the source in your local Vagrantfiles, as sketched below.
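
A sketch of that repackaging flow (box and file names here are placeholders, assuming a throwaway Vagrantfile that contains only the box name and the shell provisioner above, without the private_network line):

vagrant up                                                # provisioner removes the stale ifcfg-eth0
vagrant package --output atomic-host-fixed.box            # export the cleaned VM
vagrant box add atomic-host-fixed atomic-host-fixed.box   # register it locally
# local Vagrantfiles can then set config.vm.box = "atomic-host-fixed"
# and add back the private_network line from the original report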
