Friday, September 28, 2018

LXD Increase ZFS loop Storage

Today I needed to grow a loop device backing a ZFS pool that was hosting several LXC containers managed via LXD.

I discovered that LXD doesn't let you grow a loop-backed ZFS pool directly, but you can do it with:

sudo truncate -s +XXXG /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=on lxd
sudo zpool online -e lxd /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=off lxd
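The truncate -s +XXXG step simply extends the sparse backing file in place; a quick local sketch of that behaviour (demo.img is a throwaway file standing in for the real <POOL>.img, not an actual pool):

```shell
# demo.img is a hypothetical scratch file, not a real LXD pool image.
truncate -s 1G demo.img      # create a 1 GiB sparse file
truncate -s +2G demo.img     # grow it in place by another 2 GiB
stat -c %s demo.img          # apparent size is now 3221225472 bytes (3 GiB)
```

The file stays sparse, so no real disk space is consumed until ZFS writes into the new region; the zpool online -e step is what makes the pool actually claim it.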

Monday, January 23, 2017

Windows Image Creation from any Operating System

Requirements: VirtualBox, a Windows ISO, the VirtIO drivers ISO, and qemu-img for the final conversion.

Procedure:

Boot a VirtualBox VM from the Windows ISO:
Qcow disk type
40GB root disk
Load your Windows ISO into the primary CD drive
Add a secondary CD drive and attach the VirtIO ISO to it

Proceed with the Windows installation
Load the VirtIO drivers from the attached ISO during the installation
Enable RDP and a user account
ref http://www.andreamonguzzi.it/windows-server-2012-installare-e-configurare-il-ruolo-rds/
Once the image is spawned, get into the machine, enable RDP and create a user account with admin privileges.

Install Cloudbase-Init
http://www.cloudbase.it/downloads/CloudbaseInitSetup_Beta.msi

Overwrite the default configuration file at C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf with the following:

[DEFAULT]
username=Admin
groups=Administrators
inject_user_password=true
plugins=cloudbaseinit.plugins.windows.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.windows.createuser.CreateUserPlugin,cloudbaseinit.plugins.windows.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.windows.sshpublickeys.SetUserSSHPublicKeysPlugin,cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin,cloudbaseinit.plugins.windows.userdata.UserDataPlugin
network_adapter=
config_drive_raw_hhd=true
config_drive_cdrom=true
verbose=true
logdir=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\
logfile=cloudbase-init.log

Disable the Windows Firewall, or set up rules for just the services you want to allow for finer control (this was only a rough test).

Apply customization to image
Install packages, add users, modify configurations, etc.
Run Windows Update
Run Sysprep:
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Convert disk image into qcow2
Sorry, for this step we need to hop onto Ubuntu (or any box with qemu-img). VirtualBox only supports Qcow images, not Qcow2, so we'll use qemu-img to convert the image to Qcow2 for use with OpenStack, as below:
qemu-img convert -f qcow -O qcow2 windows.qcow windows.qcow2
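A quick way to sanity-check the result: qcow2 files start with the four magic bytes "QFI\xfb". The sketch below fabricates a header just to show the check (fake_header.bin is hypothetical); on the real image you would run the same head | od pipeline against windows.qcow2.

```shell
# qcow2 images begin with the magic bytes 51 46 49 fb ("QFI" + 0xfb).
# fake_header.bin is a stand-in for windows.qcow2, just to demo the check.
printf 'QFI\373' > fake_header.bin
head -c 4 fake_header.bin | od -An -tx1    # shows: 51 46 49 fb
```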
Reboot the VM
Now import the qcow2 image in glance
# glance image-create --name windows --is-public=true --disk-format=qcow2 --container-format=bare --file (location of the qcow2 image that you want to import into glance)


Please comment with your experience.

reference:
http://docs.openstack.org/image-guide/windows-image.html
https://maestropandy.wordpress.com/2014/12/05/create-a-windows-openstack-vm-with-virtualbox/


Alex Barchiesi

Tuesday, October 18, 2016

root access a VM without root password

So here is how to get into your VMs without knowing the root password or having the SSH key to reach them.
Basically we are going to mount a qemu disk image and make the changes we need.
In order to mount a QEMU / KVM disk image you use qemu-nbd, which lets you share the disk image over the network via the NBD protocol.

sudo modprobe nbd max_part=8
sudo qemu-nbd -c /dev/nbd0 /var/lib/libvirt/images/img_name.qcow2
sudo mkdir -p /mnt/kvm
sudo mount /dev/nbd0p1 /mnt/kvm

Do whatever changes you need (e.g. to get root access on Ubuntu, edit the /etc/ssh/sshd_config file and put the appropriate hash of the password in /etc/shadow)
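For the /etc/shadow edit you need a crypt(3) hash of the new password; one way to generate a SHA-512 one (assuming a reasonably recent openssl; 'MyNewRootPass' is just a placeholder):

```shell
# Produce a SHA-512 crypt hash suitable for the second field of /etc/shadow.
# 'MyNewRootPass' is a placeholder password, replace it with your own.
openssl passwd -6 'MyNewRootPass'
```

The output looks like $6$<salt>$<hash>; paste the whole string into the root entry of the mounted image's /etc/shadow.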

sudo umount /mnt/kvm
sudo qemu-nbd -d /dev/nbd0


That's it.
Hope this helps.
Alex Barchiesi

Thursday, July 28, 2016

LXC manual migration on ZFS

Quick notes on migrating an LXC container to a ZFS filesystem.
I'll use a ZFS pool called vd_lxc_container as the example.

Basically, if <LXC_name> is the non-ZFS LXC and <LXC_ZFS> is the ZFS version, here are the commands to issue:

lxc-stop -n <LXC_name>
mv /var/lib/lxc/<LXC_name> /var/lib/lxc/<LXC_name>_OLD
lxc-copy -B zfs -n <LXC_name>_OLD -N <LXC_ZFS>
zfs list
If needed, change the mountpoint and rename the dataset (<LXC_damigrare>, Italian for "to be migrated", is the original <LXC_name>):
zfs set mountpoint=/var/lib/lxc/<LXC_damigrare>/rootfs vd_lxc_container/<LXC_ZFS>
zfs rename vd_lxc_container/<LXC_ZFS> vd_lxc_container/<LXC_damigrare>
(zfs list)
Modify the config file to match names and MAC address if needed
lxc-start -n <LXC_ZFS>
lxc-destroy -s -n <LXC_name>_OLD

best Alex 

Tuesday, July 19, 2016

how to make LXC forward traffic towards the real network again in Ubuntu 16

We recently hit a problem when migrating our LXC infrastructure from Ubuntu 14 to Ubuntu 16:
the overall networking from outside (MASQUERADE and DNAT) suddenly stopped working...

Apparently the difference is in the host machine:
- Ubuntu 14 has the bridge netfilter module loaded in the kernel by default, with (check with sysctl -a):
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

In this case we used to forward the traffic from and to the bridges where our LXC were attached, and to masquerade the IPs when needed.

- Ubuntu 16 does not (even if you create the bridges and set iptables to forward the bridge traffic), unless you add the following iptables rule:
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
(you can check the sysctls before and after with sysctl -a | grep bridge)

This way we got the same behaviour as with Ubuntu 14 (well... more or less, you may need to trim the forwarding table a bit).
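An alternative we did not use in production, sketched here untested: instead of the physdev rule, restore the Ubuntu 14 behaviour by loading the br_netfilter module (bridge netfilter was split out of the bridge module in newer kernels) and turning the sysctls back on:

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

After a reboot (or modprobe br_netfilter followed by sysctl --system) the bridged traffic traverses iptables again, as it did on Ubuntu 14.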

hope this will help... it took quite a while to figure this out
ciao
Alex 

how to use an LXC to set up a MAAS region controller

First a word of warning: this feature is experimental. As far as I know it should be stable, but there are known security concerns, specifically with mounting ext2/3/4 volumes. 
It should only be enabled in trusted environments where potentially malicious users do not have shell access to your system.

An LXC container will not have the device nodes needed for mounting (/dev/fuse for fuse and some block device for ext4, e.g. /dev/loop0) and will not be permitted to mount by AppArmor.
This howto shows how to create an LXC config which runs the container without AppArmor confinement and allows you to mount the devices.

Requirements
In order to use this feature you will need a 4.4.0-6.21 or later kernel on Ubuntu xenial. To follow these instructions you will also need lxc installed on the host machine.

Setup:
in the HOST 
You need to flip the module parameters to enable user namespace mounts for ext4.
$ echo Y | sudo tee /sys/module/ext4/parameters/userns_mounts
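Still on the host, the LXC config mentioned above has to drop AppArmor confinement and allow the container the device nodes. A sketch of the additions to /var/lib/lxc/<name>/config (the major/minor numbers are the standard Linux ones for loop and fuse devices; adjust to your LXC version):

```
# run the container without AppArmor confinement (required for the mounts)
lxc.aa_profile = unconfined
# allow loop block devices (major 7)
lxc.cgroup.devices.allow = b 7:* rwm
# allow /dev/loop-control (c 10:237) and /dev/fuse (c 10:229)
lxc.cgroup.devices.allow = c 10:237 rwm
lxc.cgroup.devices.allow = c 10:229 rwm
```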

In the LXC 
add the following to the /etc/rc.local 
for i in `seq 0 7`; do /bin/mknod /dev/loop$i b 7 $i; done
to create the needed loop devices (MAAS needs them to manage the tftp images)

run it or reboot the LXC 
$ /etc/rc.local 

check for the /dev/loopN devices and you are done

Test

$ dd if=/dev/zero of=ext4.img bs=1M count=8
$ mkfs.ext4 ext4.img
$ sudo losetup /dev/loop0 ext4.img
 
$ mkdir -p mount
$ sudo mount /dev/loop0 mount
$ df

This filesystem can be unmounted in the usual way.

Now you have to follow the MAAS install guide and you are free to move the MAAS LXC wherever you want. 



Friday, April 1, 2016

pxe install Kali Linux with a Broadcom non-free driver

Today I needed to install Kali Linux through PXE on a server with a Broadcom NIC.
Kali is a powerful Debian-based penetration-testing suite (see kali.org for info) and by default uses only free repositories.
During boot the installer complains about missing non-free firmware when trying to load the proper NIC firmware (take note there of the name of the firmware it wants).

Find the proper .deb online (in my case: firmware-bnx2x_20160110-1_all.deb) containing the firmware you noted before (bnx2x-e2-7.12.30.0.fw),
download it and check that it actually contains what you need with:

#dpkg -c firmware-bnx2x_20160110-1_all.deb 
[...]
-rw-r--r-- root/root    321320 2016-01-10 22:35 ./lib/firmware/bnx2x/bnx2x-e2-7.12.30.0.fw
[...]

then put the .deb in a firmware directory and pack it as an SVR4 cpio archive (the Debian installer looks for firmware .debs in a top-level firmware/ directory of the initrd):
#mkdir /tmp/firmware ; cp firmware-bnx2x_20160110-1_all.deb /tmp/firmware/ ; cd /tmp
#pax -x sv4cpio -w firmware | gzip -c >firmware.cpio.gz

Now copy firmware.cpio.gz into your tftp_root_dir/kali_install/amd64, where the initrd.gz file lives, and brutally append it to your initrd:

#cp  initrd.gz initrd.gz.orig; cat initrd.gz.orig firmware.cpio.gz > initrd.gz
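The reason the brutal cat works: both files are gzip streams, and gzip (like the kernel's initramfs loader) decompresses concatenated members as one stream. A quick local sketch with throwaway files:

```shell
# Two separate gzip members, glued together, decompress as a single stream,
# just like initrd.gz.orig + firmware.cpio.gz do at boot.
printf 'first ' | gzip > a.gz
printf 'second' | gzip > b.gz
cat a.gz b.gz > joined.gz
zcat joined.gz    # prints: first second
```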

works like a charm.

now boot and enjoy your PXE-installed Kali farm
alex barchiesi