Monday, March 11, 2013

oVirt Virtualization

oVirt, the first truly open and comprehensive data center virtualization management initiative, provides a venue for user and developer cooperation. The heart of the project is oVirt's open source code, and the community is governed openly, modeled after the Apache Foundation, Eclipse, LVM, and many other well-functioning open source communities.


http://resources.ovirt.org/releases/stable/tools/ovirt-live-1.0.iso  # oVirt Management (oVirt Live)


http://resources.ovirt.org/releases/stable/iso/ovirt-node-iso-2.6.1-20120228.fc18.iso   # Hypervisor OS (oVirt Node)

You’ll need the following…

Minimum hardware

4 GB memory
20 GB disk space

Optional 

Network storage

Software
Mozilla Firefox 17 or later
Internet Explorer 9 or later

Boot the management server from the oVirt Live ISO (ovirt-live-1.0.iso), then set the hostname and prepare the NFS exports:

#vi /etc/hostname
ovirt.virt.com

#mkdir /ovirt-demo

#mkdir /ovirt-demo/iso

#mkdir /ovirt-demo/data

#chown -R 36:36 /ovirt-demo/*

#vi /etc/sysconfig/nfs   (add the following line at the end)
NFS4_SUPPORT="no"

#vi /etc/exports
/ovirt-demo/iso    0.0.0.0/0.0.0.0(rw)   # ISO domain (installation images)
/ovirt-demo/data   0.0.0.0/0.0.0.0(rw)   # data domain (virtual machine disks)

#chkconfig nfs on; service nfs start

#reboot
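
After the reboot, a quick sanity check confirms that both directories are actually exported before you move on to the Engine; a minimal sketch, run on the same server:

#exportfs -v                 # lists the exported directories and their options
#showmount -e localhost      # shows what an NFS client would see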


Connect to oVirt Engine

Now that you have installed the oVirt Engine and hosts, you can log in to the Engine administration portal to start configuring your virtualization environment.

Log In to Administration Portal

Ensure that you have the administrator password that you configured during installation, as instructed in Example 2.1, “oVirt Engine installation”.
To connect to the oVirt web administration portal
  1. Open a browser and navigate to https://ovirt.virt.com/webadmin. Substitute ovirt.virt.com with the URL of your own oVirt Engine, as provided during installation.
  2. If this is your first time connecting to the administration portal, oVirt Engine will issue security certificates for your browser. Click the link labelled this certificate to trust the ca.cer certificate. A pop-up displays; click Open to launch the Certificate dialog, then click Install Certificate and choose to place the certificate in the Trusted Root Certification Authorities store.
  3. The portal login screen displays. Enter admin as your User Name, and enter the Password that you provided during installation. Ensure that your domain is set to Internal. Click Login.
You have now successfully logged in to the oVirt web administration portal. Here, you can configure and manage all your virtual resources. The functions of the oVirt Engine graphical user interface are described in the following figure and list:
Figure 2.4. Administration Portal Features
  1. Header: This bar contains the name of the logged-in user, the sign-out button, and the option to configure user roles.
  2. Navigation Pane: This pane allows you to navigate between the Tree, Bookmarks and Tags tabs. In the Tree tab, tree mode allows you to see the entire system tree and provides a visual representation of your virtualization environment's architecture.
  3. Resources Tabs: These tabs allow you to access the resources of oVirt. You should already have a Default Data Center, a Default Cluster, a Host waiting to be approved, and available Storage waiting to be attached to the data center.
  4. Results List: When you select a tab, this list displays the available resources. You can perform a task on an individual item or multiple items by selecting the item(s) and then clicking the relevant action button. If an action is not possible, the button is disabled.
  5. Details Pane: When you select a resource, this pane displays its details in several subtabs. These subtabs also contain action buttons which you can use to make changes to the selected resource.
Once you are familiar with the layout of the administration portal, you can start configuring your virtual environment.

Configure oVirt

Now that you have logged in to the administration portal, configure your oVirt environment by defining the data center, host cluster, networks and storage. Even though this guide makes use of the default resources configured during installation, if you are setting up an oVirt environment with completely new components, you should perform the configuration procedure in the sequence given here.

Configure Data Centers

A data center is a logical entity that defines the set of physical and logical resources used in a managed virtual environment. Think of it as a container which houses clusters of hosts, virtual machines, storage and networks.
By default, oVirt creates a data center at installation. Its type is configured from the installation script. To access it, navigate to the Tree pane, click Expand All, and select the Default data center. On the Data Centers tab, the Default data center displays.
Figure 3.2. Data Centers Tab
The Default data center is used for this document, however if you wish to create a new data center see the oVirt Administration Guide.

Configure Cluster

A cluster is a set of physical hosts that are treated as a resource pool for a set of virtual machines. Hosts in a cluster share the same network infrastructure, the same storage and the same type of CPU. They constitute a migration domain within which virtual machines can be moved from host to host. By default, oVirt creates a cluster at installation. To access it, navigate to the Tree pane, click Expand All and select the Default cluster. On the Clusters tab, the Default cluster displays.
Figure 3.4. Clusters Tab
For this document, the oVirt Node and Fedora hosts will be attached to the Default host cluster. If you wish to create new clusters, or live migrate virtual machines between hosts in a cluster, see the oVirt Evaluation Guide.

Configure Networking

At installation, oVirt defines a Management network for the default data center. This network is used for communication between the manager and the host. New logical networks - for example for guest data, storage or display - can be added to enhance network speed and performance. All networks used by hosts and clusters must be added to the data center they belong to.
To access the Management network, click on the Clusters tab and select the default cluster. Click the Logical Networks tab in the Details pane. The ovirtmgmt network displays.
Figure 3.6. Logical Networks Tab
The ovirtmgmt Management network is used for this document, however if you wish to create new logical networks see the oVirt Administration Guide.

Configure Hosts

You have already installed your oVirt Node and Fedora hosts, but before they can be used, they have to be added to the Engine. The oVirt Node is specifically designed for the oVirt platform, so it only needs a simple click of approval. Conversely, Fedora is a general-purpose operating system, so configuring it as a host requires additional steps.

Approve oVirt Node Host

The Hypervisor you installed in Section 2.2.1, “Install oVirt Node” is automatically registered with the oVirt platform. It displays in the oVirt Engine, and needs to be approved for use.
To set up an oVirt Node host
On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts tab, select the name of your newly installed hypervisor.
Figure 3.8. oVirt Node pending approval
Click the Approve button. The Edit and Approve Host dialog displays. Accept the defaults or make changes as necessary, then click OK.
Figure 3.9. Approve oVirt Node
The host status will change from Non Operational to Up.

Attach Fedora Host

In contrast to the hypervisor host, the Fedora host you installed in Section 2.2.2, “Install Fedora Host” is not automatically detected. It has to be manually attached to the oVirt platform before it can be used.
To attach a Fedora host
1. On the Tree pane, click Expand All and select Hosts under the Default cluster. On the Hosts tab, click New.
2. The New Host dialog displays.
Figure 3.10. Attach Fedora Host
Enter the details in the following fields:
  • Data Center: the data center to which the host belongs. Select the Default data center.
  • Host Cluster: the cluster to which the host belongs. Select the Default cluster.
  • Name: a descriptive name for the host.
  • Address: the IP address, or resolvable hostname of the host, which was provided during installation.
  • Root Password: the password of the designated host; used during installation of the host.
  • Configure iptables rules: This checkbox allows you to override the firewall settings on the host with the default rules for oVirt.
3. If you wish to configure this host for Out of Band (OOB) power management, select the Power Management tab. Tick the Enable Power Management checkbox and provide the required information in the following fields:
  • Address: The address of the host.
  • User Name: A valid user name for the OOB management.
  • Password: A valid, robust password for the OOB management.
  • Type: The type of OOB management device. Select the appropriate device from the drop down list.
    • alom Sun Advanced Lights Out Manager
    • apc American Power Conversion Master MasterSwitch network power switch
    • bladecenter IBM BladeCenter Remote Supervisor Adapter
    • drac5 Dell Remote Access Controller for Dell computers
    • eps ePowerSwitch 8M+ network power switch
    • ilo HP Integrated Lights Out standard
    • ilo3 HP Integrated Lights Out 3 standard
    • ipmilan Intelligent Platform Management Interface
    • rsa IBM Remote Supervisor Adapter
    • rsb Fujitsu-Siemens RSB management interface
    • wti Western Telematic Inc Network PowerSwitch
    • cisco_ucs Cisco Unified Computing System Integrated Management Controller
  • Options: Extra command line options for the fence agent. Detailed documentation of the options available is provided in the man page for each fence agent.
Click the Test button to test the operation of the OOB management solution.
If you do not wish to configure power management, leave the Enable Power Management checkbox unmarked.
4. Click OK. If you have not configured power management, a pop-up window prompts you to confirm if you wish to proceed without power management. Select OK to continue.
5. The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot and then Awaiting. When the host is ready for use, its status changes to Up.
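If the host hangs in Installing or ends up Non Responsive, it is worth confirming basic reachability from the Engine server before digging deeper; a minimal sketch, assuming a host named host1.virt.com (substitute your own hostname):

# ping -c 3 host1.virt.com                        # name resolution and basic reachability
# ssh root@host1.virt.com 'uname -r'              # the Engine deploys packages over SSH
# nc -z host1.virt.com 54321 && echo "VDSM port reachable"   # VDSM listens on TCP 54321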
You have now successfully configured your hosts to run virtual machines. The next step is to prepare data storage domains to house virtual machine disk images.

Configure Storage

After configuring your logical networks, you need to add storage to your data center.
oVirt uses a centralized shared storage system for virtual machine disk images and snapshots. Storage can be implemented using Network File System (NFS), Internet Small Computer System Interface (iSCSI) or Fibre Channel Protocol (FCP). Storage definition, type and function, are encapsulated in a logical entity called a Storage Domain. Multiple storage domains are supported.
For this guide you will use two types of storage domains. The first is an NFS share for ISO images of installation media. You have already created this ISO domain during the oVirt Engine installation.
The second storage domain will be used to hold virtual machine disk images. For this domain, you need at least one of the supported storage types. You have already set a default storage type during installation as described in Section 2.1, “Install oVirt Engine”. Ensure that you use the same type when creating your data domain.
Select your next step by checking the storage type you should use:
  1. Navigate to the Tree pane and click the Expand All button. Under System, click Default. On the results list, the Default data center displays.
  2. On the results list, the Storage Type column displays the type you should add.
  3. Now that you have verified the storage type, create the storage domain:
  • For NFS storage, refer to Section 3.5.1, “Create an NFS Data Domain”.
  • For iSCSI storage, refer to Section 3.5.2, “Create an iSCSI Data Domain”.
  • For FCP storage, refer to Section 3.5.3, “Create an FCP Data Domain”.
Note: This document provides instructions to create a single storage domain, which is automatically attached and activated in the selected data center. If you wish to create additional storage domains within one data center, see the oVirt Administration Guide for instructions on activating storage domains.

Create an NFS Data Domain

Because you have selected NFS as your default storage type during the Manager installation, you will now create an NFS storage domain. An NFS type storage domain is a mounted NFS share that is attached to a data center and used to provide storage for virtual machine disk images.
Important: If you are using NFS storage, you must first create and export the directories to be used as storage domains from the NFS server. These directories must have their numerical user and group ownership set to 36:36 on the NFS server, to correspond to the vdsm user and kvm group respectively on the oVirt Engine server. In addition, these directories must be exported with the read write options (rw).
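A minimal sketch of that preparation on the NFS server, reusing the /ovirt-demo layout from earlier (adjust the path and export options to your environment):

#mkdir -p /ovirt-demo/data
#chown -R 36:36 /ovirt-demo/data                        # numeric vdsm:kvm ownership
#echo '/ovirt-demo/data 0.0.0.0/0.0.0.0(rw)' >> /etc/exports
#exportfs -r                                            # re-export without restarting NFS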
To add NFS storage:
1. Navigate to the Tree pane and click the Expand All button. Under System, select the Default data center and click on Storage. The available storage domains display on the results list. Click New Domain.
2. The New Storage dialog box displays.
Figure 3.12. Add New Storage
Configure the following options:
  • Name: Enter a suitably descriptive name.
  • Data Center: The Default data center is already pre-selected.
  • Domain Function / Storage Type: In the drop down menu, select Data → NFS. The storage domain types not compatible with the Default data center are grayed out. After you select your domain type, the Export Path field appears.
  • Use Host: Select any of the hosts from the drop down menu. Only hosts which belong in the pre-selected data center will display in this list.
  • Export path: Enter the IP address or resolvable hostname of the NFS server, followed by the exported path. The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data
3. Click OK. The new NFS data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
You have created an NFS storage domain. Now, you need to attach an ISO domain to the data center and upload installation images so you can use them to create virtual machines. Proceed to Section 3.5.4, “Attach and Populate ISO domain”.

Create an iSCSI Data Domain

Because you have selected iSCSI as your default storage type during the Manager installation, you will now create an iSCSI storage domain. oVirt platform supports iSCSI storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).
To add iSCSI storage:
1. On the side pane, select the Tree tab. On System, click the + icon to display the available data centers.
2. Double click on the Default data center and click on Storage. The available storage domains display on the results list. Click New Domain.
3. The New Domain dialog box displays.
Figure 3.13. Add iSCSI Storage
Configure the following options:
  • Name: Enter a suitably descriptive name.
  • Data Center: The Default data center is already pre-selected.
  • Domain Function / Storage Type: In the drop down menu, select Data → iSCSI. The storage domain types which are not compatible with the Default data center are grayed out. After you select your domain type, the Use Host and Discover Targets fields display.
  • Use host: Select any of the hosts from the drop down menu. Only hosts which belong in this data center will display in this list.
4. To connect to the iSCSI target, click the Discover Targets bar. This expands the menu to display further connection information fields.
Figure 3.14. Attach LUNs to iSCSI domain
Enter the required information:
  • Address: Enter the address of the iSCSI target.
  • Port: Select the port to connect to. The default is 3260.
  • User Authentication: If required, enter the username and password.
5. Click the Discover button to find the targets. The iSCSI targets display in the results list with a Login button for each target.
6. Click Login to display the list of existing LUNs. Tick the Add LUN checkbox to use the selected LUN as the iSCSI data domain.
7. Click OK. The new iSCSI data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
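If no targets appear during discovery, you can check the target from the selected host's shell first; a minimal sketch, assuming a portal at 192.168.5.50:3260 (substitute your own target address):

# iscsiadm -m discovery -t sendtargets -p 192.168.5.50:3260   # list targets offered by the portal
# iscsiadm -m node                                            # show the discovered target records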
You have created an iSCSI storage domain. Now, you need to attach an ISO domain to the data center and upload installation images so you can use them to create virtual machines. Proceed to Section 3.5.4, “Attach and Populate ISO domain”.

Create an FCP Data Domain

Because you have selected FCP as your default storage type during the Manager installation, you will now create an FCP storage domain. oVirt platform supports FCP storage domains spanning multiple pre-defined Logical Unit Numbers (LUNs).
To add FCP storage:
1. On the side pane, select the Tree tab. On System, click the + icon to display the available data centers.
2. Double click on the Default data center and click on Storage. The available storage domains display on the results list. Click New Domain.
3. The New Domain dialog box displays.
Figure 3.15. Add FCP Storage
Configure the following options:
  • Name: Enter a suitably descriptive name.
  • Data Center: The Default data center is already pre-selected.
  • Domain Function / Storage Type: Select FCP.
  • Use Host: Select either the oVirt Node (hypervisor) or the Fedora host from the drop down menu.
  • The list of existing LUNs displays. Tick the Add LUN checkbox of the LUN you want to use as the FCP data domain.
4. Click OK. The new FCP data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
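If the LUNs you expect are not listed, confirm that the selected host actually sees them before retrying; a minimal check run on that host (rescan-scsi-bus.sh ships with sg3_utils, multipath with device-mapper-multipath):

# rescan-scsi-bus.sh      # rescan the FC HBAs for newly presented LUNs
# multipath -ll           # list multipath devices and their WWIDs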
You have created an FCP storage domain. Now, you need to attach an ISO domain to the data center and upload installation images so you can use them to create virtual machines. Proceed to Section 3.5.4, “Attach and Populate ISO domain”.

Attach and Populate ISO domain

You have defined your first storage domain to store virtual guest data, now it is time to configure your second storage domain, which will be used to store installation images for creating virtual machines. You have already created a local ISO domain during the installation of the oVirt Engine. To use this ISO domain, attach it to a data center.
To attach the ISO domain
1. Navigate to the Tree pane and click the Expand All button. Click Default. On the results list, the Default data center displays.
2. On the details pane, select the Storage tab and click the Attach ISO button.
3. The Attach ISO Library dialog appears with the available ISO domain. Select it and click OK.
Figure 3.16. Attach ISO Library
4. The ISO domain appears in the results list of the Storage tab. It displays with the Locked status as the domain is being validated, then changes to Inactive.
5. Select the ISO domain and click the Activate button. The status changes to Locked and then to Active.
Media images (CD-ROM or DVD-ROM in the form of ISO images) must be available in the ISO repository for the virtual machines to use. For this purpose, oVirt provides a utility that copies the images and sets the appropriate permissions on the files. Both the file provided to the utility and the ISO share have to be accessible from the oVirt Engine.
Log in to the oVirt Engine server console to upload images to the ISO domain.
To upload ISO images
1. Create or acquire the appropriate ISO images from boot media. Ensure the path to these images is accessible from the oVirt Engine server.
2. The next step is to upload these files. First, determine the available ISO domains by running:
   # engine-iso-uploader list
You will be prompted to provide the admin user password which you use to connect to the administration portal. The tool lists the name of the ISO domain that you attached in the previous section.
   ISO Storage Domain List:
     local-iso-share
Now you have all the information needed to upload your files. On the Manager console, copy your installation images to the ISO domain. For your images, run:
   # engine-iso-uploader upload -i local-iso-share [file1] [file2] .... [fileN]
You will be prompted for the admin user password again, provide it and press Enter.
Note that the uploading process can be time consuming, depending on your storage performance.
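For example, to upload a single Fedora 18 installation DVD (the file name here is assumed; substitute the image you actually downloaded):
   # engine-iso-uploader upload -i local-iso-share Fedora-18-x86_64-DVD.iso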
3. After the images have been uploaded, check that they are available for use in the Manager administration portal.
  • Navigate to the Tree and click the Expand All button.
  • Under Storage, click on the name of the ISO domain. It displays in the results list. Click on it to display its details pane.
  • On the details pane, select the Images tab. The list of available images should be populated with the files which you have uploaded.
Figure 3.17. Uploaded ISO images

Now that you have successfully prepared the ISO domain for use, you are ready to start creating virtual machines.

Manage Virtual Machines

The final stage of setting up oVirt is the virtual machine lifecycle - spanning the creation, deployment and maintenance of virtual machines; using templates; and configuring user permissions. This chapter will also show you how to log in to the user portal and connect to virtual machines.

Create Virtual Machines

On oVirt, you can create virtual machines from an existing template, as a clone, or from scratch. Once created, virtual machines can be booted using ISO images, a network boot (PXE) server, or a hard disk. This document provides instructions for creating a virtual machine using an ISO image.

Create a Fedora Virtual Machine

In your current configuration, you should have at least one host available for running virtual machines, and the required installation images uploaded to your ISO domain. This section guides you through the creation of a Fedora virtual server. You will perform a normal attended installation using a virtual DVD.
To create a Fedora server
1. Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster. On the Virtual Machines tab, click New Server.
Figure 4.2. Create New Linux Virtual Machine
You only need to fill in the Name field and select Red Hat Enterprise Linux 6.x as your Operating System. You may alter other settings but in this example we will retain the defaults. Click OK to create the virtual machine.
2. A New Virtual Machine - Guide Me window opens. This allows you to add networks and storage disks to the virtual machine.
Figure 4.3. Create Virtual Machines
3. Click Configure Network Interfaces to define networks for your virtual machine. The parameters in the following figure are recommended, but can be edited as necessary. When you have configured your required settings, click OK.
Figure 4.4. New Network Interface configurations
4. You are returned to the Guide Me window. This time, click Configure Virtual Disks to add storage to the virtual machine. The parameters in the following figure are recommended, but can be edited as necessary. When you have configured your required settings, click OK.
Figure 4.5. New Virtual Disk configurations
5. Close the Guide Me window by clicking Configure Later. Your new Fedora virtual machine will display in the Virtual Machines tab.
You have now created your first Fedora virtual machine. Before you can use your virtual machine, install an operating system on it.
To install the Fedora guest operating system
1. Right click the virtual machine and select Run Once. Configure the following options:
Figure 4.6. Run Linux Virtual Machine
  • Attach CD: Fedora 18
  • Boot Sequence: CD-ROM
  • Display protocol: SPICE
Retain the default settings for the other options and click OK to start the virtual machine.
2. Select the virtual machine and click the Console ( ) icon. This displays a window to the virtual machine, where you will be prompted to begin installing the operating system. For further instructions, see the Fedora Installation Guide.
3. After the installation has completed, shut down the virtual machine and reboot from the hard drive.
You can now connect to your Fedora virtual machine and start using it.

Create a Windows Virtual Machine

You now know how to create a Fedora virtual machine from scratch. The procedure for creating a Windows virtual machine is similar, except that it requires additional virtio drivers. This example uses Windows 7, but you can also use other Windows operating systems. You will perform a normal attended installation using a virtual DVD.
To create a Windows desktop
1. Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster. On the Virtual Machines tab, click New Desktop.
Figure 4.7. Create New Windows Virtual Machine
You only need to fill in the Name field and select Windows 7 as your Operating System. You may alter other settings but in this example we will retain the defaults. Click OK to create the virtual machine.
2. A New Virtual Machine - Guide Me window opens. This allows you to define networks for the virtual machine. Click Configure Network Interfaces. See Figure 4.4, “New Network Interface configurations” for details.
3. You are returned to the Guide Me window. This time, click Configure Virtual Disks to add storage to the virtual machine. See Figure 4.5, “New Virtual Disk configurations” for details.
4. Close the Guide Me window. Your new Windows 7 virtual machine will display in the Virtual Machines tab.
To install Windows guest operating system
1. Right click the virtual machine and select Run Once. The Run Once dialog displays as in Figure 4.6, “Run Linux Virtual Machine”. Configure the following options:
  • Attach Floppy: virtio-win
  • Attach CD: Windows 7
  • Boot sequence: CD-ROM
  • Display protocol: SPICE
Retain the default settings for the other options and click OK to start the virtual machine.
2. Select the virtual machine and click the Console ( ) icon. This displays a window to the virtual machine, where you will be prompted to begin installing the operating system.
3. Accept the default settings and enter the required information as necessary. The only change you must make is to manually install the VirtIO drivers from the virtual floppy disk (vfd) image. To do so, select the Custom (advanced) installation option and click Load Driver. Hold Ctrl and select both:
  • Red Hat VirtIO Ethernet Adapter
  • Red Hat VirtIO SCSI Controller
The installation process commences, and the system will reboot itself several times.
4. Back on the administration portal, when the virtual machine's status changes back to Up, right click on it and select Change CD. From the list of images, select RHEV-toolsSetup to attach the Guest Tools ISO which provides features including USB redirection and SPICE display optimization.
5. Click Console and log in to the virtual machine. Locate the CD drive to access the contents of the Guest Tools ISO, and launch the RHEV-toolsSetup executable. After the tools have been installed, you will be prompted to restart the machine for changes to be applied.
You can now connect to your Windows virtual machine and start using it.

Using Templates

Now that you know how to create a virtual machine, you can save its settings into a template. This template will retain the original virtual machine's configurations, including virtual disk and network interface settings, operating systems and applications. You can use this template to rapidly create replicas of the original virtual machine.

Create a Fedora Template

To make a Fedora virtual machine template, use the virtual machine you created in Section 4.1.1, “Create a Fedora Virtual Machine” as a basis. Before it can be used, it has to be sealed. This ensures that machine-specific settings are not propagated through the template.
To prepare a Fedora virtual machine for use as a template
1. Connect to the Fedora virtual machine to be used as a template. Flag the system for re-configuration by running the following command as root:
   # touch /.unconfigured
2. Remove ssh host keys. Run:
   # rm -rf /etc/ssh/ssh_host_*
3. Shut down the virtual machine. Run:
   # poweroff
4. The virtual machine has now been sealed, and is ready to be used as a template for Linux virtual machines.
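A common extra sealing step, not required by the procedure above, is to also remove the persistent network device naming rules before the final poweroff, so that clones do not inherit the template's MAC-to-interface mapping:

   # rm -f /etc/udev/rules.d/70-persistent-net.rules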
To create a template from a Fedora virtual machine
1. In the administration portal, click the Virtual Machines tab. Select the sealed Fedora virtual machine. Ensure that it has a status of Down.
2. Click Make Template. The New Virtual Machine Template displays.
Figure 4.9. Make new virtual machine template

Enter information into the following fields:
  • Name: Name of the new template
  • Description: Description of the new template
  • Host Cluster: The Host Cluster for the virtual machines using this template.
  • Make Private: If you tick this checkbox, the template will only be available to the template's creator and the administrative user. Nobody else can use this template unless they are given permissions by the existing permitted users.
3. Click OK. The virtual machine displays a status of "Image Locked" while the template is being created. The template is created and added to the Templates tab. During this time, the action buttons for the template remain disabled. Once created, the action buttons are enabled and the template is ready for use.

Clone a Red Hat Enterprise Linux Virtual Machine

In the previous section, you created a Fedora template complete with pre-configured storage, networking and operating system settings. Now, you will use this template to deploy a pre-installed virtual machine.
To clone a Fedora virtual machine from a template
1. Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster. On the Virtual Machines tab, click New Server.
Figure 4.10. Create virtual machine based on Linux template

  • On the General tab, select the existing Linux template from the Based on Template list.
  • Enter a suitable Name and appropriate Description, then accept the default values inherited from the template in the rest of the fields. You can change them if needed.
  • Click the Resource Allocation tab. On the Provisioning field, click the drop down menu and select the Clone option.
Figure 4.11. Set the provisioning to Clone

2. Retain all other default settings and click OK to create the virtual machine. The virtual machine displays in the Virtual Machines list.

Create a Windows Template

To make a Windows virtual machine template, use the virtual machine you created in Section 4.1.2, “Create a Windows Virtual Machine” as a basis.
Before a template for Windows virtual machines can be created, it has to be sealed with sysprep. This ensures that machine-specific settings are not propagated through the template.
Note that the procedure below is applicable for creating Windows 7 and Windows 2008 R2 templates. If you wish to seal a Windows XP template, refer to the oVirt Administration Guide.
To seal a Windows virtual machine with sysprep
1. In the Windows virtual machine to be used as a template, open a command line terminal and type regedit.
2. The Registry Editor window displays. On the left pane, expand HKEY_LOCAL_MACHINE → SYSTEM → SETUP.
3. On the main window, right click to add a new string value using New → String Value. Right click the new value and select Modify. When the Edit String dialog box displays, enter the following information in the provided text boxes:
  • Value name: UnattendFile
  • Value data: a:\sysprep.inf
4. Launch sysprep from C:\Windows\System32\sysprep\sysprep.exe
  • Under System Cleanup Action, select Enter System Out-of-Box-Experience (OOBE).
  • Tick the Generalize checkbox if you need to change the computer's system identification number (SID).
  • Under Shutdown Options, select Shutdown.
Click OK. The virtual machine will now go through the sealing process and shut down automatically.
To create a template from an existing Windows machine
1. In the administration portal, click the Virtual Machines tab. Select the sealed Windows 7 virtual machine. Ensure that it has a status of Down and click Make Template.
2. The New Virtual Machine Template displays. Enter information into the following fields:
  • Name: Name of the new template
  • Description: Description of the new template
  • Host Cluster: The Host Cluster for the virtual machines using this template.
  • Make Public: Check this box to allow all users to access this template.
3. Click OK. In the Templates tab, the template displays the "Image Locked" status icon while it is being created. During this time, the action buttons for the template remain disabled. Once created, the action buttons are enabled and the template is ready for use.
You can now create new Windows machines using this template.

Create a Windows Virtual Machine from a Template

This section describes how to create a Windows 7 virtual machine using the template created in Section 4.2.3, “Create a Windows Template”.
To create a Windows virtual machine from a template
1. Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster. On the Virtual Machines tab, click New Desktop.
  • Select the existing Windows template from the Based on Template list.
  • Enter a suitable Name and appropriate Description, and accept the default values inherited from the template in the rest of the fields. You can change them if needed.
2. Retain all other default setting and click OK to create the virtual machine. The virtual machine displays in the Virtual Machines list with a status of "Image Locked" until the virtual disk is created. The virtual disk and networking settings are inherited from the template, and do not have to be reconfigured.
3. Click the Run icon to turn it on. This time, the Run Once steps are not required as the operating system has already been installed onto the virtual machine hard drive. Click the green Console button to connect to the virtual machine.
You have now learned how to create Fedora and Windows virtual machines with and without templates. Next, you will learn how to access these virtual machines from a user portal.

Using Virtual Machines

Now that you have created several running virtual machines, you can assign users to access them from the user portal. You can use virtual machines the same way you would use a physical desktop.

Assign User Permissions

oVirt has a sophisticated multi-level administration system, in which customized permissions for each system component can be assigned to different users as necessary. For instance, to access a virtual machine from the user portal, a user must have either UserRole or PowerUserRole permissions for the virtual machine. These permissions are added from the manager administration portal. For more information on the levels of user permissions refer to the oVirt Administration Guide.
To assign PowerUserRole permissions
1. Navigate to the Tree pane and click Expand All. Click the VMs icon under the Default cluster. On the Virtual Machines tab, select the virtual machine you would like to assign a user to.
2. On the Details pane, navigate to the Permissions tab. Click the Add button.
3. The Add Permission to User dialog displays. Enter a Name, or User Name, or part thereof in the Search textbox, and click Go. A list of possible matches displays in the results list.
Figure 4.13. Add PowerUserRole Permission

4. Select the check box of the user to be assigned the permissions. Scroll through the Assign role to user list and select PowerUserRole. Click OK.

Log in to the User Portal

Now that you have assigned PowerUserRole permissions on a virtual machine to the user named admin, you can access the virtual machine from the user portal. To log in to the user portal, all you need is a Linux client running Mozilla Firefox.
If you are using a Fedora client, install the SPICE plug-in before logging in to the User Portal. Run:
   # yum install spice-xpi
To log in to the User Portal
1. Open your browser and navigate to https://ovirt.virt.com/UserPortal. Substitute ovirt.virt.com with the oVirt Engine server address.
2. The login screen displays. Enter your User Name and Password, and click Login.
You have now logged into the user portal. As you have PowerUserRole permissions, you are taken by default to the Extended User Portal, where you can create and manage virtual machines in addition to using them. This portal is ideal if you are a system administrator who has to provision multiple virtual machines for yourself or other users in your environment.
NOTE: When launching SPICE consoles use SHIFT+F11 to switch to fullscreen mode and SHIFT+F12 to release the mouse cursor.
Figure 4.15. The Extended User Portal
You can also toggle to the Basic User Portal, which is the default (and only) display for users with UserRole permissions. This portal allows users to access and use virtual machines, and is ideal for everyday users who do not need to make configuration changes to the system. For more information, see the oVirt User Portal Guide.
Figure 4.16. The Basic User Portal

You have now completed the Quick Start Guide, and successfully set up oVirt.




Saturday, January 26, 2013

Fastest Remote Directory rsync over ssh CentOS /Fedora/RHEL/Ubuntu

rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" user@<source>:<source_dir> <dest_dir>

This is the fastest remote-directory rsync over ssh that I have found.

The command does the following:

rsync:: (Everyone seems to like -z, but it is much slower for me)
-a: archive mode - recursive, preserves owner, preserves permissions, preserves modification times, preserves group, copies symlinks as symlinks, preserves device files.
-H: preserves hard-links
-A: preserves ACLs
-X: preserves extended attributes
-x: don't cross file-system boundaries
-v: increase verbosity
--numeric-ids: don't map uid/gid values by user/group name
--delete: delete extraneous files from dest dirs (differential clean-up during sync)
--progress: show progress during transfer

ssh::
-T: disable pseudo-tty allocation to reduce CPU load on the destination.
-c arcfour: use the weakest but fastest SSH cipher. You must add "Ciphers arcfour" to sshd_config on the destination; note that arcfour is cryptographically weak, so only use it on a trusted network.
-o Compression=no: turn off SSH compression.
-x: turn off X forwarding if it is on by default.
Flip: rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" [source_dir] [dest_host:/dest_dir]
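
A minimal sketch of the destination-side change mentioned above (the cipher list is an assumption; keep at least one modern cipher alongside arcfour so regular clients can still connect):

# vi /etc/ssh/sshd_config        (on the destination host)
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour
# service sshd reload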

Thursday, January 24, 2013

Add a new LUN to live Multipath environment CentOS/RHEL


  • Rescan SCSI bus:
    # /usr/bin/rescan-scsi-bus.sh
  • Lookup the names of newly available disks
    # lsblk
  • Lookup WWID for each newly discovered disk:
    # scsi_id --page=0x83 --whitelisted --device=/dev/<disk>
  • Add those WWIDs to blacklist_exceptions and multipaths in /etc/multipath.conf.
  • Reload multipathd and check the disks are properly configured:
    # /etc/init.d/multipathd reload
    # lsblk
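
For reference, the /etc/multipath.conf entries mentioned above look roughly like this; the WWID and alias are placeholders, so use the values reported by scsi_id:

blacklist_exceptions {
        wwid "36006016012345678000000000000abcd"
}

multipaths {
        multipath {
                wwid  "36006016012345678000000000000abcd"
                alias data01
        }
}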

Friday, December 21, 2012

Standalone Storage Server With GlusterFS 3 On CentOS/RedHat


Download the repo file from the Gluster web site and copy it to /etc/yum.repos.d/:

http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/CentOS/glusterfs-epel.repo


[root@server ~]# chkconfig iptables off

[root@server ~]# chkconfig ip6tables off

Disable SELinux:

[root@server ~]# vi /etc/selinux/config
SELINUX=disabled

Then reboot the server:

[root@server ~]# reboot

[root@server Downloads]#  yum -y install glusterfs glusterfs-fuse glusterfs-geo-replication fuse fuse-devel fuse-libs fuse-ntfs-3g
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
glusterfs-epel                                                                                                                                 | 1.3 kB     00:00
glusterfs-epel/primary                                                                                                                         | 2.7 kB     00:00
glusterfs-epel                                                                                                                                                    7/7
glusterfs-swift-epel                                                                                                                           | 1.3 kB     00:00
glusterfs-swift-epel/primary                                                                                                                   | 2.5 kB     00:00
glusterfs-swift-epel                                                                                                                                              7/7
Setting up Install Process
No package fuse-devel available.
Package fuse-ntfs-3g-2010.10.2-1.el6.rf.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package glusterfs.x86_64 0:3.3.1-1.el6 will be installed
---> Package glusterfs-fuse.x86_64 0:3.3.1-1.el6 will be installed
---> Package glusterfs-geo-replication.x86_64 0:3.3.1-1.el6 will be installed
--> Processing Dependency: glusterfs-server = 3.3.1-1.el6 for package: glusterfs-geo-replication-3.3.1-1.el6.x86_64
--> Running transaction check
---> Package glusterfs-server.x86_64 0:3.3.1-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================================================
 Package                                           Arch                           Version                                Repository                              Size
======================================================================================================================================================================
Installing:
 glusterfs                                         x86_64                         3.3.1-1.el6                            glusterfs-epel                         1.8 M
 glusterfs-fuse                                    x86_64                         3.3.1-1.el6                            glusterfs-epel                          64 k
 glusterfs-geo-replication                         x86_64                         3.3.1-1.el6                            glusterfs-epel                         104 k
Installing for dependencies:
 glusterfs-server                                  x86_64                         3.3.1-1.el6                            glusterfs-epel                         540 k

Transaction Summary
======================================================================================================================================================================
Install       4 Package(s)

Total download size: 2.5 M
Installed size: 9.1 M
Downloading Packages:
(1/4): glusterfs-3.3.1-1.el6.x86_64.rpm                                                                                                        | 1.8 MB     00:41
(2/4): glusterfs-fuse-3.3.1-1.el6.x86_64.rpm                                                                                                   |  64 kB     00:01
(3/4): glusterfs-geo-replication-3.3.1-1.el6.x86_64.rpm                                                                                        | 104 kB     00:01
(4/4): glusterfs-server-3.3.1-1.el6.x86_64.rpm                                                                                                 | 540 kB     00:07
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                  46 kB/s | 2.5 MB     00:55
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : glusterfs-3.3.1-1.el6.x86_64                                                                                                                       1/4
  Installing : glusterfs-fuse-3.3.1-1.el6.x86_64                                                                                                                  2/4
  Installing : glusterfs-server-3.3.1-1.el6.x86_64                                                                                                                3/4
  Installing : glusterfs-geo-replication-3.3.1-1.el6.x86_64                                                                                                       4/4
Installed products updated.
  Verifying  : glusterfs-fuse-3.3.1-1.el6.x86_64                                                                                                                  1/4
  Verifying  : glusterfs-3.3.1-1.el6.x86_64                                                                                                                       2/4
  Verifying  : glusterfs-server-3.3.1-1.el6.x86_64                                                                                                                3/4
  Verifying  : glusterfs-geo-replication-3.3.1-1.el6.x86_64                                                                                                       4/4

Installed:
  glusterfs.x86_64 0:3.3.1-1.el6                  glusterfs-fuse.x86_64 0:3.3.1-1.el6                  glusterfs-geo-replication.x86_64 0:3.3.1-1.el6

Dependency Installed:
  glusterfs-server.x86_64 0:3.3.1-1.el6

Complete!

[root@server ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.101 server.com
192.168.5.102 client
192.168.5.103 server0.com
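
Before creating any volumes, make sure the gluster daemon installed by glusterfs-server is running and enabled at boot (this step is easy to miss):

[root@server ~]# service glusterd start
[root@server ~]# chkconfig glusterd on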

gluster volume create lgv0 server.com:/data

gluster volume start lgv0

gluster volume info

[root@server ~]# gluster volume create lgv0 server.com:/data
Creation of volume lgv0 has been successful. Please start the volume to access data.


[root@server ~]# gluster volume start lgv0
Starting volume lgv0 has been successful

[root@server ~]# gluster volume info

Volume Name: lgv0
Type: Distribute
Volume ID: 08efd9f5-63f5-4982-89d0-6f150c5dee1f
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server.com:/data
[root@server ~]# gluster volume set lgv0 auth.allow 192.168.5.102
Set volume successful
[root@server ~]#


Go to the client side:

# yum install glusterfs glusterfs-fuse -y

# mkdir /mnt/gluster-mount

# mount.glusterfs server.com:/lgv0 /mnt/gluster-mount

# mount -a

# df -HT

[root@client ~]# df -HT
Filesystem    Type     Size   Used  Avail Use% Mounted on
/dev/mapper/vg_server-lv_root
              ext4     5.9G   2.8G   2.8G  50% /
tmpfs        tmpfs     523M    91k   523M   1% /dev/shm
/dev/sda1     ext4     508M    35M   448M   8% /boot


server.com:/lgv0
                      9.7G  1.7G  7.5G  19% /mnt/gluster-mount



[root@client ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Dec 21 06:44:38 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_server-lv_root /                       ext4    defaults        1 1
UUID=11534f37-3f19-4b75-a846-f08685dcdd97 /boot                   ext4    defaults        1 2
/dev/mapper/vg_server-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0


server.com:/lgv0 /mnt/gluster-mount glusterfs defaults,_netdev 0 0
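
As a final sanity check (paths as configured above), write a file from the client and confirm it appears in the brick directory on the server:

[root@client ~]# touch /mnt/gluster-mount/hello.txt
[root@server ~]# ls /data        # hello.txt should be listed here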