Saturday, April 30, 2011

Linux Date Commands

Basic Command-line Tools:

The date command can be used as follows to display the time and date:

$ date
Fri Mar 28 16:01:50 CST 2003

To see UTC/GMT, you can do this:

$ date --utc
Fri Mar 28 08:04:32 UTC 2003

The date command also can be used to set the time and date. To set the time manually, do this:

# date -s "16:15:00"
Fri Mar 28 16:15:00 CST 2003

If you also need to adjust the date, and not just the time, you can do it like this:

# date -s "16:55:30 July 7, 1986"
Mon Jul 7 16:55:30 PDT 1986

There is also another way to set the date and time, which is not very pretty:

# date 033121422003.55
Mon Mar 31 21:42:55 PST 2003

The above command does not use the -s option, and the fields are arranged like this: MMDDhhmmCCYY.ss
where MM = month, DD = day, hh = hour, mm = minute, CCYY = 4 digit year, and ss = seconds.
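
As a quick sanity check (just a sketch using GNU date's format specifiers), you can print the current time back out in that same field order; right after the set command above it would show something like this:

$ date +%m%d%H%M%Y.%S
033121422003.55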

Please note that setting the clock with the date command must be done as root. This is a "savage" way to adjust the time. It adjusts the Linux kernel system time.

There is also a hardware clock (CMOS clock). You can look at the current hardware clock time with:

hwclock --show

I always keep my hardware clocks set to UTC/GMT. This keeps my clocks uniform without any worries about Daylight Saving Time. This matters because when you set the hardware clock from the system clock (kept by the Linux kernel), you need to know whether the hardware clock is kept in UTC or local time. To set the hardware clock from the system clock, leaving the hardware clock in UTC, enter the following:

# hwclock --systohc --utc
# hwclock --show
Fri 28 Mar 2003 04:23:52 PM CST -0.864036 seconds
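
The reverse direction works too. To set the system clock from a hardware clock that is kept in UTC (roughly what the init scripts do at boot), you would run:

# hwclock --hctosys --utc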

Another interesting item is that the Linux system clock stores time in seconds since midnight on January 1st, 1970 (UTC). This is called UNIX time. Unfortunately, because this is a 32-bit value, there is a year-2038 problem. Hopefully, everyone will have moved to 64-bit architectures by then. In order to see the UNIX time, you can use the following command:

date +%s
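
GNU date can also turn an epoch value back into a readable date with the -d @N syntax. For example, the epoch itself and one well-known timestamp:

$ date -u -d @0
Thu Jan  1 00:00:00 UTC 1970
$ date -u -d @1234567890
Fri Feb 13 23:31:30 UTC 2009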

To change the time zone, remove the existing /etc/localtime soft link and create a new one pointing at the desired zoneinfo file. Here is an example:

# cd /etc
# ls -al localtime
lrwxrwxrwx 1 root root 39 Mar 28 07:00 localtime -> /usr/share/zoneinfo/America/Los_Angeles

# rm /etc/localtime

# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime
# ls -al localtime
lrwxrwxrwx 1 root root 34 Mar 28 08:59 localtime -> /usr/share/zoneinfo/America/Denver

# date
Fri Mar 28 09:00:04 MST 2003
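
If you just want to peek at the time in another zone without touching /etc/localtime, the TZ environment variable works as a one-off (continuing the example above):

$ TZ=America/Los_Angeles date
Fri Mar 28 08:00:04 PST 2003
$ TZ=UTC date
Fri Mar 28 16:00:04 UTC 2003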

Thursday, April 28, 2011

EMC Avamar VM Image Backup Proxy Setup

I am preparing to run a few tests with our EMC Avamar grid and our vSphere infrastructure. We will be leveraging the Backup Proxy machine. I thought I would post a beginning-to-end walkthrough of how to set it up and get your first machine backed up.

Step 1.
Download the OVF from the Avamar software page. This is usually where the Windows, Linux, and Mac client software is located on your grid.

Step 2.
Open vCenter Administrator and then click on File > Deploy OVF Template

Step 3.
Locate the OVA template on your computer and then click Open.

Step 4.
Deploy from file confirmation. Click Next.

Step 5.
Template details, click Next

Step 6.
Specify name and folder location to deploy template.

Step 7.
Specify the datastore to put the template on.

Step 8.
Disk type to deploy. Choose whichever you want to deploy, then click Next.

Step 9.
Network Mapping.

Step 10.
Verify information

Step 11.
Deployment Successful

Step 12.
Power on Virtual Machine to complete the configuration.

Step 13.
Go to the console of the virtual machine and run through the initial setup.

Step 14.
Choose 1) to configure the network. Enter the information in the menus and then Save & Quit.

Step 15.
Enter the time zone. That should be easy enough without me giving you screenshots. Then enter 3) for the proxy type. This makes the proxy only back up Windows machines. If you need to back up/restore Linux machines, then make another proxy.

Step 16.
Choose the proxy type

Step 17.
Register proxy with a Management Console.

Step 18.
Enter the management console that the proxy should connect to.

Wednesday, April 6, 2011

Avamar to vCenter SSL Cert Communication

If you need to connect EMC Avamar to vCenter and you have self-signed SSL certificates, then this might help you out.

Log into the Avamar head node via ssh
ssh admin@hostname
Stop the MCS by typing
dpnctl stop mcs
Modify the following file
/usr/local/avamar/var/mc/server_data/prefs/mcserver.xml
Using a Unix/Linux editor, look for the "ignore_vc_cert" entry and change its value from "false" to "true" so that MCS ignores the self-signed vCenter certificate; the line should end up looking like this:
<entry key="ignore_vc_cert" value="true" />
Save your changes and then restart the Avamar MCS
dpnctl start mcs
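
If you prefer to make the change non-interactively, here is a minimal sketch with sed, assuming the entry currently reads value="false" and the file lives at the path above:

sed -i 's|key="ignore_vc_cert" value="false"|key="ignore_vc_cert" value="true"|' /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml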

Tuesday, April 5, 2011

VMFS Block Size: Does Size Matter??

Well, I am sure I am going to get a "That's what she said" or an "It's not the size that matters but how you use it." Well, the size might actually matter. I have been trying to find solid proof, but I cannot get a definitive answer. Very smart people have very good points, but in the end VMware does an excellent job of taking care of it for us. Here is some great information; decide for yourself how you want to run this, but to be honest I am going with the 8MB block size.

First the facts:

If you create a VMFS volume with a 1MB block size, a thin provisioned disk on it will grow in 1MB increments. Think of this in terms of disk I/O. If you were to increase the size of a VM by placing an 8GB file on the system, you would end up with roughly the following:
8MB block size: 1,000 grows vs. 1MB block size: 8,000 grows. (Which do you think has less overhead?)
I found this paragraph on many blogs, but I am not sure who wrote it:
"If you create a thin provisioned disk on a datastore with a 1MB blocksize the thin provisioned disk will grow with increments of 1MB. Hopefully you can see where I’m going. A thin provisioned disk on a datastore with an 8MB blocksize will grow in 8MB increments. Each time the thin-provisioned disk grows a SCSI reservation takes place because of meta data changes. As you can imagine an 8MB blocksize will decrease the amount of meta data changes needed, which means less SCSI reservations. Less SCSI reservations equals better performance in my book."
I have to admit that, really, in the world today, why wouldn't you just choose to go with the 8MB block size? If you're using jumbo frames on iSCSI, wouldn't you also think that bigger is better? If you have proof that this is not the case, please let me know. I want to get to the bottom of this.
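
If you want to double-check what block size an existing datastore was formatted with, vmkfstools can report it from the ESX console (the datastore name here is just an example):

vmkfstools -Ph /vmfs/volumes/datastore1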

UPDATE

In my personal experience, when I moved all of the virtual machines to datastores on the same EqualLogic array with a 4MB or 8MB block size, they performed better and I was able to put more machines on a datastore. If you find the same performance increase, please let me know.

Multi-pathing iSCSI for vSphere 4

Here is an article on multi-pathing for both ESX and ESXi. One thing to keep in mind that most people overlook is that some vendors, such as HP, create their own versions of ESX/ESXi that include additional software for their hardware. I highly recommend that you download the vendor's version so you have all of the embedded drivers for their hardware.

Native iSCSI multi-pathing in vSphere 4 provides superior bandwidth performance by aggregating network ports. Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch. The following steps must be performed on each ESX or ESXi server individually.
  • Create a second VMkernel port on the virtual switch for iSCSI.
  • For each VMkernel port on the virtual switch, assign a different physical network adapter as the active adapter. This ensures the multiple VMkernel ports use different network adapters for their I/O. Each VMkernel port should use a single physical network adapter and not have any standby adapters.
  • From the command line, bind both VMkernel ports to the software iSCSI adapter. The vmk# and vmhba## must match the correct numbers for the ESX or ESXi server and virtual switch you are configuring, for example:
vmkiscsi-tool -V -a vmk0 vmhba36
vmkiscsi-tool -V -a vmk1 vmhba36
Once configured correctly, perform a rescan of the iSCSI adapter. An iSCSI session should be connected for each VMkernel port bound to the software iSCSI adapter. This gives each iSCSI LUN two iSCSI paths using two separate physical network adapters. As an example, see the NIC Teaming tab in the VMkernel port properties.
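
To verify the bindings took, vSphere 4's esxcli swiscsi namespace can list the VMkernel NICs attached to the software adapter (again, the vmhba number is just an example):

esxcli swiscsi nic list -d vmhba36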



To achieve load balancing across the two paths, datastores should be configured with a path selection policy of round robin. This can be done manually for each datastore in the vSphere client or ESX can be configured to automatically choose round robin for all datastores. To make all new datastores automatically use round robin, configure ESX to use it as the default path selection policy from the command line:
esxcli corestorage claiming unclaim --type location
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
esxcli corestorage claimrule load
esxcli corestorage claimrule run
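
Alternatively, to flip a single existing device over to round robin by hand, the nmp namespace works per device; the naa identifier below is only a placeholder for your LUN:

esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli nmp device list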
It is important to note that native vSphere 4 multi-pathing cannot be used with configurations that utilize more than one subnet and VIP (virtual IP). Multiple paths cannot be routed across those subnets by the ESX/ESXi 4 initiator.

ESXi iSCSI Initiator Setup (6+ NICs)

I found some directions for configuring vSphere 4 (ESX/ESXi) and thought I would post them. These directions assume you have 6 or more physical NICs on your server. Let's begin...

Six network ports
VMware vSphere 4 servers with six Gigabit network ports are ideal for delivering performance with the software iSCSI initiator. The improvement over four ports is achieved by separating VMotion and FT traffic from iSCSI traffic so that they do not have to share bandwidth. iSCSI, VMotion, and FT will perform better in this environment.

To configure vSphere 4 servers with six Gigabit network ports, use three virtual switches, each comprising two Gigabit ports teamed together, as shown below. If possible, one port from each of the separate Gigabit adapters should be used in each team to prevent some bus or card failures from affecting an entire virtual switch.

The first virtual switch should have:
  • A virtual machine network
  • A service console (ESX) or management network (ESXi)
The second virtual switch, for iSCSI, should have:
  • A VMkernel network with VMotion and FT disabled
The third virtual switch should have:
  • A VMkernel network with VMotion and FT enabled
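
As a rough command-line sketch of the second (iSCSI) virtual switch described above, on classic ESX, assuming vmnic2 and vmnic3 are the two iSCSI ports and the IP address is made up for illustration:

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1   # vmnic2/vmnic3 are examples; use your iSCSI NICs
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI   # example IP and netmask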

More than six ports
If more than six network ports are available, you can add more ports to the iSCSI virtual switch to increase available bandwidth, or you can use them for any other network services desired.

Enabling the iSCSI software adapter

You will need to enable the vSphere 4 iSCSI software adapter on each ESX or ESXi server. The iSCSI software adapter is managed from each server’s storage adapters list. Here are some guidelines:
  • Enable the iSCSI adapter on each ESX or ESXi server.
  • Copy or write down the iSCSI qualified name (IQN) that identifies each vSphere 4 server; it will be needed for authentication later on the SAN.
  • Reboot the ESX or ESXi server after enabling the iSCSI software adapter if prompted to do so.
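
If you would rather do the enable step from the classic ESX console instead of the vSphere client, something like this should work (the vmhba number will vary per server, and ESXi would use the remote vSphere CLI equivalents):

esxcfg-swiscsi -e   # enable the software iSCSI initiator
esxcfg-swiscsi -q   # confirm it is enabled
vmkiscsi-tool -I -l vmhba33   # show the adapter's IQN
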
HBA connectivity and networking

SAN connectivity via iSCSI HBAs enables both offloading of the iSCSI processing from the vSphere 4 server and booting of ESX itself from the iSCSI SAN. HBAs do not require licensing or special networking within vSphere 4 servers as they provide a dedicated network connection for iSCSI only. The physical network for HBAs should be a Gigabit network dedicated to the SAN, just as it is for software initiators. As a best practice, use two HBA initiators (a dual port or two single ports), each configured with a path to all iSCSI targets for failover. Configuring multiple HBA initiators to connect to the same target requires authentication for each initiator’s IQN to be configured on the SAN. (See Figure below.) Typically this is configured in the HP P4000 software as two servers (one for each HBA initiator), each with permissions to the same volumes on the SAN.

Saturday, April 2, 2011

ThinApp Adobe Acrobat X

Working with ThinApp 4.6.x, I need to ThinApp Acrobat X Reader. Adobe uses their download manager (DLM) to install the software. I don't want to use the DLM to make my build, so I use another Windows 7 machine to do the DLM install and then navigate to the installer folder to get the MSI package.

The location of the MSI package for Acrobat X on Windows 7 x86 (it should be the same for x64):

c:\Program Files\Adobe\Reader 10.0\Setup Files\{ hash }\

Once you have the installer, begin building the ThinApp of Acrobat.
For now, you might find better results ThinApp'ing Acrobat 9.x.
