mwak-blog · 13 years
Backup An ESXi Host's Config Using vCLI
Assuming you are already connected to your VI server with PowerCLI (Connect-VIServer)...
$ehost = Get-VMHost -Name "your-host-name"
Get-VMHostFirmware -VMHost $ehost -BackupConfiguration -DestinationPath "your backup path"
Full PowerCLI documentation here.
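Since the title mentions vCLI, it's worth noting the vSphere CLI ships its own config-backup tool, vicfg-cfgbackup. A minimal sketch (the hostname is a placeholder, and the live commands are commented out since they need a reachable host):

```shell
# PowerCLI version (as above):
#   Get-VMHostFirmware -VMHost $ehost -BackupConfiguration -DestinationPath "C:\backups"
# vCLI equivalent, run from a vCLI-equipped workstation:
#   vicfg-cfgbackup --server esxi01.lab.local --username root -s esxi01-config.tgz

# Runnable bit: build a dated backup filename so repeated backups don't
# clobber each other.
host="esxi01.lab.local"
stamp=$(date +%Y-%m-%d)
echo "${host%%.*}-config-${stamp}.tgz"
```

Restoring is the same tool with -l instead of -s, pointed at the saved bundle.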
Something about a library that brings me relaxation.
Kaimal Mark II Lens, Blanko Film, No Flash, Taken with Hipstamatic
Oh, Installous, Where Do You Put My IPAs?
Just getting into the Jailbreakin' scene, quite late if I might add, but whatever. I have been using Installous to download apps and it looks like it keeps the IPAs (app packages) saved on your iDevice until you give the delete command. I'm sure there's a way for it to automatically remove the IPA after a successful install, but who wants that?
While SSH'd into my iPhone last night I noticed that Installous stores the IPAs here:
/User/Documents/Installous/Downloads
With this in mind, one could easily SSH into their iDevice (w/ SSH enabled of course) using something like WinSCP and dump their IPAs to another piece of storage for safe-keeping and recovery purposes.
Very nice indeed.
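That backup idea can be sketched as a one-liner. The device IP and destination folder below are placeholders, and the scp command itself is commented out since it only works with a jailbroken device (SSH enabled) on the network:

```shell
IPHONE_IP="192.168.1.50"                 # hypothetical address of the iDevice
DEST="$HOME/ipa-backup/$(date +%Y-%m-%d)"

# Make a dated folder so each backup run stays separate.
mkdir -p "$DEST"

# Pull every downloaded IPA off the device (default iOS root password is
# "alpine" unless you've changed it -- and you should change it):
#   scp -r "root@${IPHONE_IP}:/User/Documents/Installous/Downloads/" "$DEST/"

echo "Would copy IPAs from ${IPHONE_IP} into ${DEST}"
```

On Windows, WinSCP does the same job with a GUI, as mentioned above.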
vSphere 5 Reference Card
The author over at vReference.com has created this amazingly thorough reference card for vSphere 5.0. I will be printing this and adding it to my arsenal of study materials for my VCP-510 exam in March.
Direct links:
A-4
Letter
Full page
A link to the author's article can be found here. You should use this to ensure that you are downloading the latest version.
Syslogin' & Update Manager
So, it seems that if my ESXi hosts can't connect to the datastore I have assigned for syslogs, the syslog path gets altered to this:

[]/vmfs/volumes/...

When it should be:

[datastorename]/vmfs/volumes/...
I discovered this while trying to scan my hosts through Update Manager, which failed with an error in vSphere. The error doesn't really give much to go on, but a quick search on the Internet found this thread:
http://communities.vmware.com/message/1982433
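For reference, re-pointing the syslog directory can be done from the ESXi 5 shell with esxcli. The datastore name below is a placeholder, and the live commands are commented out; the runnable helper just shows the bracketed "[datastore] path" form ESXi expects for the Syslog.global.logDir advanced setting:

```shell
# build_logdir prints a Syslog.global.logDir value in the
# "[datastorename] subdir" form shown in the vSphere client.
build_logdir() {
  ds="$1"
  subdir="$2"
  echo "[${ds}] ${subdir}"
}

build_logdir "datastore1" "logs"

# On a live ESXi 5 host, the equivalent fix:
#   esxcli system syslog config set --logdir="/vmfs/volumes/datastore1/logs"
#   esxcli system syslog reload
```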
Recent Developments (Resolved)
It looks like jumbo frames were to blame for one of the major issues I was having. With jumbo frames enabled on my hosts' storage vSwitches and on my switch, but NOT on my OpenFiler NICs, any deployment from a template would kick off, but at around 8% my connection to OpenFiler would drop and I would have to restart the NICs on OpenFiler.
It turns out that jumbo frames are not supported on either of the NICs in my OpenFiler box: not the onboard NIC, and not the Broadcom BCM5721 card. The Broadcom is incorrectly reported on some websites as supporting jumbo frames, but it does NOT.
I have since replaced the Broadcom with two additional Intel Gb NICs.
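A quick way to prove jumbo frames work end-to-end is a don't-fragment ping sized to the MTU minus the IP and ICMP headers. The storage IP below is a placeholder and the pings are commented out; the runnable part just does the payload arithmetic:

```shell
MTU=9000
IP_HEADER=20      # bytes of IPv4 header
ICMP_HEADER=8     # bytes of ICMP header
PAYLOAD=$((MTU - IP_HEADER - ICMP_HEADER))
echo "$PAYLOAD"   # largest ICMP payload that fits in one 9000-byte frame

# From an ESXi 5 host, -d sets don't-fragment, so success means every hop
# passes jumbo frames (storage IP is hypothetical):
#   vmkping -d -s "$PAYLOAD" 192.168.10.20
# From the Linux/OpenFiler side:
#   ping -M do -s "$PAYLOAD" 192.168.10.20
```

If that ping fails but a plain ping works, something in the path is still at MTU 1500.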
Recent Developments
Between receiving my hardware (no RMAs! no DOAs!), assembling it, and configuring all of it, I have fallen behind on updating this web log.
I've experienced a wide variety of errors so far, but for the most part I have a working vSphere 5 infrastructure.
My OpenFiler box is giving me the most problems. It seems my onboard NIC is dropping packets and as a result I'll lose connection to the vSphere client, even though my servers remain running. This is usually resolved by restarting the questionable NIC on my OpenFiler box, at which point my ESXi hosts will resume their connections to their datastores and all will be well...until the next time it happens.
I'm at the stage now where I'm trying to ascertain whether the above described behavior happens when I perform a certain task in vSphere (say, deploy a client), but I haven't found anything as of yet.
More to come...
ESXi 5.0 Home Lab: Part 3 (Purchasing Hardware)
Time to purchase some new hardware!
Previous posts:
Part 1
Part 2
With my wish lists created (see Part 2), I went ahead and purchased the hardware today.
My order total was $1,362.69, which is lower than it should be because Newegg had an order limit of one for the Core i5, so I'll have to order another one after the 48-hour limit has elapsed. I even tried to order a second Core i5 immediately after my first order went through; at first it appeared to work, but then my second order was cancelled and I was sent the same notice about order limitations for that product. It seems I will have to play by the rules, unfortunately.
ESXi 5.0 Home Lab: Part 2 (Reviewing Hardware)
Now on to the good stuff: Reviewing Hardware! Here's Part 1, in case you missed it.
I spent a whole afternoon virtually assembling the hardware for my home lab in a Newegg wish list, and here's what I came up with:
My ESXi Hosts 
Antec NSK2480 Micro-ATX case  $117.99
ASUS P8Z68-M Pro Motherboard  $117.99
Intel Core i5-2500K 3.3GHz Quad-Core CPU $229.99
Patriot Gamer 2 8GB (2 x 4GB) DDR3 RAM $39.99
Cooler Master Hyper 212 CPU Fan & Heatsink $29.99
LaCie mosKeyto 8GB USB 2.0 Flash Drive $19.99
I purchased two of each of the items listed above, with the exception of the Patriot RAM, of which I purchased four kits (two for each host). This gives me 16GB of DDR3 RAM in each of my ESXi hosts.
The P8Z68-M Pro will allow me some room to overclock, in case I feel the need to squeeze some more juice from my hardware. That's the reason I purchased the Z68 model: it lets me overclock the CPU while still using the built-in GPU.
You may have noticed that I did not include discrete graphics cards, but that's because I can get decent graphics with the Core i5-2500K, and for my home lab purposes this will do just fine. Fewer parts = less money, yes, but also less required power and less wasted energy (heat).
In addition to the lack of discrete graphics cards, you may have also noticed that I did not include hard drives for these hosts. That's because I'm going to be running ESXi from those near-microscopic LaCie USB flash drives! Why spin a noisy, heat-producing hard disk when you can run ESXi from flash memory?
That's pretty much it for my hosts' hardware, I just hope the CPU fan & heatsink I purchased fits into my case. I'm pretty sure I measured correctly...
My Storage Box
In order to implement a lot of the extremely useful features of vSphere, a shared storage box is required. For my home lab I'm going to be installing OpenFiler (FREE!) on the storage box I will be building. I'm using OpenFiler on a DL360 G5 at work, with one 1Gb NIC and a single NFS volume, and it has treated me well so far. I'm primarily using it to organize and centralize my ISO images, but I have a few work lab VMs in there as well. OpenFiler also supports iSCSI, which I will be using on this new storage box.
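As an aside, here is roughly what attaching OpenFiler storage looks like from the ESXi 5 side. Hostnames, export paths, adapter names, and addresses below are all hypothetical, and the esxcli commands are commented out since they only run in the ESXi shell:

```shell
# NFS datastore (what I use at work for ISOs):
#   esxcli storage nfs add --host=openfiler.lab.local \
#       --share=/mnt/vg0/isos --volume-name=isos
#
# iSCSI (what the new box will serve): enable the software initiator,
# then point it at the OpenFiler target:
#   esxcli iscsi software set --enabled=true
#   esxcli iscsi adapter discovery sendtarget add \
#       --adapter=vmhba33 --address=192.168.10.20

# Runnable bit: compose an NFS share spec ("host:/export") from its parts.
nfs_spec() { echo "$1:$2"; }
nfs_spec "openfiler.lab.local" "/mnt/vg0/isos"
```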
Here are the parts I have picked out:
Antec Three Hundred ATX Mid Tower Case   $69.99
ASUS P8H61-M Micro-ATX Motherboard  $84.99
Antec EA-430D PSU  $59.99
Intel Core i3-2100 3.1GHz Dual-Core CPU  $124.99
Patriot 4GB DDR3 RAM (2 x 2GB)  $24.99
Cooler Master Hyper TX3 CPU Fan & Heatsink  $25.99
I did not purchase any hard drives because I have four 1TB drives at my house that I'm going to throw into this box. It's going to become my shared storage for basically everything, including all my digital media.
4GB of DDR3 RAM might be a bit overkill according to OpenFiler requirements, but I can always use the RAM in another machine if it's not necessary in my storage box.
In the future, and if cost wasn't an issue, I'd like to get a hardware RAID controller to throw in this box. There's always next purchase...
ESXi 5.0 Home Lab: Part 1 (Planning)
In preparation for my VCP-510 certification exam I decided I wanted to go all out and build a dreamy home lab. I started with the following guidelines:
As quiet as possible
As compact as possible
At or less than $2,000
At least two hosts w/ a shared storage box
Of all the above listed guidelines, I think the first one is going to be the most difficult to achieve. I think this one will have to wait until after I get the infrastructure up and running, so I can more accurately judge what’s tolerable. Swapping out and/or changing speeds on case fans is not a hard task, and doing so after the systems are built doesn’t make it that much more difficult.
The next one involves space, which in my case is very important because simply put, I don’t have much. I already have one mammoth of a case under my desk, with no spare room whatsoever, so I’m going to have to design/build some sort of rack or system of shelves to store my equipment. Adequate airflow is a must, but that goes without saying. If I can find something that fits my needs retail, I’ll skip the construction and focus my energy on the lab itself. 
My spending limit ($2,000 maximum) should be more than enough for two hosts and a storage box, so if possible, I’ll try and spend even less. 
My last guideline is related to my lab infrastructure itself, which will consist of two physical ESXi hosts and a shared storage box (OpenFiler). Without my imposed financial limitations, I’d go for three physical ESXi hosts and a shared storage box, but two hosts should suffice. I will still be able to demonstrate vMotion, Storage vMotion, Fault Tolerance, and all the other vSphere 5 goodies. 
Next up: Part 2 (Reviewing & Purchasing)
Upgrade: ESXi 4.0 -> ESXi 5.0
I was tasked with upgrading an ESXi 4.0 host (a single host) to ESXi 5.0 at work. The host wasn't in our production environment, and only had a few VMs living on it, but they weren't powered on. I really didn't need them to exist and could have purged them from the host before upgrading, but I wanted to leave them around to see what would come of them post-upgrade.
The host server's specs:
MODEL / HP ProLiant DL320 G6
CPU / 2 x Intel Xeon E5502 @ 1.87GHz
RAM / 16GB
STORAGE / 2 x 160GB SATA HDD (LOCAL)*
*The hard disks are actually configured in RAID 1, but ESXi does not see the array; it sees independent disks instead. No bother, this is a lab server anyway.
I had the installation ISO already downloaded, so I started the upgrade process by creating a bootable USB drive using Unetbootin. For instructions, refer to this page: http://goo.gl/HXKdp. Once that was complete, it was time to insert my bootable USB drive and reboot the host server.
Upon running the installer and targeting the disk I wanted to upgrade ESX on, I received the following message:
The selected storage device contains an installation of ESX and a VMFS datastore. Choose whether to upgrade or install and overwrite the existing ESX installation. Also choose whether to preserve or overwrite the existing VMFS datastore.
I was also informed that "only the relevant settings will be migrated...my system has custom VIBs...proceeding with migration could cause it to not boot..."
I ignored the custom VIB warning because this is a test server. If this were a production server I'd have been more cautious. 
Since my task was to upgrade this host from 4.0 to 5.0, I left "Force Migrate ESX, preserve VMFS datastore" selected and proceeded with the upgrade.
The next warning listed the custom VIBs that would not be migrated.
Again, since this is a test server I ignored this warning and proceeded with the upgrade.
The ESX installer then scanned the system.
The installer found my ESX 4 installation and asked me to confirm the force migration, warning that the disk would be repartitioned.
Then the migration progress screen appeared.
The actual migration process took 15-20 minutes to complete. Once it was done, I was shown the installation-complete screen.
After I rebooted, my host was upgraded and ready to go.