November 29, 2012
Recently I’ve been working to simplify and consolidate our service provision. The path of least pain turned out to be placing core applications in colocation. While investigating the provision of storage, and with memories of building 2008 R2 clusters still fresh in my head, I began trialling Server 2012. Having read a series of articles by Aidan Finn (his excellent blog here) about virtualisation on Server 2012, I happened across his converged fabrics posts, here.
First, some background: in Hyper-V R2 you need upwards of six NICs to build a VM host cluster. You can get functionality with fewer, but you leave yourself exposed; it would not be N+1. Also bear in mind that teaming for fault tolerance across a multi-port network card will only give you a false sense of security on the server side (it is, after all, a single card regardless of how many ports it has).
In a nutshell, what I’m excited about is that on Server 2012 you can use native teaming (or otherwise) to bond a series of NICs together, then spread your live migration, storage, guest access and other NIC requirements across a series of virtual NICs connected to the virtual switch bound to that team (phew). You then set QoS on the virtual switch for the different virtual adapters, so you can guarantee service for each aspect of connectivity your Hyper-V cluster will need. Anyway, have a look at Aidan’s posts on the matter; they make for a great lab.
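To give a feel for the pattern, here’s a minimal sketch of that team-plus-virtual-switch setup in PowerShell on a Server 2012 host. This is not Aidan’s exact recipe: the team, switch and adapter names are made up for illustration, and the bandwidth weights are arbitrary placeholders you’d tune for your own environment.

```powershell
# Bond two physical NICs into a native Windows team (names are examples)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2"

# Create a virtual switch on the team, using weight-based minimum-bandwidth QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add a virtual NIC in the management OS for each traffic class
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage"       -SwitchName "ConvergedSwitch"

# Guarantee each class a minimum share of the team's bandwidth (example weights)
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Storage"       -MinimumBandwidthWeight 40
```

The weights are relative shares rather than hard caps, so any class can burst into idle bandwidth while still being guaranteed its minimum under contention.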
In my lab I’ve used a pair of 1 GbE links and it works great for testing. In production you’d ideally be looking at two or more 10 GbE links, giving you resilience and most of the bandwidth you’d ever need for the foreseeable future, at least for the kind of services/load experienced in most SMEs.
January 24, 2011
I’ve been using a portable lab for writing documentation and migration testing that’s been working really well. The requirements were as follows:
- Entire lab should weigh less than 12 kg
- The storage solution should be fast enough to be usable under load and support persistent reservations for CSVs.
- It should be possible to test pretty much any sensible scenario on two laptops.
- Storage should be of sufficient speed and quality that the lab could be used for migrations or clones of production machines, or for transferring machines between sites.
For the virtualisation platform the lab uses a pair of HP EliteBooks, a 2540p and an 8530w (until I can swap it for something smaller). The 2540p is an i7 and the 8530w a Core 2 Duo at 2.4 GHz; both have 8 GB of RAM and run Server 2008 R2 Datacenter.
For storage I purchased a Synology DS409slim NAS, which weighs about 700 g and takes up to four 2.5″ HDDs. I upgraded the firmware to the latest beta version of DSM, which seems to support persistent reservations. I configured it with three 500 GB 7200 rpm drives in RAID 5 as a block target, with a fourth disk for file storage: ISOs, sysprepped images, etc. I'm very, very pleased with this particular piece of kit; not the cheapest, but it works fabulously for the size.
For connecting the lab together I found an 8-port SMC gigabit switch for around 300 HKD.
Internet access is through a D-Link DIR-412 portable 3G/Ethernet router, plugged directly into the switch above and connected to the internet with an unlocked Huawei HSDPA dongle. When travelling I purchase a pay-as-you-go 3G SIM for the lab, which I also use with a flashed Orange San Francisco running as a pocket Wi-Fi hotspot, giving the laptop and BlackBerry mobile data without paying roaming fees.
Additionally, I make use of two LevelOne USB Ethernet adapters when needed; the Windows 7 drivers work fine on 2008 R2.
So far it’s been used to stage an Exchange 2007 to 2010 migration, an OCS 2007 R2 to Lync migration, TMG/UAG testing with Check Point R71, a full DR lab for directory and Exchange, an Orion/SCOM comparison, an SCCM image deployment lab, and a host of install documentation.