November 29, 2012
Recently I’ve been working to simplify and consolidate our service provision. The path of least pain has turned out to be placing core applications in colocation. While investigating the provision of storage, and with memories of building 2008 R2 clusters still clear in my head, I have begun trialling Server 2012. Having read a series of articles by Aidan Finn (his excellent blog here) about virtualisation on Server 2012, I happened across his converged fabrics posts, here.
First, some background: in Hyper-V R2 you need upwards of six NICs to build a VM host cluster. You can get functionality with fewer, but you leave yourself exposed; it would not be N+1. Also bear in mind that teaming for fault tolerance across the ports of a single multiport network card will only give you a false sense of security on the server side (it is, after all, only a single card regardless of how many ports it has).
In a nutshell, what I’m excited about is that you can use native teaming (or otherwise) on Server 2012 to bond a series of NICs together, then spread your live migration, storage, guest access and other NIC requirements across a series of virtual NICs connected to the virtual switch bound to this NIC team (phew). You then set QoS on the virtual switch for the different virtual adapters, so you can guarantee service for each aspect of connectivity your Hyper-V cluster will need. Anyway, have a look at Aidan’s posts on the matter; they make for a great lab.
In my lab I’ve used a pair of 1GbE links and it works great for testing. In production you’d ideally be looking at two or more 10GbE links, giving you resilience and most of the bandwidth you’d ever need in the foreseeable future, at least for the kind of services/load experienced in most SMEs.
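For reference, the converged setup described above can be sketched in PowerShell roughly as follows. The team, switch and adapter names, and the bandwidth weights, are my own example choices; adjust them for your hardware and workload mix.

```shell
# Team two physical NICs using the native Server 2012 teaming
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2"

# Bind a virtual switch to the team, with weight-based minimum-bandwidth QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create virtual NICs in the management OS for each traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName "ConvergedSwitch"

# Guarantee each virtual NIC a minimum share of the team's bandwidth
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Storage" -MinimumBandwidthWeight 40
```

The weights are relative shares, not hard caps, so any one traffic type can still burst into unused bandwidth.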
November 21, 2012
I installed R75.45 Gaia on a UTM-1 270 appliance recently. Installation from USB went fine and performance was adequate under a low load with VPN, the default IPS profile and a short QoS rule set.
In order to support a degree of resilience we’re using ISP Redundancy at all sites with multiple internet connections. Despite configuring this site identically, I was not able to get the failover to work. Usually, the script cp_isp_update runs and updates the gateway’s default route to match that of the secondary ISP; however, when I tested this on R75.45 the route was not updated when the primary was disconnected.
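For anyone wanting to reproduce the test without pulling cables, a rough sketch of how to exercise the failover from the gateway’s expert shell is below. The link name "ISP-A" is an example; use the link names from your own ISP Redundancy configuration, and note the exact fw isp_link syntax can vary by version.

```shell
# Force the primary link down to simulate an outage
fw isp_link ISP-A down

# Check whether the default route has moved to the secondary gateway
netstat -rn | grep -i default

# Restore the primary link when done
fw isp_link ISP-A up
```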
I contacted Check Point support and was informed that ISP Redundancy does not work in either version of Gaia, R75.45 or R75.40; however, there is a patch available for R75.40 if you contact them and reference this sk. I applied this patch on R75.40 but still didn’t see the solution work as expected, so instead deployed R75.30 as I have at other ISP-redundant sites.
I should also mention that my in-no-way-scientific, cursory observations indicated that CPU load was much lower (15-20% lower) on SPLAT (with R75.30) than on either version of Gaia. Something to bear in mind for older appliances like the UTM-1 270.
November 20, 2012
When converting a machine from VMware Workstation to another virtualisation platform, you may come up against an “Unable to obtain hardware information for the selected machine.” warning and a red cross after selecting the VM you wish to convert.
This is easily resolved: simply right-click the VMware vCenter Converter icon/Start menu item and select Run as administrator.
November 16, 2012
Trying to migrate a Server 2012 VM from VMware Workstation 9 to an ESXi host, I was met with the ‘sad face’, as below.
“Your computer ran into a problem and needs to restart”
A little research led me to this: http://kb.vmware.com/selfservice/microsites/microsite.do?cmd=displayKC&docType=kc&externalId=2006859&sliceId=2&docTypeID=DT_KB_1_1 but patching ESX was not something I’d done for some time, and back then I think I used Update Manager.
A little digging led me here, which is much clearer than the VMware instructions for patching. Many thanks Chris! Simply upload the patch to a datastore, enable SSH (or do it from the console), put the server in maintenance mode, run the patch as Chris’ link shows, reboot, and your Server 2012 and Windows 8 VMs will now boot just fine.
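The steps above boil down to something like the following, run over SSH or from the ESXi shell. The datastore and patch bundle names here are placeholders; substitute the bundle you downloaded for the KB article above.

```shell
# Put the host into maintenance mode (evacuate or power off VMs first)
vim-cmd hostsvc/maintenance_mode_enter

# Apply the patch bundle previously uploaded to a datastore
esxcli software vib update -d /vmfs/volumes/datastore1/patch-bundle.zip

# Reboot to complete the update, then exit maintenance mode afterwards
reboot
```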
November 6, 2012
In the process of lab testing the viability of an HA mail installation on HP’s E5000 series of messaging appliances, I came up against some confusion about the configuration of the CAS array when using a two-member DAG where the servers also host the CAS and HT roles. I was aware a hardware load balancer is required for this to work, but was not clear on exactly how to configure Exchange to work with such a device.
Initially in the lab I configured a DAG between two Exchange 2010 VMs, and this seemed to be working as expected. The next step was to add both servers to a CAS array, assign a VIP and put this in DNS.
I then configured a CAS array, assigned a VIP and created the DNS record. The array included the two all-in-one servers and was created successfully, but none of my clients were able to connect, nor was I able to ping the array address. Further reading, in particular this: http://blogs.technet.com/b/exchange/archive/2012/03/23/demystifying-the-cas-array-object-part-1.aspx led me to realise:
- Windows NLB is needed for a CAS array to work without additional hardware.
- Windows NLB is incompatible with Server 2008 R2 failover clustering.
- Server 2008 R2 failover clustering is needed for a DAG…
- I need a DAG…
- Therefore I cannot use Windows NLB…
Which led me to read the “Best practices for networking and load balancing with the E5000 messaging systems” pdf on the HP site.
Essentially I’d gone about it backwards:
- You still create the CAS array, but use a VIP that is assigned to the load balancers, and it is that IP that must be defined when you create the array. Once this is in place and the HLB is configured, it proxies the requests to the separate CAS servers and all is well.
- It seems the CAS array is a simple object, a pointer more than a mechanism. A CAS array object does not load balance your traffic; Windows NLB does that, or in our case our hardware load balancer. All the object does is tell the mail client where to go to get mail.
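The working order of operations can be sketched in the Exchange Management Shell as below. The FQDN, site name and database name are examples; the FQDN should be the DNS record that resolves to the VIP living on the load balancer, not on the Exchange servers.

```shell
# Create the array object pointing at the FQDN that resolves to the HLB's VIP
New-ClientAccessArray -Name "CASArray" -Fqdn "outlook.example.local" `
    -Site "Default-First-Site-Name"

# Point the mailbox database at the array so client profiles use it
Set-MailboxDatabase "MailboxDB01" -RpcClientAccessServer "outlook.example.local"
```

Note that databases created before the array exists keep their original RpcClientAccessServer value, so setting it explicitly as above is worth doing even on an existing deployment.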