September 25, 2015
There is no easy way, aside from a manual file copy and paste, to move virtual machines between Hyper-V 2008 R2 and Hyper-V 2012 R2 with the native tools. You can migrate from 2008 R2 to 2012, but not directly to 2012 R2.
I have found the easiest and quickest way to do this is to use the Veeam Hyper-V backup/restore utility. It requires an agent on both the source and destination servers, and it works very well: transfers are quick, with compression ratios of up to 1.7:1, and as a bonus you end up with a good backup on the host you use for the transfer. Veeam Backup/Recovery
When you restore your 2008 R2 VM, Veeam updates the WMI configuration, adds the VM to the cluster (if you want that) and brings it back online.
Well worth a look if you want a painless migration.
November 29, 2012
Recently I’ve been working to simplify and consolidate our service provision. The path of least pain has been determined as placing core applications in colocation. While investigating the provision of storage, and with memories of building 2008 R2 clusters still clear in my head, I have begun trialling Server 2012. Having read a series of articles by Aidan Finn (his excellent blog here) about virtualisation on Server 2012, I happened across his converged fabrics posts, here.
First some background: in Hyper-V 2008 R2 you need upwards of six NICs to build a VM host cluster. You can get functionality with fewer, but you leave yourself exposed; it would not be N+1. Also bear in mind that teaming for fault tolerance across the ports of a single multi-port network card only gives you a false sense of security on the server side (it is, after all, still a single card, regardless of how many ports it has).
In a nutshell, what I’m excited about is that on Server 2012 you can use native teaming (or otherwise) to bond a series of NICs together, then spread your live migration, storage, guest access and other NIC requirements across a series of virtual NICs connected to a virtual switch bound to that NIC team (phew). You then set QoS on the virtual switch for the different virtual adapters, so you can guarantee service for each aspect of connectivity your Hyper-V cluster will need. Anyway, have a look at Aidan’s posts on the matter; they make for a great lab.
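For reference, the converged-fabric setup described above can be sketched with the native Server 2012 PowerShell cmdlets. A minimal sketch: the adapter names ("NIC1", "NIC2"), the team and switch names, and the bandwidth weight values are my own placeholders, not anything from Aidan's posts; adjust them to your hardware and requirements.

```powershell
# Bond two physical adapters into a native Windows Server 2012 team
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create the virtual switch on the team, with QoS in relative-weight mode;
# AllowManagementOS is off because we add our own management vNIC below
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add a virtual NIC in the management OS for each class of traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName "ConvergedSwitch"

# Guarantee each traffic class a minimum share of the team's bandwidth
# (weights are relative, not percentages)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Storage" -MinimumBandwidthWeight 40
```

The weight left unassigned effectively covers guest traffic through the switch; you can also pin that explicitly with `Set-VMSwitch -DefaultFlowMinimumBandwidthWeight` if you want a hard floor for VM traffic.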
In my lab I’ve used a pair of 1 GbE links and it works great for testing; in production you’d ideally be looking at two or more 10 GbE links, giving you resilience and most of the bandwidth you’d ever need in the foreseeable future, at least for the kind of services and load experienced in most SMEs.