Tuesday, February 22, 2011

SQL Servers and Hyper-V with Dynamic Memory

It's official now: Microsoft SQL Server supports Dynamic Memory: http://support.microsoft.com/kb/956893

Before we look at this from the Hyper-V perspective, let's refresh our knowledge of SQL Server and memory management.
Memory is probably the biggest topic when it comes to SQL Server, and you face it one way or another on every SQL Server instance. I will not cover every detail here.

First of all: SQL Server manages its memory resources almost completely dynamically.
SQL Server communicates constantly with the OS to allocate enough memory.
(This is very similar to the VSP/VMBus/VSC communication between an enlightened VM and the Hyper-V host.)

The main memory component in SQL Server is the buffer pool. Memory that is not used by another memory component remains in the buffer pool, where it serves as a so-called data cache for pages read in from the database files on disk. One of the things the buffer manager handles is the disk I/O for bringing data and index pages into the data cache so that data can be shared among users.
All data manipulation within SQL Server occurs in memory, within a set of buffers. If you add new data to a database, the new data is first written to a memory buffer, then written to the transaction log, and finally persisted to a data file via a background process called checkpointing.
When you modify or delete an existing row, SQL Server first reads the data off disk if the row is not already in memory, and then makes the modification. Similarly, if you read data that has not been loaded into a memory buffer, SQL Server must read it from the data files on disk.

Besides I/O, memory is the most important resource for a SQL Server.
In an ideal world, you could ensure that the machine hosting your databases had enough memory to hold all the data in your databases; SQL Server could then simply read all the data off disk into memory buffers when the instance started up, giving you a performance boost.
But the ideal is not always possible (I love that line), and the databases are most likely larger than the memory capacity of any machine, so SQL Server retrieves data from disk only on an as-needed basis.
This brings us to data file design. Since accessing a disk drive is much slower than accessing memory, the design itself can have an impact on performance.

What about the Minimum server memory and the Maximum server memory in SQL Enterprise/Datacenter?

The Minimum server memory option specifies that SQL Server should start with at least the configured amount of allocated memory and not release memory below this value.
The important thing here is to set the option to a reasonable value, so that the OS does not reclaim too much memory from SQL Server.

The Maximum server memory option specifies the maximum amount of memory that SQL Server can allocate, both at startup and while it runs. If you run other applications alongside SQL Server and want to guarantee that they have sufficient memory, set this option to cap SQL Server's allocation.
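The arithmetic behind these two options is simple: leave room for the OS (and anything else on the box), and cap SQL Server at the rest. A minimal Python sketch of that rule of thumb (the function name and the 2GB OS reserve are my own illustrative choices, not official guidance):

```python
def suggested_max_server_memory_mb(physical_mb, os_reserve_mb=2048, other_apps_mb=0):
    """Suggest a 'maximum server memory' value: physical RAM minus what
    the OS and any co-hosted applications should keep for themselves."""
    remaining = physical_mb - os_reserve_mb - other_apps_mb
    if remaining <= 0:
        raise ValueError("not enough physical memory left for SQL Server")
    return remaining

# A 16GB machine also running a 1GB application:
print(suggested_max_server_memory_mb(16384, other_apps_mb=1024))  # 13312
```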

So how should you configure the Dynamic Memory setting on a VM that is running SQL server?
Remember that SQL Server is very memory intensive. You do not want your SQL server to generate more I/O than necessary, which is especially important when your VMs are located on a SAN, where throughput also matters. Again, it points us to the usual 'it depends' statement.
Personally, I would start by monitoring the VM running SQL Server after enabling Dynamic Memory.
If physical RAM on the host is limited, you should at least configure the host to reserve some amount of RAM before the SQL server consumes it all.
One important thing to notice is the memory buffer. Since SQL Server consumes memory on a large scale, you may want to adjust this setting to a lower percentage. Dynamic Memory determines the amount of memory needed by a VM by calculating something called memory pressure. To perform this calculation, Hyper-V looks at the total committed memory of the guest operating system running in the VM and then calculates pressure as the ratio of how much memory the VM wants to how much it has. The amount of memory that Hyper-V then assigns to the VM equals the total committed memory plus some additional memory to be used as a buffer. So if you have a buffer configured at 50% and a VM that needs 20GB of RAM, Hyper-V can make up to 10GB of additional memory available to the VM for use by the file system cache. But Dynamic Memory does not guarantee that the additional amount configured as the buffer value is always assigned to the virtual machine; that depends on the memory pressure exerted on the host by the memory needs of the other running VMs.
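The pressure and buffer arithmetic described above can be sketched in a few lines of Python (a simplification of my own; real Hyper-V also weighs host-wide pressure, so the full buffer is not guaranteed):

```python
def memory_pressure(wanted_mb, assigned_mb):
    """Pressure = how much memory the VM wants vs. how much it has."""
    return wanted_mb / assigned_mb

def assigned_memory_mb(committed_mb, buffer_percent):
    """Target assignment = total committed memory plus the configured
    buffer, expressed as a percentage of committed memory."""
    return committed_mb + committed_mb * buffer_percent / 100.0

# The example from the text: a VM committing 20GB with a 50% buffer
# can be assigned up to 10GB extra, i.e. 30GB in total.
print(assigned_memory_mb(20480, 50))  # 30720.0
```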

But generally speaking, the Dynamic Memory feature is fully supported for guest operating systems running SQL Server, and it works pretty well.
If you have configured the memory settings within SQL Server, this will limit the RAM usage of the VM, provided it is only running SQL Server. When you know your workloads, you also know which VMs you should prioritize.

SQL server may be one solid candidate.


Thursday, February 17, 2011

FAQ - Dynamic Memory and RemoteFX (Windows Server 2008 R2 SP1)

Updated March 5.
Updated February 26.
Updated February 22.
Updated February 20.

I've spent some time in the Hyper-V forum on TechNet and want to present some of the most common questions about the new features in Service Pack 1, which include Dynamic Memory and RemoteFX.

Q: I have installed SP1, but I can't see any information about Dynamic Memory when I create a new VM?
A: The wizard for creating a VM does not present any option for Dynamic Memory. You will find the Dynamic Memory option when you open the VM's settings in Hyper-V Manager afterwards.
Q: I have installed SP1 on my Hyper-V host, but I can't manage Dynamic Memory from my client?
A: To manage your Hyper-V host with the new Dynamic Memory feature from your clients, you also need to upgrade the clients to SP1 (Windows 7 and/or Windows Server 2008 R2).
Q: I've installed SP1 and configured the VMs with dynamic memory. However, they do not seem to be using this new feature?
A: After you have installed SP1 on the Hyper-V host, remember to upgrade the ICs in the VMs. SP1 brings an updated version of the Integration Services that the VMs need to be aware of.
In addition: make sure your guest OS supports this feature.
Q: When will SP1 be available?
A: February 16 on MSDN. February 22 through Windows Update.
Q: When will SCVMM support SP1?
A: SCVMM will not support SP1 until its own SP1 update is released. The official Microsoft word is that the SCVMM 2008 R2 SP1 update will ship 60 days after the general availability of Windows Server 2008 R2 SP1.
Q: Will SP1 also work on Microsoft Hyper-V 2008 R2 server?
A: Yes, Windows Server 2008 R2 and Hyper-V 2008 R2 use the same service pack – so it will work.
Q: I want to use RemoteFX – but how?
A: To use RemoteFX, the Hyper-V server must be running Windows Server 2008 R2 SP1, the VMs must be running Windows 7 Enterprise with SP1 or Windows 7 Ultimate with SP1, and the remote client computer must be running either Windows Server 2008 R2 with SP1 or Windows 7 with SP1.
Q: Are there any HW requirements for RemoteFX?
A: The processor must support SLAT (Second Level Address Translation), and the graphics card's GPU must support DirectX 9.0c and DirectX 10. If there are multiple GPUs, they must be identical cards. For clusters, the source and target nodes must have identical GPUs.
Q: What changes do I need to do with my cluster after installing SP1?
A: Remember to upgrade every node in the cluster with SP1
Q: Can a VM with upgraded IC be used on another Hyper-V host afterwards?
A: Once a VM has been configured to use dynamic memory by installing the latest Integration Components on the guest operating system, the VM will no longer work on pre-SP1 hosts and cannot be moved to such host.
Q: Are there any cluster-benefits with SP1?
A: As far as RAM is concerned, you only need to calculate the amount of physical memory available in the cluster when a node has failed, and ensure that the sum of the startup RAM values for all VMs in the cluster does not exceed that value.
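That capacity check is easy to automate. A hypothetical Python sketch (the function, the per-host reserve, and the "largest node fails" worst case are my own assumptions, not an official sizing formula):

```python
def survives_node_failure(node_ram_mb, vm_startup_ram_mb, host_reserve_mb=2048):
    """Return True if the sum of all VM startup RAM values still fits in
    the cluster after the worst case: losing the node with the most RAM.
    Each surviving parent partition keeps host_reserve_mb for itself."""
    surviving = sorted(node_ram_mb)[:-1]  # drop the largest node
    available = sum(r - host_reserve_mb for r in surviving)
    return sum(vm_startup_ram_mb) <= available

# Three 48GB nodes hosting one hundred VMs with 512MB startup RAM each:
print(survives_node_failure([49152, 49152, 49152], [512] * 100))  # True
```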
Q: If I already have installed the RC of SP1, do I need to uninstall it before installing the RTM of SP1?
A: Yes. There is no in-place upgrade from RC to RTM.
Q: How does Dynamic Memory actually work?
A: The Hyper-V host and the enlightened VM communicate through the VMBus (the host uses a Virtual Service Provider and the guest uses a Virtual Service Consumer) to determine the current memory needs of the VM. If the workload of the VM increases and needs more memory, memory is dynamically added to the VM. If the workload decreases (or other VMs have higher memory priority), memory is dynamically removed from the VM.
Q: When will the Dynamic Memory Priority kick in?
A: When all available physical memory has been allocated to VMs on the host, the Dynamic Memory priority comes into play. It causes memory to shrink on VMs with lower priority, and allocates more memory to the VMs with higher priority.
Q: And what about the memory buffer?
A: Think of this as a memory reserve for the VM. Configuring a buffer of 50% means that up to an additional 50% of the committed memory can be allocated to the VM. The guest OS running in the VM usually uses the additional memory for the system file cache, improving the performance of the OS and applications.
Q: Can I use Dynamic Memory on both x86 and x64 architectures?
A: Yes, it's supported for both. (Windows Server 2003/2003 R2/2008/2008 R2)
Q: What happens with guest OS that does not support Dynamic Memory?
A: If you 'enable' Dynamic Memory on a VM whose guest OS does not support it, the VM will only use the configured startup RAM value.
Q: After I have upgraded the IC within the guest, do I need to reboot before enabling Dynamic Memory?
A: Yes, a restart is needed, but instead of rebooting you could power off the VM after upgrading the ICs, configure Dynamic Memory, and then boot the VM again.
Q: Can I use Dynamic Memory on VMs that are running SQL Server and/or Exchange Server?
A: You can use Dynamic Memory on every VM as long as the guest OS supports the feature.
When it comes to SQL and Exchange, which use memory extensively, you may consider defining a maximum RAM value for those VMs. In addition, remember that you can set similar options within SQL Server itself, better known as 'minimum server memory' and 'maximum server memory'. These also affect Dynamic Memory on the VM. SQL Server has announced full support for Dynamic Memory and recommends setting the memory buffer to 5% for SQL workloads. http://support.microsoft.com/kb/956893
Q: How can I reserve some memory for the parent partition (Hyper-V host)?
A: A new registry key is available after you install SP1 on the Hyper-V host:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Virtualization
Name = MemoryReserve
Setting = amount in MB to reserve for the parent partition (a minimum of 2GB is recommended)
You have to reboot your host to let the changes take effect.
Q: My VMs are currently using only 400MB of RAM, and the setup I intend to run, requires that there is at least 2GB RAM installed. Do I need to power off my VM and assign a higher value of startup RAM?
A: You could assign a higher startup RAM value, increase the memory buffer, or simply launch mspaint and alter the pixels. That forces the VM to allocate more memory, if available on the host. How to: http://kristiannese.blogspot.com/2011/02/windows-server-2008-r2-and-windows-7.html
Q: What is the recommended startup RAM for the supported guest operating system with Dynamic Memory enabled?
A: A startup RAM of 512MB is recommended for Windows Server 2008 R2 Enterprise/Datacenter, Windows 7 Ultimate/Enterprise, Windows Server 2008 Enterprise/Datacenter, and Windows Vista Ultimate/Enterprise. A startup RAM of 128MB is recommended for Windows Server 2003 R2 Enterprise/Datacenter and Windows Server 2003 Enterprise/Datacenter.
Q: I currently have Windows Server 2008 R2 Hyper-V guest OSes set up to run on specific NUMA nodes. How will enabling Dynamic Memory with SP1 impact these affinity settings?
A: With SP1, it's actually possible to control whether NUMA spanning is allowed. You may see a performance impact if VMs use memory from more than one NUMA node, but Hyper-V will try to minimize the spanning, though it will use memory from more than one NUMA node if there is no other way. You can also simply turn off NUMA spanning in the Hyper-V settings in Hyper-V Manager. When you disable NUMA node spanning, you make the system behave like multiple small computers.
Q: I am preparing to install 2008 R2 SP1 on a 2 node Hyper-V cluster, currently running 2008 R2 Datacenter. Is there anything I should be aware of before applying SP1?
A: You should not configure the Dynamic Memory settings on the VMs before every node is updated with SP1. Simply migrate the VMs away from the node you intend to update, and move them back again as you update the second node, the third, and so on. When every node is updated, install the new ICs within the guests, power off your VMs, and configure Dynamic Memory. Remember that if some of your VMs run Windows Server 2008 R2 Web/Standard, you need to install SP1 within the guest to support Dynamic Memory.
Q: After upgrading the guest to the latest IC, do all guests have to be shut down simultaneously before enabling dynamic memory, or can I enable dynamic memory and reboot the guests one at a time?
A: No. The Dynamic Memory setting is individual to each machine, so there is no problem if you decide to enable Dynamic Memory on one machine at a time.
The important thing is not to enable Dynamic Memory on a VM before every node is updated with SP1, in case of a migration/failover.
Q: Why is the Dynamic Memory option available for all of my VMs, even the VMs that are running Windows XP?
A: The Dynamic Memory feature is a global setting in Hyper-V, available at the highest level. Although it is visible for every VM, that does not mean that every guest OS supports it.
Q: What is the correct version of the VMBus after updating the IC?
A: The correct version of the VMBus should be 6.1.7601.17514 after you have installed SP1 on your Hyper-V host, and updated your supported guest OS with the new IC.
Q: How can I monitor Dynamic Memory?
A: There are two new performance counters (perfmon) available after you have installed SP1:
'Hyper-V Dynamic Memory Balancer' and 'Hyper-V Dynamic Memory VM'. In addition, there are three new columns in Hyper-V Manager related to Dynamic Memory: 'Assigned Memory', 'Memory Demand', and 'Memory Status'.
Q: I need to find some information regarding RemoteFX compatibility list – but where?
A: Check this article: http://technet.microsoft.com/en-us/library/ff817602(WS.10).aspx which contains additional links for GPUs that will work with RemoteFX.

I guess this one will be frequently updated during the next days and weeks.

(Do not hesitate to ask us in the Hyper-V TechNet Forum if you did not find the answer to your question in this post.)


Wednesday, February 16, 2011

Windows Server 2008 R2 SP1 RTM.

Just released and available on MSDN (February 16, 2011)

If you already have the RC version installed, you need to uninstall it before installing the RTM version. (No in-place upgrade)

I will upgrade my lab tonight, show you how to upgrade cluster nodes in my next post, and also provide a FAQ regarding SP1 (RemoteFX and Dynamic Memory) based on the activity in the Hyper-V forum at Microsoft TechNet.

Monday, February 14, 2011

VMRole and Azure Connect (still not IaaS)

I've been playing around for a while.
It's blue, and it's called 'Azure'.

I wanted to test the new VMRole feature together with Azure Connect.

Here are the various setups I have tested:

1)      SharePoint installed on VMRole – connected with On-Premise SQL server
2)      Self-Service-Portal (v.1) on VMRole – connected with On-Premise servers
3)      LOB application connected with On-Premise SQL server

Before I get started, I just want to be clear that Windows Azure, even with VMRole and Azure Connect, should not be considered an IaaS solution. It's just a simplified way to move your existing applications to Azure. Think of VMRole as another way to build and deploy your applications.

That is because there are some requirements for this role:

1)      You need to install the Windows Azure Integration Services
2)      You should sysprep your VM before moving it to Azure
3)      Size the VM properly, including RAM and disk size (you will only get one partition up there; the additional partition is dedicated to Azure's own use)
Ok, with those requirements, or limitations, in mind: what can we get out of it?
Oh, I almost forgot. If you have logged into the windows.azure.com portal after migrating one or two VMs, you have probably noticed two options there, called 'Reboot' and 'Reimage'. They do what they say. So remember that you're uploading a so-called 'base image'. This image can be reimaged by Azure, meaning that every modification you make to your VMs will be lost whenever the reimage process starts. That's Azure's way of keeping your VM clean and healthy, and of fixing the instance if it fails.

An instance can be rebooted any number of times. Windows persists all data across reboots. When you reimage a server instance, it is re-created from the image, and any state that you have not explicitly persisted is lost. Data that is written to the local storage resource directory is persisted when a server instance is reimaged; however, this data may be lost in the event of a transient failure in Windows Azure that requires your server instance to be moved to different hardware.

So if we have to consider our roles in Windows Azure as stateless, what would be preferable to run there, from an IT pro's perspective?
-Nothing, really. It's still a Platform-as-a-Service solution.
But with the ‘stateless’ in mind, I tested some ‘stateless’ applications up there.
And this is the part where the skill of an IT-pro is relevant when it comes to Windows Azure.

1.       I deployed a VM in Hyper-V, joined it to my lab domain, and installed the Windows Azure Integration Services and the applications needed.
2.       I uploaded the image to Azure (added the –skipverify option to the cmd)
3.       Created and deployed a new hosted service in Windows Azure, and enabled it for Remote Desktop
After the VM was uploaded, I needed to connect it to my on-premise servers.

1.       I installed Azure Connect on the VM in Azure
2.       I installed Azure Connect on the On-Premise domain controller
3.       I installed Azure Connect on the other necessary servers On-Premise
4.       Created a group in Windows Azure, and linked those VMs.
(To make this possible, you have to log on to the Azure portal and download an Azure Connect token for every server that should connect to the others.)
I should also mention that using Azure Connect made me brush up on my IPv6 skills. Yes, it communicates over IPv6. So to make it real smooth, I set up my lab to support IPv6 (DNS with AAAA records created for the servers intended for this scenario, and rules added to the Windows Firewall).

The SharePoint scenario and LOB scenario had one thing in common. It was really slow.
Since my lab is located in Norway, the closest Azure datacenter is in Amsterdam (I also tried the datacenter in Ireland). The ping latency was about 600-700ms, and for SQL communication, which operates in real time, it appeared to be very slow. But it worked.

Using the Self Service Portal appeared to be faster, since it only initiates the job from the portal, down to the server on-premise. I created a VM from Windows Azure, and was able to connect to it afterwards.

The moral here is that when you get the possibility to Remote Desktop onto a VM with full access, and can even connect it to your network, you really need to know the platform you're dealing with.
You may be tempted to think that this opens the door to using Azure as an extension of your network. To confuse you even more: it could. But only for applications, not infrastructure.


Thursday, February 10, 2011

Windows Server 2008 R2 and Windows 7 SP1 - RTM (how to trick your VM)

SP1 for Windows Server 2008 R2 and Windows 7 is ready to reach every datacenter around the world.

I have been running SP1 for a while now, and it has been a success.
One of the greatest advantages provided by Dynamic Memory is on the capacity planning side.
A server with 48GB of RAM that was using 70-80% of it was reduced to 30-40%. And for the cluster, I only need to calculate the startup RAM in case of failover.

And that's what it's all about: utilizing your server hardware.

I've tested various server setups with Dynamic Memory enabled, to see how it acts.
One of the first challenges was to get past the memory requirements during install
(if you install SCVMM, for example, it requires 2GB of RAM).
There is an easy trick to bypass this one. The answer is mspaint.
Launch mspaint, go to File → Settings, and tweak the pixels so that the VM uses more memory.
Once you have passed the requirement, continue with your setup.

(Screenshots: 1. Before, 2. After)

(This is an old trick. I knew a fellow who actually complained to his boss about his computer's RAM use, and claimed a new computer. He only ran mspaint and no other applications… he got himself a new computer the week after.)
The screenshots were taken when the VM was not aware of anything other than the assigned startup RAM.

Wednesday, February 9, 2011

ServiceBus meets VMBus

A while ago, I announced that I would present a PowerPoint presentation discussing the IT pro and Windows Azure. Since there is so much happening right now, I have decided to postpone it a bit, waiting for SCVMM 2012 (vNext, or whatever it may be called), which will combine everything from Azure, Hyper-V, Failover Clustering, App-V, and the new Server Application Virtualization.

In the meantime, I have done some testing (crash-testing) on Azure.
It's been quite interesting to see how a VM running in a VMRole on Windows Azure can be used together with the rest of our infrastructure.
I have tested various scenarios.

-          SharePoint installed on VMRole connected with our on-premise SQL server
-          SCVMM Self Service Portal on a VMRole connected with our on-premise VMs
-          Our own LOB application on a VMRole connected with our on-premise VMs

I will share my experience with you later.


Saturday, February 5, 2011

A question of service, or just another Hyper-V vs. VMware discussion?

Recently, one of my customers contacted me.
They had some old servers (really old hardware, struggling with backups, stability and so on), and asked me to give them my recommendation in competition with two other vendors.

I was the only one who included Hyper-V; the rest included VMware.
Of course we offered a cheaper solution, because of the licensing in Windows Server 2008 R2 Enterprise (which includes licenses for 4 guest OSes). And we could document our qualifications and expertise in the technology. So we were off and running quite early with this one.

This customer, a government department, already had an agreement with another vendor for hosting their infrastructure (Exchange, Web, AD DS and so on), which was outsourced a year ago, but they had some servers locally that were used by the health department in a so-called 'secure zone'.
They also had a file and print server, and finally a terminal server. The installed OSes were primarily Windows Server 2000 and Windows Server 2003.

The challenge in this scenario was definitely the health department server. The person who installed it years ago was out of reach, and there was no documentation. This made it the ideal candidate for a P2V.
Also, the servers involved were located in three different networks.

And here's why Hyper-V and SCVMM are such beautiful tools:

The Hyper-V server

-          1 Windows Server 2008 R2 SP1 with Hyper-V role enabled
-          6 NICs installed on the host
-          RAID 1 for the OS partition
-          RAID 5 for the DATA partition (and plenty of storage)
-          24 GB RAM
-          And a 'decent' CPU :)
Job plan:
Install a DC and SCVMM as VMs in Hyper-V, connected to each other on a private virtual network (SCVMM requires an Active Directory infrastructure).
Create external virtual networks connected to their respective physical networks, and assign vNICs to the VMM server so it can reach the servers intended for P2V.

So, after my presentation of how we should solve this, the customer went for our services.

After things were ready, we had to contact the vendor who was responsible for their infrastructure.
They reacted very negatively to the entire project, even though the customer required this service for their local servers. Actually, they did not understand how we would manage to get all this done.
I explained how the architecture of Hyper-V works, and that System Center Virtual Machine Manager would take care of the old servers, move them to fresh new hardware, and have a backup solution ready. They seemed a bit confused about how we could move the servers to new hardware. After repeating the word 'Hyper-V' again, it all became clear. They used VMware on their side and did not want Hyper-V under their wings. The reason the old servers were never moved to their datacenter was that their environment did not 'support' x86 servers…
And the staff responsible for the VMware servers did not want another hypervisor to manage.
Anyhow, the customer decided this, and wanted our solution. So after the VMware guys provided us with some IP addresses, the Hyper-V solution is alive and doing fine. And they now have a backup/restore plan that actually works.