February 8, 2023

Windows Pagefile Done Right

Over the years there has been a lot of information on configuring the pagefile, and the guidance on what size and configuration it should be has evolved. Virtual machine RAM allocations keep going higher and higher, with guidance from the likes of Citrix and VMware potentially calling for VMs with 64GB+ of RAM. Paging is an interesting topic: set the pagefile too high and you waste disk space; remove it entirely and that is bad as well, since Windows needs to have a pagefile even though we do not want the system to page. Without a pagefile, Windows will complain about not having enough virtual memory once physical memory is fully allocated. No one wants to page, and systems should be built to minimize paging, i.e., allocating enough memory to let folks run their applications without paging, but systems should still be configured to allow some paging as a "just in case" measure. Lastly, the out-of-box configuration of a system-managed pagefile is also a bad idea.

Setting the pagefile to system managed in an enterprise environment is bad practice. Why? Many monitoring tools track pagefile utilization, typically as a percentage. If the pagefile is system managed and no minimum/maximum are set, those tools can end up reporting excessive pagefile utilization.

The old-school approach of sizing the pagefile at 1.5x the memory results in the configuration below. The usual justification is wanting to capture some sort of dump for Microsoft to "analyze". When was the last time Microsoft successfully analyzed a dump with meaningful results? For me, it has never happened.

Here is what the drives would look like if, for instance, we allocated 64GB of RAM to a VM and used the old-school approach of 1.5x RAM for the pagefile (a 96GB pagefile):

Here is the event log error when the pagefile is eliminated entirely and virtual memory runs low:

What is the happy medium? Set the pagefile to 4096MB (4GB) for both the minimum and maximum on single-user operating systems, and 8192MB (8GB) for both the minimum and maximum on multi-user operating systems.


Why is this the happy medium? It checks all of the boxes: Windows has a pagefile if it needs one, a minidump can be configured/captured/analyzed in Windows if needed, and it is not so large that we feel like we are wasting space.
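For folks who script their builds, here is a minimal sketch of applying the fixed-size recommendation with Python via the well-known PagingFiles registry value; it assumes an elevated prompt, that automatic pagefile management is already turned off, and a reboot afterwards. The 4096 figure is the single-user recommendation above; use 8192 for multi-user operating systems.

```python
# Minimal sketch: pin the pagefile to a fixed 4GB minimum and maximum by
# writing the "PagingFiles" value under Memory Management. Assumes Windows,
# an elevated prompt, and that automatic management is disabled; the change
# takes effect after a reboot.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Format is "path minimum_MB maximum_MB"; the same value pins min and max.
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                      [r"C:\pagefile.sys 4096 4096"])
```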

What about memory dumps? Well, if there is a need for some sort of memory dump to be captured and analyzed, Windows can be configured to generate a minidump. There are plenty of articles/blog posts out on the internet on how to configure Windows to produce a minidump.
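As one example of such a configuration, here is a minimal sketch, again assuming Python and an elevated prompt: the CrashControl\CrashDumpEnabled registry value selects the dump type, and 3 selects the small memory dump (minidump).

```python
# Minimal sketch: configure Windows to write a small memory dump (minidump)
# on a crash. Assumes an elevated prompt; takes effect after a reboot.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 0 = none, 1 = complete, 2 = kernel, 3 = small memory dump (minidump);
    # minidumps land in %SystemRoot%\Minidump by default.
    winreg.SetValueEx(key, "CrashDumpEnabled", 0, winreg.REG_DWORD, 3)
```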

If you have any thoughts, we would like to hear from you in the comments.

Johnny @mrjohnnyma

July 21, 2022

Don't Treat Your Virtual Desktop Security Like Your Physical Desktop Security

 

This blog post complements a previous post I wrote about not using your physical image in your virtual environment. To check that one out, please refer here.

Are you using anti-virus, anti-malware, data loss prevention (DLP) software, or the like on your virtual desktops? Are you treating them the same as you would on a physical desktop? If the answer is yes to both, this is the blog for you. (If you are not using anti-virus on your virtual desktops at all, that is a whole other conversation and a potential can of worms that needs to be addressed.) When running any of the various security tools out there, we need to configure the proper exclusions to ensure everything runs properly and users do not suffer performance degradation because those exclusions are missing. I see it all the time: folks do not implement the proper security tool exclusions in their virtual desktop images, or they configure the exclusions in the various consoles and can show them when asked, but the machines never land in the proper container to actually receive them.

I recently worked with a customer suffering severely slow application launch times in applications such as Outlook, Teams, OneDrive, etc. They were capturing things like the Outlook OST, Teams cache, and OneDrive cache into virtual disks stored on a network share as VHD/VHDX files. When users logged onto a virtual desktop and these virtual disks mounted, they were actively scanned by anti-virus, and when the Outlook, Teams, and OneDrive clients tried to read data from the virtual disks, performance was hampered by the scan.
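As an illustration only, assuming Microsoft Defender is the anti-virus in play (the same idea applies to any vendor's console), here is a minimal sketch of excluding container virtual disks from real-time scanning; the share path is a hypothetical example, and the actual exclusion list should come from your profile-container and AV vendors' published guidance.

```python
# Minimal sketch: add Microsoft Defender exclusions for profile-container
# virtual disks via the Add-MpPreference cmdlet. Assumes Windows with
# Defender, Python, and an elevated prompt. The share path is hypothetical.
import subprocess

commands = [
    # Skip scanning of the container file types themselves.
    "Add-MpPreference -ExclusionExtension 'vhd','vhdx'",
    # Skip the network share where the containers live (hypothetical path).
    r"Add-MpPreference -ExclusionPath '\\fileserver\profiles'",
]
for cmd in commands:
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)
```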

The above applies not only to non-persistent desktops but to fully persistent desktops as well. I know I will get the response of "aren't persistent desktops the same as physical desktops?" The answer is yes and no. Anything written to the disk is fully stateful, and there may or may not be any profile management happening on these desktops, but the core virtual desktop components are still installed to deliver the remote display capability, with the requisite virtual channels to allow for things like audio/video redirection and offloading. Therefore we still need the proper security tool exclusions to ensure everything is as optimized from the security perspective as possible. In addition, modern-day laptops/desktops potentially have a lot more resources in terms of CPU and RAM than what is allocated on the virtual desktop side, so an un-optimized anti-virus/anti-malware utility's impact may not be as noticeable on the physical side.

Long story short, spending a little extra effort to make sure security tools are configured properly will save the headache of dealing with complaints about a bad experience. As I said in the previous blog, the common adage "you can't build a house on a bad foundation" holds very true in this conversation as well.

If you have any thoughts, we would like to hear from you below in the comments.

Johnny @mrjohnnyma

July 22, 2020

Don't Use Your Physical Image in Your Virtual Environment


Are you using SCCM, WDS, or other deployment tools when deploying your virtual desktops or virtual application servers, or have you been asked to? If so, there can be some serious issues with this. I am often asked by folks about deploying Citrix or VMware Horizon images using the same image that is used for physical endpoints. Not only is this a bad idea, it can have performance ramifications and lead to best practices not being followed.

I have always believed that hand-building the operating systems for virtual desktops and application delivery servers is the best approach because it ensures we know what went into the image. I understand the gripes about manually installing applications and the extra work, but the extra work now can save a lot of headaches later, and "this is how we build our images" is not a good enough reason to justify using the same image in the virtual environment. Often, and in most cases, the deployment person and the virtual desktop administrator are not the same person. They build images on physical endpoints or on a completely different hypervisor, never optimize the image, and just let things fly. Since physical endpoints have dedicated hardware, they rarely if ever experience issues from being unoptimized. In the datacenter, a virtual desktop or application delivery server shares host resources with other virtual machines, so we need to optimize things as much as possible.

Here are two examples of recent environments where there were issues with using SCCM to deploy the same image as physical endpoints:

  1. The first was in the medical field: the customer wanted to move from persistent Windows 10 desktops to pooled non-persistent virtual desktops, as the administrative overhead of persistent desktops managed with deployment tools was not feasible. Also, when asked to justify the need for a persistent desktop pool, the response was "that is how we have deployed it before," so there really was no reason to have it. When it came time to build the Windows 10 non-persistent image, the customer completely disregarded my suggestion to build the Windows 10 base image by hand and used WDS to deploy the "standard" image used on physical endpoints. The end result was a known bug in the image in which the Start menu stopped responding to left clicks. The bug also existed on physical endpoints, where it was hacked around by copying a profile over the default profile, but doing the same on the non-persistent desktop image caused Citrix Profile Management to create temp profiles at each login. After countless days of the customer trying to remediate this, the only successful fix was to break out the ISO, install the operating system by hand, and manually install the applications; everything has functioned correctly since.
  2. A second example was a large law firm migrating from an on-prem Citrix environment to VMware Workspace ONE. When it came time to build the images for their RDS linked clone pool, they stressed a need to use an existing task sequence built for Windows 10 and force it to target a Windows Server 2016 operating system. The issue is that the applications were installed before the RDS Session Host role. It has long been a known best practice on RDS Session Host servers to install the RDS Session Host role prior to installing applications, because application settings may need to be captured into the RDS shadow key (see the sketch after this list). In that environment, there are small abnormalities in application behavior even today due to the incorrect installation sequence.
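Since that ordering trips people up, here is a minimal sketch of the correct sequence, assuming Windows Server, Python, and an elevated prompt; the installer path is a hypothetical example, and the reboot after the role install means this would really be two stages of a build rather than one script.

```python
# Minimal sketch: install the RDS Session Host role BEFORE applications so
# per-application settings can be captured under the RDS shadow key.
import subprocess

# Stage 1: add the RD Session Host role (reboot before continuing).
subprocess.run(["powershell", "-NoProfile", "-Command",
                "Install-WindowsFeature RDS-RD-Server"], check=True)

# Stage 2 (after reboot): put the host into install mode, run the
# application installers, then return to execute mode.
subprocess.run(["change", "user", "/install"], check=True)
subprocess.run([r"C:\installers\app-setup.exe", "/quiet"], check=True)  # hypothetical
subprocess.run(["change", "user", "/execute"], check=True)
```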

Long story short, be careful how you approach building the images for your virtual desktops and application delivery servers. As the common adage goes, "you can't build a house on a bad foundation," and doing things incorrectly can lead to a bad user experience.

Johnny @mrjohnnyma