February 8, 2023

Windows Pagefile Done Right

Over the years there has been a lot of information on configuring the pagefile, and the guidance on what size and configuration it should be has evolved. Virtual machine RAM allocations keep going higher and higher, with guidance from the likes of Citrix and VMware to potentially build VMs with 64GB+ of RAM. Paging is a very interesting topic: setting the pagefile size too high wastes disk space, while removing the pagefile entirely is also bad because Windows needs to have a pagefile even though we do not want the system to page. Not having a pagefile means Windows will complain about not having enough virtual memory once physical memory is fully allocated. No one wants to page, and systems should be built to minimize paging, i.e. allocating enough memory to let folks operate their applications without paging, but systems should still be configured to allow some amount of paging as a "just in case" measure. Lastly, the out-of-box configuration of a system-managed pagefile is also a bad idea.

Setting the pagefile to system managed in an enterprise type of environment is a bad practice. The reason why? Many monitoring tools track pagefile utilization, and it is typically tracked as a percentage. If the pagefile is set to system managed and the minimum/maximum are not set, monitoring tools can end up reporting excessive pagefile utilization.
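To make the percentage point concrete, here is a rough sketch of the kind of calculation a monitoring agent performs. It assumes the third-party psutil library, which is purely my choice for illustration; real monitoring products do their own collection, but the math is the same, and a small or moving system-managed pagefile makes that percentage jumpy.

```python
# Rough sketch of how a monitoring agent might report pagefile utilization as a
# percentage (assumes the third-party psutil library; on Windows, swap_memory()
# roughly reflects the pagefile). With a system-managed pagefile the total can
# change over time, which is part of what makes percentage-based alerting noisy.
import psutil

swap = psutil.swap_memory()
print(f"Pagefile: {swap.used / 2**20:.0f}MB used of {swap.total / 2**20:.0f}MB "
      f"({swap.percent:.1f}% utilized)")
```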

The old school approach of setting the pagefile to 1.5x the size of memory results in the configuration shown below. We will hear a lot of complaints about wanting to capture some sort of dump for Microsoft to "analyze". When was the last time Microsoft successfully analyzed a dump with meaningful results? For me it has never happened.

Here is what the drives would look like if we, for instance, allocated 64GB of RAM to a VM and used the old school approach of 1.5x RAM for the pagefile (a 96GB pagefile):

Here is the error in the event log when the pagefile is completely eliminated and virtual memory runs low:

What is the happy medium? Setting the pagefile to 4096MB/4GB for both the minimum and maximum on single-user operating systems, and 8192MB/8GB for both the minimum and maximum on multi-user operating systems.
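For reference, here is a minimal sketch of how that fixed sizing could be set programmatically. The 4096MB value comes from the single-user recommendation above (use 8192 for multi-user systems); the C:\pagefile.sys location is an assumption on my part, and the same setting can of course be made through System Properties or Group Policy instead.

```python
# Minimal sketch: pin the pagefile to a fixed size via the registry.
# Assumes the pagefile lives at C:\pagefile.sys and uses the 4096MB single-user
# sizing discussed above (swap in 8192 for multi-user operating systems).
# Requires administrative rights and a reboot to take effect.
import winreg

PAGEFILE_PATH = r"C:\pagefile.sys"  # assumption: pagefile on the system drive
SIZE_MB = 4096                      # 4096 for single-user, 8192 for multi-user

key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # Format is "<path> <initial size MB> <maximum size MB>"; equal values pin the size.
    winreg.SetValueEx(
        key,
        "PagingFiles",
        0,
        winreg.REG_MULTI_SZ,
        [f"{PAGEFILE_PATH} {SIZE_MB} {SIZE_MB}"],
    )

print(f"Pagefile pinned to {SIZE_MB}MB min/max; reboot for the change to apply.")
```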


Why is this the happy medium? It checks all of the boxes: it allows Windows to have a pagefile if needed, it allows a minidump to be configured/captured/analyzed in Windows if needed, and it is not so large that we feel like we are wasting space.

What about memory dumps? Well, if there is a need for some sort of memory dump to be captured and analyzed, Windows can be configured to generate a minidump. There are plenty of articles/blog posts out on the internet on how to configure Windows to produce a minidump.
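For completeness, here is a minimal sketch of one way to do that: the crash dump type is controlled by the CrashDumpEnabled value under the CrashControl registry key, where a value of 3 selects the small memory dump (minidump). The same option is available in the Startup and Recovery dialog under System Properties.

```python
# Minimal sketch: configure Windows to write a small memory dump (minidump) on crash.
# CrashDumpEnabled = 3 selects "Small memory dump"; minidumps land in %SystemRoot%\Minidump
# by default. Requires administrative rights; the GUI equivalent is
# System Properties > Advanced > Startup and Recovery.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "CrashDumpEnabled", 0, winreg.REG_DWORD, 3)

print("Crash dump type set to small memory dump (minidump).")
```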

If you have any thoughts, we would like to hear from you in the comments.

Johnny @mrjohnnyma

July 21, 2022

Don't Treat Your Virtual Desktop Security Like Your Physical Desktop Security

 

This blog post complements a previous blog post I wrote a while ago about not using your physical image in your virtual environment. To check that one out, please refer here.

Are you using anti-virus, anti-malware, data loss prevention (DLP) software or the like on your virtual desktops? Are you treating them the same as you would on a physical desktop? If the answer was yes to both, this is the blog post for you. If you are not using anti-virus on your virtual desktops at all, that is a whole other conversation and a potential can of worms that needs to be addressed. When running any of the various security tools out there, we need to configure the proper exclusions to ensure everything runs properly and users do not suffer performance degradation because those exclusions are missing. I see this all of the time: folks are not implementing the proper security tool exclusions in their virtual desktop images, or they configure the exclusions in the various consoles and can show them when asked, but the machines are not landing in the proper container to actually receive the exclusions.

I was recently working with a customer that was suffering severely slow/long application launch times in applications such as Outlook, Teams, OneDrive, etc. Upon examination, they were capturing things like the Outlook OST, Teams cache and OneDrive cache into virtual disks stored on a network share as VHD/VHDX files. When users logged onto a virtual desktop and these virtual disks mounted, the disks were actively scanned by anti-virus, and when the Outlook, Teams and OneDrive clients tried to read the data on the virtual disks, performance was hampered because of the scan.
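As an illustration only, here is a rough sketch of scripting a couple of container-related exclusions, assuming Microsoft Defender is the anti-virus in play and that the VHD/VHDX containers live on a hypothetical \\fileserver\profiles share. The authoritative exclusion list should always come from your security and profile-container vendors' published guidance, and third-party products have their own consoles and policies for this.

```python
# Rough sketch: add a few VHD/VHDX container-related exclusions to Microsoft Defender
# by shelling out to PowerShell's Add-MpPreference cmdlet. Assumes Defender is the AV
# in use and that \\fileserver\profiles is a hypothetical share hosting the containers;
# follow your AV and container vendors' published exclusion guidance for the real list.
import subprocess

exclusion_extensions = ["vhd", "vhdx"]
exclusion_paths = [r"\\fileserver\profiles"]  # hypothetical container share

for ext in exclusion_extensions:
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Add-MpPreference -ExclusionExtension '{ext}'"],
        check=True,
    )

for path in exclusion_paths:
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Add-MpPreference -ExclusionPath '{path}'"],
        check=True,
    )

print("Defender exclusions added for VHD/VHDX containers.")
```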

The above applies not only to non-persistent desktops but to fully persistent desktops as well. I know I will get the response of "aren't persistent desktops the same as physical desktops?" The answer is yes and no. While anything that gets written to the disk is fully stateful, and there may or may not be any profile management happening on these desktops, the core virtual desktop components are still installed to deliver the remote display capability, with the requisite virtual channels to allow for things like audio/video redirection and offloading. Therefore we still need the proper security tool exclusions to keep everything as optimized as possible from the security perspective. In addition, modern-day laptops/desktops potentially have a lot more resources in terms of CPU and RAM than what is allocated on the virtual desktop side, so an un-optimized anti-virus/anti-malware utility's impact on the physical side may not be as noticeable.

Long story short, spending a little bit of extra effort to make sure security tools are configured properly will save the headaches of dealing with complaints about a bad experience. Just as I said in the previous blog post, the common adage "you can't build a house on a bad foundation" holds very true in this conversation as well.

If you have any thoughts, we would like to hear from you below in the comments.

Johnny @mrjohnnyma

April 28, 2021

I Was a Panelist Discussing Challenges of a Hybrid Workforce with Citrix

Introduction:

For those that are not aware, I recently made the move to join Liquidware as a Sales Engineer and subsequently moved away from my long tenure in consulting; I am grateful for all of the customers and companies that I worked with along the way. Formally getting into Sales Engineering has always been a goal of mine, and things have been great so far. In my role as a Sales Engineer, I am often asked to present and discuss not only Liquidware's products but also past experiences I have had implementing virtualization with customers. Recently I was tapped to speak on Liquidware's Unplugged webinar about The Challenges of a Hybrid Workforce with Calvin Hsu, V.P. of Product Management at Citrix, and Jason Smith, V.P. of Product Marketing at Liquidware.

For those of you that are not aware, Liquidware's Unplugged webinar series is a set of interactive webinars built exclusively for the end-user computing (EUC) community. More information on Unplugged can be found here.

Recording:

Here is the recording of the Unplugged Session with Calvin Hsu.


Hopefully this webinar is the first of many that I will be able to participate in going forward. Please use the link above to stay up to date on future Unplugged sessions.

We would like to hear from you, so feel free to drop us a note if you have any questions.

Johnny @mrjohnnyma