Wednesday, June 25, 2014

Correcting Time Sync Issues on Citrix PVS VDI

Anyone who has been working with PVS for a while has probably run into some type of time synchronization or time skew issue on their VMs at some point. When the VM clock drifts beyond the Kerberos time skew threshold, authentication fails, GPOs are not pulled down, and the VDI fails to register with the desktop controller servers (see screenshot below). Common causes of time skew on PVS VMs include a DST change applied after the master image was last updated (more info on DST here and here) and time synchronization misconfiguration at the hypervisor layer. It is important to have time synchronization set up correctly for the hardware/system clock at the hypervisor layer, as this is passed up to the VM by the guest OS tools by default.
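If you suspect skew, a quick way to measure the offset from inside a streamed VM is with w32tm (a minimal check run from an elevated prompt; DC01 below is a placeholder for one of your own domain controllers):

# List the DCs and their NTP offset relative to this machine
w32tm /monitor
# Sample the offset against a specific DC a few times
w32tm /stripchart /computer:DC01 /samples:5 /dataonly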

An example of this issue causing registration issues with the Citrix Desktop Service (Event ID 1002):


As you can see, this problem causes registration errors on the VM, but it eventually resolves itself once W32Time has had time to synchronize with the domain, and because the Citrix Desktop Service has a short retry interval. That isn't the case with GPOs, however, as the default refresh cycle is normally 90 minutes. After tying this all together I came up with the solution below, which covers whatever is causing the time sync issue. The key is to kick off the necessary actions in the correct order: W32Time sync -> gpupdate -> start Citrix Desktop Service.

Please note that this is not a recommended solution by Citrix and you should do thorough testing in your environment before attempting this.

***I took the idea from here (at the bottom of the article) and took it a step further to resolve the Citrix Desktop Service registration issues and GPO issues. I also found my process worked better by running w32tm /resync in my script, rather than simply making my service dependent on the W32Time service, to ensure time sync has fully completed before running a gpupdate and starting the desktop registration process.

The solution (tested for a WinXP machine):



  • Set Citrix Desktop Service to manual and stop service

  • Create a .BAT file with the following (optional: convert it to an .EXE with third-party software):
--------------------------------------------------------------
w32tm /resync
gpupdate /force
net start workstationagent
--------------------------------------------------------------


  • Create the "Citrix Startup" service:

"c:\program files\windows resource kits\tools\instsrv.exe" "Citrix Startup" "c:\program files\windows resource kits\tools\srvany.exe"

  • Open Regedit and navigate to the following: 
HKLM\SYSTEM\CurrentControlSet\Services\Citrix Startup

Right click Citrix Startup -> add a key called Parameters -> right click Parameters and create a new string value named Application -> fill in the path (e.g. c:\startup.bat, or whatever you named the bat/exe above)

  • Add startup dependencies to the service we created (a consolidated sketch of these setup steps follows below):

sc config "Citrix Startup" depend= netlogon/w32time



Thursday, June 5, 2014

Bulk disable AD users script

I wrote this quick, handy script to disable a bunch of users from a text file. The client I was working with still had a 2003-level domain, so I had to go old school with VBScript :)

Run this on a DC in your environment, with the users listed one per line in a text file named disableList.txt. It outputs to a disableUsers.bat file so you can inspect it before you run it. (A PowerShell equivalent for newer domains follows the script.)

Const ForReading = 1
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile1 = objFSO.OpenTextFile("disableList.txt", ForReading)

'Read in users
Dim ListToProcess()
i = 0
Do Until objFile1.AtEndOfStream
    ReDim Preserve ListToProcess(i)
    ListToProcess(i) = objFile1.ReadLine
    i = i + 1
Loop

objFile1.Close

outFile = "disableUsers.bat"
Set objFile = objFSO.CreateTextFile(outFile, True)

For Each strLine in ListToProcess
    If strLine <> "" Then
        objFile.Write "dsquery user -samid " & strLine & " | dsmod user -disabled yes " & vbCrLf
    End If
Next

objFile.Close
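For what it's worth, on a newer domain (2008 R2 or later, with the ActiveDirectory PowerShell module available) the same job can be done natively. A minimal sketch, assuming the same disableList.txt format of one sAMAccountName per line:

# Disable each account listed in disableList.txt
Import-Module ActiveDirectory
Get-Content .\disableList.txt | Where-Object { $_.Trim() -ne "" } | ForEach-Object { Disable-ADAccount -Identity $_.Trim() }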

Tuesday, June 3, 2014

Implementing SCEP 2012 on Citrix PVS VDI

In this post, I'm going to explain how System Center Endpoint Protection 2012 was implemented on our Citrix PVS VDI machines at a client project I recently worked at. I hadn't found much about this online so I thought I'd share how it was done for us and hopefully it will help someone out!

The key issue with PVS random pooled VDI was the lack of persistence for the virus definitions, so in this solution we move the definitions to a separate disk attached to the VDI. The first step is to ensure you allocate enough space on your write cache drive, or on a separate drive, to accommodate the definitions; in my scenario I found 1 GB to be sufficient for the SCEP files/definitions.


Once you have determined a suitable size for a separate SCEP drive, or decided to combine it with your write cache drive (the route I took), boot up your master image with the drive attached.

First, on the master image(s), ensure the SCCM client push account is added to the machine's local administrators group. Once this is complete, create a device collection and add the master image(s) as resources in SCCM. Additionally, we created a separate device collection for our Citrix VDI images based on OU; this is the collection we target our custom endpoint policies at later on.

Next, here are the steps I took on our XP VDI master image(s):

1. Install the Windows Server 2003 Resource Kit Tools: http://www.microsoft.com/en-us/download/details.aspx?id=17657 - this includes a tool called linkd, which we use to create the directory junction.

2. Create D:\SCEP folder (or wherever your persistent drive is)

3. CD c:\documents and settings\all users\application data\microsoft\

4. "c:\program files\windows resource kits\tools\linkd.exe" "Microsoft Antimalware" "d:\scep"

5. Push/Install SCEP And SCCM client

6. After installation, validate that the d:\scep folder is getting the latest updates (check that the folder size is increasing)

If you are using Windows 7 VDI, follow the steps above with these exceptions:

Skip step 1, for step 3 change the directory to C:\Users\All Users\Microsoft, and for step 4 use mklink /D "Microsoft Antimalware" "d:\scep" (mklink is built in, so the Resource Kit is not needed).

After you have completed that and confirmed SCEP is running properly, you will need to do the following prior to shutting down and publishing your image, in order to have the SCCM client generate appropriate MIFs for each machine (a one-script version of these steps follows the list):

1. Open an elevated PowerShell prompt and stop the SCCM client: net stop ccmexec

2. Followed by: del $env:WINDIR\smscfg.ini (or del %WINDIR%\smscfg.ini from a command prompt)

3. Followed by: Remove-Item -Path HKLM:\Software\Microsoft\SystemCertificates\SMS\Certificates\* -Force or, from a command prompt, powershell -command "Remove-Item -Path HKLM:\Software\Microsoft\SystemCertificates\SMS\Certificates\* -Force"

4. Finally: wmic /namespace:\\root\ccm\invagt path inventoryActionStatus where InventoryActionID="{00000000-0000-0000-0000-000000000001}" DELETE /NOINTERACTIVE
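If you would rather keep those four steps in one place, here is the same sequence as a single PowerShell sketch (run it elevated on the master image just before shutdown; the GUID and paths are the standard SCCM client values from the steps above):

# Reset the SCCM client identity before sealing/publishing the image
Stop-Service ccmexec -Force
Remove-Item "$env:WINDIR\smscfg.ini" -Force -ErrorAction SilentlyContinue
Remove-Item -Path HKLM:\Software\Microsoft\SystemCertificates\SMS\Certificates\* -Force
wmic /namespace:\\root\ccm\invagt path inventoryActionStatus where "InventoryActionID='{00000000-0000-0000-0000-000000000001}'" DELETE /NOINTERACTIVE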

Once this is complete your image is almost ready to be published out. However, I found we needed a small logon script to get the images properly set up on their first boot, so I created the following small batch file and attached it as a logon script via GPO:

mkdir d:\scep
net start msmpsvc

There is probably a more elegant solution to this, such as a scheduled task that runs once prior to shutting down your master image - let me know if you find a better one! Once this is done you are ready to publish out your VDI. On boot you should see the SCEP client showing red in the tray, and it will pick up and start updating virus definitions as defined by your policy. Again, you can validate that D:\SCEP has all the files/folders and is increasing in size once the updates begin. Also, don't forget to apply your antimalware policy with the Citrix recommended file/folder exclusions. We also opted to turn scans off given this being a VDI environment.

Tuesday, April 22, 2014

Export / Import RDS Easy Print Driver

So this was a fun task! I had a Remote Desktop Services 2012 farm setup and for some reason client printers would not get redirected upon launching RemoteApps. After some investigation I noticed that this one particular RDS Session Host did not have the Easy Print driver installed. I tried numerous methods of attempting to install the Easy Print driver with no luck.

However, after digging around in the registry I did figure out one "hackish" method that seemed to do the trick. I'll refer to Host A as the machine with the printer drivers installed and Host B as the machine without them. This process should work for any printer driver, and can be a handy workaround for Windows 8 / Server 2012 driver signing issues. (A consolidated sketch of steps 1-3 follows the list below.)

1) On Host A, export the following registry key to a .reg file:


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\Version-3\Remote Desktop Easy Print


2) Copy the .reg file to Host B and run it to import the key into the registry.

3) Copy the driver files located at C:\Windows\System32\spool\drivers\x64\3 from Host A to Host B.


4) Reboot Host B
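For reference, here is a rough PowerShell sketch of steps 1-3 run from Host A (assumptions: "HostB" is a placeholder host name, C:\Temp exists on both hosts, the admin share on Host B is reachable, and you may want to trim the copy down to just the files your driver actually references):

# Export the Easy Print driver key and copy it, plus the driver payload, over to Host B
reg export "HKLM\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\Version-3\Remote Desktop Easy Print" C:\Temp\easyprint.reg /y
Copy-Item C:\Temp\easyprint.reg \\HostB\C$\Temp\
Copy-Item "C:\Windows\System32\spool\drivers\x64\3\*" "\\HostB\C$\Windows\System32\spool\drivers\x64\3\" -Recurse -Force
# Then on Host B: reg import C:\Temp\easyprint.reg, and reboot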


Extra tips for troubleshooting driver installs:
-Check C:\Windows\inf\setupapi.dev.log for logs regarding driver installs

Custom Microsoft System Center Endpoint Protection 2012 Reporting

In this post I'm going to share how I set up custom infection alerting for SCCM EP 2012 that is used to feed infection data into Splunk. I recently developed this at a company that was switching over from Symantec EP and already had some reporting capability in Splunk. The goal was to recreate the infection reporting functionality they previously got from the ready-made Symantec Endpoint Protection Splunk app.

First, you will need to get a little familiar with the SCCM database, as this is where all the EP infection data is housed.

The main table I was interested in for my solution was dbo.EP_Malware, which logs all the infections. Here is a screenshot showing all the columns available in this table:

The main requirement I had was to log infection alerts in near real time, so I will break down my solution here before presenting it:

1) A SQL "after insert" trigger is set up on the dbo.EP_Malware table in the SCCM database to catch each newly inserted infection entry

2) When a new row is added to EP_Malware, the trigger copies the RecordID of the infection entry into a custom staging table called dbo.SplunkEP along with a timestamp

3) A SQL Agent job is then kicked off that queries this staging table (dbo.SplunkEP) and collects the RecordID (primary key) of the newly inserted EP_Malware row

4) Using that RecordID, it runs a custom query against dbo.EP_Malware and the associated tables in the SCCM DB to collect all the data required for the alert

5) This custom data is written to the event log on the SCCM server, and a Splunk event forwarder agent on the machine then sends the infection data up into Splunk.
Example end result:





OK, enough process, let me get down to the solution!


1) First we need to create the trigger on dbo.EP_Malware. Launch SQL Server Management Studio, connect to your SCCM database, and be sure to edit the USE statement:
USE [YOUR_SCCM_DB_GOES_HERE]
GO

/****** Object:  Trigger [dbo].[SplunkUpdate]    Script Date: 04/22/2014 15:40:50 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:       Matt Shorrosh
-- Create date:
-- Description:
-- =============================================
CREATE TRIGGER [dbo].[SplunkUpdate]
   ON [dbo].[EP_Malware]
   AFTER INSERT
AS
BEGIN
    declare @temp as bigint

    -- Note: assumes single-row inserts into EP_Malware
    select @temp = RecordID from inserted

    Insert Into dbo.SplunkEP values(@temp, GETDATE())

    EXEC msdb..sp_start_job
        @job_name = 'SplunkEPReporting';
END
GO

2) Create the staging table called SplunkEP; again, be sure to edit the USE statement:


USE [YOUR DB]
GO

/****** Object: Table [dbo].[SplunkEP] Script Date: 03/13/2014 08:31:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SplunkEP](
    [ID] [bigint] NOT NULL,
    [Time] [datetime] NOT NULL,
    CONSTRAINT [PK_SplunkEP] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

3) Create SQL Agent Job


Set up a new SQL Agent job called SplunkEPReporting and configure it to run the .ps1 file with the SQL query prepared as shown in step 4.
4) Create the .ps1 file that includes the custom SQL query; be sure to edit the USE statements:

Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

# Grab the most recently staged RecordID
$data1 = invoke-sqlcmd -query "Use [YOUR_SCCM_DB_HERE];
select top 1 id from dbo.SplunkEP order by time desc"
$record = $data1.id

# Pull the full infection detail for that record
$data = invoke-sqlcmd -query "use [YOUR_SCCM_DB_HERE];
Select distinct top 1 v_R_System.Name0 as 'Computer Name',
    dbo.EP_Malware.RecordID,
    dbo.EP_Malware.LastMessageTime,
    dbo.EP_Malware.DetectionTime,
    dbo.EP_Malware.DetectionSource,
    dbo.EP_Malware.ThreatName,
    dbo.EP_Malware.Path,
    dbo.EP_Malware.Process,
    dbo.EP_Malware.ExecutionStatus,
    dbo.EP_ThreatDefaultActions.DefaultAction,
    dbo.EP_Malware.ActionSuccess,
    v_R_System.Distinguished_Name0 as 'Distinguished Name',
    dbo.EP_Malware.PendingActions,
    dbo.Users.FullName as 'UserName',
    STUFF((SELECT '; ' + v_RA_System_IPAddresses.IP_Addresses0
        FROM v_RA_System_IPAddresses
        WHERE v_RA_System_IPAddresses.ResourceID = dbo.EP_Malware.MachineID
        FOR XML PATH('')), 1, 1, '') as 'IP Addresses'
from dbo.EP_Malware inner join v_R_System on dbo.EP_Malware.MachineID = v_R_System.ResourceID
inner join v_RA_System_IPAddresses on dbo.EP_Malware.MachineID = v_RA_System_IPAddresses.ResourceID
inner join dbo.Users on dbo.EP_Malware.UserID = dbo.Users.UserID
inner join dbo.EP_ThreatDefaultActions on dbo.EP_Malware.CleaningAction = dbo.EP_ThreatDefaultActions.DefaultActionID
where RecordID=$record"

# Remove the processed entry from the staging table
$delete = invoke-sqlcmd -query "use [YOUR_SCCM_DB_HERE]; delete from dbo.SplunkEP where id=$record"

# Write the result to the Application event log for the Splunk forwarder to pick up
Write-Eventlog -logname Application -source SplunkEPReporting -eventid 1111 -message ($data | Format-List | Out-String)


5) Create the event log source for your Splunk EP reports

Open an elevated PowerShell prompt and enter the following:
New-EventLog -Source SplunkEPReporting -Logname Application
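Once the source exists, a quick sanity check (not part of the original setup, just a suggestion) is to write a dummy event and confirm it shows up in the Application log and then in Splunk:

# Write a test event using the new source
Write-EventLog -LogName Application -Source SplunkEPReporting -EventId 1111 -Message "SplunkEPReporting test event"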

Tuesday, February 25, 2014

Help!!!! Troubleshooting accessing sites with AAMs, Host Name Site Collections, IIS Bindings/Protocols/Ports/Host Headers

I wrote up this post to help clarify one of the concepts I've seen trip up SharePoint admins the most: how IIS and SharePoint handle and process incoming requests. Specifically, this covers IIS site IP bindings, host headers, ports, and protocols, SharePoint AAM settings, and host name based site collections. Once the general processing flow is understood, troubleshooting issues with accessing sites becomes a lot easier, which is the goal of this post. I figure the easiest way to explain this is with scenarios, breaking each one down and explaining the outcome.
 
KEY CONCEPTS TO UNDERSTAND FIRST
  • HTTP/HTTPS requests are processed at the IIS level first, meaning the incoming request must match a site's configured protocol/IP/port/host header. Two IIS sites can never run at the same time with all four settings identical; at least one of the four must differ.
  • After the request passes the IIS level and SharePoint's site code starts executing (assuming it's a SharePoint site...), SharePoint matches the requested host header against the currently configured AAMs and picks up the authentication provider assigned to that zone. SharePoint then prompts for credentials via that provider and validates the login against the site you are attempting to access. The incoming URL/host header must match a configured alternate access mapping and you must be authenticated for the site, or you will receive 401 UNAUTHORIZED.
  • SharePoint automatically creates an AAM for 1) the host header you manually configured, or 2) if the host header was blank when you created your web app, for the http://servername:port the web app was created on.
  • Host named site collections require an IIS site configured with a blank host header. AAMs are auto-created for each site collection in the Default zone when using New-SPSite with the -HostHeaderWebApplication parameter.
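When troubleshooting, it also helps to see exactly which AAMs SharePoint will try to match against. A quick sketch (the web application URL below is a placeholder; whether the new mapping lands as a public or internal URL depends on your zone setup, so verify it in Central Administration afterwards):

Add-PSSnapin Microsoft.SharePoint.PowerShell
# List the AAMs configured for a web application
Get-SPAlternateURL -WebApplication http://myrootsite.com
# Add site1.com as an additional mapping on the Default zone (the setup Scenario 2 below relies on)
New-SPAlternateURL -Url http://site1.com -WebApplication http://myrootsite.com -Zone Default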
SCENARIOS ENVIRONMENT 
 
For simplicity's sake I will only be working with HTTP requests below (no HTTPS). We are assuming all URLs I type in the post below go through a load balancer (just for fun) that forwards the request to our single SharePoint server with an IP of 192.168.27.1. We are also assuming our DNS works just fine. :)
 
Scenario 1: Web Application created, Blank Host Header, * IP Bindings, Port 80, no additional AAM configured (basic host named site collection web app structure set up; no sites created, only the root site).
 
For this scenario I type http://site1.com in the browser, which takes me to our VIP, and the load balancer forwards the request to the SharePoint server. Following that I am prompted to log in. So I am actually hitting the site: the load balancer sent me to the IIS server and it is processing my connection. I type in my credentials, however I receive 401 Unauthorized. Why is this?
 
First, at the IIS site level, IIS sees a host header come in for site1.com directed at 192.168.27.1:80. The site I created accepts any host header, directed at any IP configured on the IIS/SharePoint server, and matches port 80, so it begins launching SharePoint's web app. This is where SharePoint examines the HTTP header and sees that you are trying to access site1.com, which is not associated with the web app at all, so it rejects your login.
 
Scenario 2: Web Application created, Blank Host Header, * IP Bindings, Port 80, site1.com additional AAM configured for this web application (Host named site collection created).
 
For this scenario I type http://site1.com in the browser, which takes me to the VIP and the load balancer, and I am prompted to log in. So I am actually hitting the site and the IIS server is processing my connection. I type in my credentials and I am able to log in. Why is this?
 
First, at the IIS site level, IIS sees a host header come in for site1.com directed at 192.168.27.1:80. The site I created accepts any host header, directed at any IP configured on the IIS server, and matches port 80, so it begins launching SharePoint's site files. This is where SharePoint examines the HTTP header and sees that you are accessing site1.com, which is a configured alternate access mapping for this site. It recognizes your credentials via the authentication provider and accepts the login.
 
This is an important concept to note because this is how host named site collections work: each site collection essentially creates another AAM for the web application. Host named site collections require a blank host header so the IIS site can accept any hostname coming in, and SharePoint then does the site direction on its side.
 
Scenario 3: Web Application created, Blank Host Header, 192.168.27.1 IP Bound (manually configured in IIS), Port 80, no additional AAM configured.
 
For this scenario I type http://site1.com in the browser, which takes me to the VIP and the load balancer, and I am prompted to log in. So I am actually hitting the site, the load balancer sent me to the IIS/SharePoint server, and it is processing my connection. I type in my credentials and I am unable to log in. Why is this?
 
First, at the IIS site level, IIS sees a host header come in for site1.com directed at 192.168.27.1:80. The site I created accepts any host header, is bound to 192.168.27.1 on the IIS server (which matches the IP the load balancer directed me to), and matches port 80, so it begins launching SharePoint's site files. This is where SharePoint examines the HTTP header and sees that you are accessing site1.com, which is not configured as an AAM for this site, so it returns unauthorized and rejects your connection.
 
Scenario 4: Web Application (simple path based) created, site1.com as the configured Host Header, 192.168.27.1 IP Bound (manually configured), Port 80, no additional AAM configured besides auto generated.
 
For this scenario I type http://site1.com in the browser, which takes me to the VIP and the load balancer, and I am prompted to log in. So I am actually hitting the site and the load balancer sent me to the IIS server, which is processing my connection. I type in my credentials and I am able to log in. Why is this?
 
First, at the IIS site level, IIS sees a host header come in for site1.com directed at 192.168.27.1:80. The site I created accepts ONLY site1.com, is bound to 192.168.27.1 on the IIS server (which matches the IP the load balancer directed me to), and matches port 80, so it begins launching SharePoint's site files. This is where SharePoint examines the HTTP header and sees that you are accessing site1.com, which matches the AUTOMATICALLY generated AAM.
 
The important concept here is that my request matched at the IIS level and also matched the AAM criteria. It is also important to understand that when you specify a host header while creating a web application, an AAM for that URL is automatically generated.
 
Summary: I know this doesn't cover every scenario, however I hope this helps you better understand the process flow of requests to SharePoint/IIS sites.

Quick guide to Implementing Host Name Site Collections in SharePoint 2013

In this post I am going to explain a little bit about the new site collection model in SharePoint 2013 called host named site collections, and how to implement it via PowerShell (which is currently the only supported method).

First off, what are host named site collections? Host named site collections are simply site collections created based on unique FQDNs, instead of the previous model of path based site collections hanging off a single root URL. This provides several benefits which I won't go into in detail, but the biggest is scalability (e.g. I can now host multiple sites under one web application, i.e. one w3wp.exe process, instead of creating new web applications for new FQDN sites).

As an example, with host named site collections I can now create several SharePoint sites with the following names all under one web application (one w3wp.exe process in IIS):
  • www.mattsharepoint.com
  • sharepoint.hostnamedsitecollection.com
  • hnsc.org
This is cool, right? So how do we go about doing this? Below I've created an example script that you can follow to create two host named site collections. Hope this helps you get started!
#"Add SharePoint Cmdlets"
add-pssnapin microsoft.sharepoint.powershell


# Web App Variables
$WebAppDefault = "SharePoint - HSNC Example"
$Port = "80"
$AppPool = "HSNCAppPool"
$Account = "domain\svc-apppoolaccount"

# Root Site Variables
$RootHHDefault = "myrootsite.com"
$RootURLDefault = "http://myrootsite.com"
$Owner = "domain\svc-farmaccount"
$RootDB = "RootDB"
$Lang = "1033"
$Template = "blankinternetcontainer#0"

# HNSC Site Variables
$HNSCSITE1 = "http://hnsc1.com"
$HNSCSITE2 = "http://hnsc2.com"


# Create Web App
New-SPWebApplication -Name $WebAppDefault -hostHeader $RootHHDefault -Port $port -ApplicationPool $AppPool -ApplicationPoolAccount (Get-SPManagedAccount $Account) -AuthenticationProvider (New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication) -DatabaseName $RootDB -AllowAnonymousAccess
echo "Web App created"

# Create Root Site Collection 
New-SPSite $RootURLDefault -Name 'Root Site' -Description 'External Root Site Collection' -OwnerAlias $Owner -language $Lang -Template $Template
echo "Root Site Collection created"

# Create HNSC 1
New-SPSite $HNSCSITE1 -HostHeaderWebApplication (get-spwebapplication $RootURLDefault) -Name 'Site 1' -Description 'HNSC Site1' -OwnerAlias $Owner -language $Lang -Template $template
echo "HNSC 1 Site Collection created"

# Create HNSC 2
New-SPSite $HNSCSITE2 -HostHeaderWebApplication (get-spwebapplication $RootURLDefault) -Name 'Site 2' -Description 'HNSC Site2' -OwnerAlias $Owner -language $Lang -Template $template
echo "HNSC 2 Site Collection created"



 

PowerShell Script to backup files based on particular date

Here is a nice short PowerShell script I wrote that can help system admins back up files based on their modified date. The script recurses the directory you want to scan, finds files modified after a particular date, and exports their paths to a text file. The second line parses that text file and removes all blank lines so it is cleaned up for 7z, which is then used to create a compressed archive. Feel free to drop this into your scheduled tasks and test it out for your backups!
Get-ChildItem -Path E:\FilesToBackUp -Recurse | Where{$_.LastWriteTime -gt (get-date).AddDays(-1)} | Where{$_.PSIsContainer -ne $true} | Select FullName | format-table -HideTableHeaders | out-file E:\file_backup.txt
Select-String -Pattern "\w" -Path E:\file_backup.txt | ForEach-Object { $_.line} | Set-Content -Path E:\file_backup_cleaned.txt

# Use 7z version 9.25 alpha or newer; assumes 7z.exe is on the PATH
# Build the Daily<MM-dd-yyyy>.7z archive name with Get-Date since this runs in PowerShell
& "7z" a ("G:\BackupLocation\Daily" + (Get-Date -Format "MM-dd-yyyy") + ".7z") "@E:\file_backup_cleaned.txt" -spf
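To run this on a schedule, something like the following registers it as a daily task (a sketch; the script path E:\Scripts\DailyBackup.ps1 and the 1:00 AM start time are placeholders to adjust):

# Register the backup script as a daily scheduled task running as SYSTEM
schtasks /Create /TN "DailyFileBackup" /TR "powershell.exe -ExecutionPolicy Bypass -File E:\Scripts\DailyBackup.ps1" /SC DAILY /ST 01:00 /RU SYSTEM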