
Managing DNS using PowerShell

One of the things I have found sorely lacking in PowerShell 2.0 is a good way to manage DNS.  I’m hoping there is a fix for this in Windows Server 2012 and PowerShell 3.0, but for now I’ve had to search for alternatives.  The Scripting Guys have published an excellent article on how to manage DNS through a combination of PowerShell and WMI.  You can find it here.  It’s an interesting approach, and it certainly works, but I personally found it overly complicated.  Instead, I found that leveraging PowerShell’s ability to incorporate pre-existing tools into my scripts presented a more viable solution.  Specifically, I’ve combined DNSCMD with PowerShell.  Here’s how it works.

DNSCMD is a well-documented and very powerful tool for managing a Windows DNS server from the command line, and it can be called directly from within PowerShell.  So let’s say I want a script that will set up new primary DNS zones for me based on a standard we’ve developed.  We could set up AD-integrated zones this way as well, but in my situation I was dealing with primary zones, so that’s what I’m sticking with here.  If you want to use AD-integrated zones instead, the syntax is easy to modify.  So, let’s say I want to set up a new zone for sweeneyops.com.  I could simply run my script like this:

./create-primarydnszone.ps1 sweeneyops.com

… and voila! The zone would be created as per my predefined standards. What are my standards?  I guess we should probably define those now:

  • I want to create a primary DNS zone.
  • I want to use the default Windows DNS zone naming convention (e.g. if my zone is sweeneyops.lab then my zone file would be sweeneyops.lab.dns).
  • My server host name is Server1, but it is recognized publicly as ns1.sweeneyops.lab.  This server only hosts primary zones.
  • I have a second server called Server2.  It is publicly recognized as ns2.sweeneyops.lab.  It will host only secondary copies of the zone, and I want it listed as a name server.
  • The host name (Server1) will be added to the name servers list automatically.  I want to remove it.
  • I want the zone contact to be me@sweeneyops.lab.  In DNS zone syntax that equates to me.sweeneyops.lab.
  • I want to use the Windows default settings for the start of authority.
  • I want to allow zone transfers, but only for servers listed in the name servers tab of the DNS zone properties.
  • I want to create an A record for WWW and an empty A record (DNSCMD thinks of this as an @ record) and have both point to 192.168.1.1.

Here’s what the code looks like (I apologize if the line breaks are confusing, but I think it reads easily enough):

PARAM($domain)

# get the local host name
$localhost = $env:computername

# create the dns zone
dnscmd $localhost /zoneadd $domain /primary /file "$domain.dns"
   
# update Start of Authority (note: need to use single quotes around @ or will error out)
dnscmd $localhost /recordadd $domain '@' SOA ns1.sweeneyops.lab me.sweeneyops.lab 1\ 3600 600 86400 3600

# add authoritative name servers
dnscmd $localhost /recordadd $domain '@' NS ns1.sweeneyops.lab
dnscmd $localhost /recordadd $domain '@' NS ns2.sweeneyops.lab
   
# Remove the host name from the name servers list
dnscmd $localhost /recorddelete $domain '@' NS $localhost /f

# make sure zone transfers are allowed, but only for servers in the name server tab, and configure to notify
dnscmd $localhost /zoneresetsecondaries $domain /securens /notify

# create default records for the zone
dnscmd $localhost /recordadd $domain '@' A 192.168.1.1
dnscmd $localhost /recordadd $domain www A 192.168.1.1

Now one thing worth noting here is the entry for creating the Start of Authority (SOA) record.  I said I wanted to use the Windows defaults for setting up the SOA, but the DNSCMD syntax requires that you provide values for the various SOA components.  In our script, that looks like this:

dnscmd $localhost /recordadd $domain '@' SOA ns1.sweeneyops.lab me.sweeneyops.lab 1\ 3600 600 86400 3600

If we were to manually enter this on Server1 to create a zone called sweeneyops.com then it would look like this:

dnscmd Server1 /recordadd sweeneyops.com '@' SOA ns1.sweeneyops.lab me.sweeneyops.lab 1\ 3600 600 86400 3600

So what’s up with that bit at the end where all the numbers start?  Well, if you pull up NSLOOKUP, set your query type to SOA, and look up a zone after creating it through the DNS GUI, you will see the following returned in the entry:

refresh = 3600 (1 hour)
retry   = 600 (10 mins)
expire  = 86400 (1 day)
default TTL = 3600 (1 hour)

These are the default Refresh, Retry, Expire, and Minimum TTL values defined by Windows, which I said I wanted to use.  That’s what the “3600 600 86400 3600” part is about.  The “1\” is a serial number value required by the DNSCMD syntax.  Normally, when using the GUI, Windows would define this value automatically.  We still want it to do so, and I found that using “1\” causes it to do just that.  Trial and error for me, benefit for you.  I haven’t tried defining a specific serial number, but I suppose you could do that and possibly pass it in as another parameter.
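If you wanted to experiment with that, the tweak might look something like this.  Consider it a sketch; as I said, I haven’t actually tried it, and the parameter name is my own invention:

# accept an optional serial number; the "1\" default lets Windows keep managing it
PARAM($domain, $serial = "1\")

dnscmd $localhost /recordadd $domain '@' SOA ns1.sweeneyops.lab me.sweeneyops.lab $serial 3600 600 86400 3600

For now, however, let’s move on.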

I run this script on Server1 (ns1.sweeneyops.lab), pass in my domain name, and it creates my primary zone.  But what about my secondary zones on Server2 (ns2.sweeneyops.lab)?  For those, I need a slightly different script.  The new script is going to need the IP address of ns1.sweeneyops.lab, so for the sake of argument we’ll say that it’s 192.168.1.2.  The code is much simpler: it takes the domain name and the IP address of the first name server, then uses that information to create the secondary zone.  All of the records were defined on the primary server, so there’s really not much left to do.  The code, which would be run on Server2, looks like this:

PARAM([string]$domain, [string]$IP)

# get the local host name
$localhost = $env:computername

dnscmd $localhost /zoneadd $domain /Secondary $IP /file "$domain.dns"
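Called with our example values, the invocation would look like this (the script name is just whatever you saved it as; I’ll assume create-secondarydnszone.ps1):

./create-secondarydnszone.ps1 sweeneyops.com 192.168.1.2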

I could probably shorten it down to a single line, but I felt that it read more easily this way.  Feel free to modify it however you see fit.  So, there you have it: simplified scripting of DNS through PowerShell.  Enjoy!


Configuring PowerShell to run as a Scheduled Task in Server 2008 R2

When configuring a PowerShell script to run as a scheduled task in Windows Server 2008 R2, there are a few things that you need to pay special attention to if you want to make sure that it actually runs.  Specifically, each Action in the Actions section of the Scheduled task should be configured as follows:

Action:

Start a Program

Program/Script:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Add arguments (optional):

-noninteractive -nologo c:\scriptpath\script.ps1

Start in (optional):

c:\scriptpath

There are a few things to note here:

  1. Do not place the script itself in the Program/Script field.  Instead, you are making a call to PowerShell itself and then passing in the script name as a parameter.
  2. The “-noninteractive” switch tells PowerShell that it should not present an interactive prompt to the user.
  3. The “-nologo” switch stops PowerShell from trying to display the copyright banner.  Frankly, I’m not sure this is critical, but I’ve always done it.
  4. Try to avoid placing your script in a path that contains spaces.  This means you don’t have to screw around with quotation marks.  This is not so much a road-block as a hurdle, but I personally like to keep things as simple as possible and I find this helps.
  5. If your script reads from or outputs to a file then make sure you don’t forget the Start In option. I like to specify this location as being the same location as my script file.  The reason I do this is that it allows me to write a script and simply specify the name of the file to read from or write to without having to specify the path in the script.  This is a matter of preference, but I find that it helps me keep things simple.
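As an aside, if you’d rather script the task creation than click through the GUI, schtasks.exe can create the same task from the command line.  Here’s a sketch, with a made-up task name and a nightly 2:00 AM schedule:

schtasks /create /tn "NightlyScript" /tr "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noninteractive -nologo c:\scriptpath\script.ps1" /sc daily /st 02:00 /ru SYSTEM

One caveat: as far as I know schtasks has no equivalent of the Start In field, so a script registered this way should use full paths for any files it reads or writes.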

Configuring a new disk with DISKPART

I found myself struggling to remember the order of commands to use when setting up a new disk in Windows using DISKPART.  Specifically, I had just added a new virtual disk to the server and wanted to configure a new volume on it.  Here is how to complete the process:

First, let’s see if the computer sees the disk.

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB      0 B
  Disk 1    Offline          40 GB    40 GB

Sure enough, it’s showing up as Disk 1, but it is offline.  To correct that, we must select the disk and then set its status to Online:

DISKPART> select Disk 1

Disk 1 is now the selected disk.

DISKPART> online disk

DiskPart successfully onlined the selected disk.

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           40 GB      0 B
* Disk 1    Online           40 GB    40 GB

Good stuff.  The disk is now online.  Let’s take a look at the details:

DISKPART> detail disk

VMware, VMware Virtual S SCSI Disk Device
Disk ID: 00000000
Type   : SAS
Status : Online
Path   : 0
Target : 1
LUN ID : 0
Location Path : PCIROOT(0)#PCI(1500)#PCI(0000)#SAS(P00T01L00)
Current Read-only State : Yes
Read-only  : Yes
Boot Disk  : No
Pagefile Disk  : No
Hibernation File Disk  : No
Crashdump Disk  : No
Clustered Disk  : No

There are no volumes.

Everything looks good, but I can see that the disk is currently set to read-only.  I’m not going to be able to make any changes to the disk unless I first correct that:

DISKPART> attributes disk clear readonly

Disk attributes cleared successfully.

To visually confirm I could use DETAIL DISK again to see all the disk properties, or I can just view a subset of attributes:

DISKPART> attributes disk
Current Read-only State : No
Read-only  : No
Boot Disk  : No
Pagefile Disk  : No
Hibernation File Disk  : No
Crashdump Disk  : No
Clustered Disk  : No

Now that I’ve confirmed that the disk is writeable I need to go ahead and set up my primary partition:

DISKPART> create partition primary

DiskPart succeeded in creating the specified partition.

If I check the disk details again I can see that there is now a raw primary partition. Notice the asterisk indicating that the volume is already selected:

DISKPART> detail disk

VMware, VMware Virtual S SCSI Disk Device
Disk ID: 7C54406E
Type   : SAS
Status : Online
Path   : 0
Target : 1
LUN ID : 0
Location Path : PCIROOT(0)#PCI(1500)#PCI(0000)#SAS(P00T01L00)
Current Read-only State : No
Read-only  : No
Boot Disk  : No
Pagefile Disk  : No
Hibernation File Disk  : No
Crashdump Disk  : No
Clustered Disk  : No

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
* Volume 2                      RAW    Partition     39 GB  Healthy

I can also see just the volume information another way.  Once again, the asterisk indicates that the volume is already selected:

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     D                       DVD-ROM         0 B  No Media
  Volume 1     C                NTFS   Partition     39 GB  Healthy    System
* Volume 2                      RAW    Partition     39 GB  Healthy

Now I need to format my volume.  I want to use NTFS as the file system and perform a quick format:

DISKPART> format FS=NTFS quick

  100 percent completed

DiskPart successfully formatted the volume.

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     D                       DVD-ROM         0 B  No Media
  Volume 1     C                NTFS   Partition     39 GB  Healthy    System
* Volume 2                      NTFS   Partition     39 GB  Healthy

That went over without any problems, but notice that there is still no drive letter assigned.  I’m just going to let it use the next available letter, though I could specify a letter if I wanted to:

DISKPART> assign

DiskPart successfully assigned the drive letter or mount point.

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     D                       DVD-ROM         0 B  No Media
  Volume 1     C                NTFS   Partition     39 GB  Healthy    System
* Volume 2     E                NTFS   Partition     39 GB  Healthy

I can now navigate to drive E and do my thing.  This might have been faster using the Disk Management GUI, but I prefer the command line for its flexibility, scriptability, and performance in remote management situations.  Ok. Back to my studies. Hope this comes in handy for someone.
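One last tip: DISKPART can also run this whole sequence unattended via its /s switch.  Save the same commands to a text file (adjusting the disk number to match your system):

select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs quick
assign

… and then run it with diskpart /s newdisk.txt.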

Event ID 1000, faulting application maildsmx.dll, Active Directory Users and Computers, and a 2003 Terminal Server

I recently set up a Windows Server 2003 terminal server that admins in remote locations with spotty connectivity could use to perform certain bandwidth-heavy administrative tasks for some aging systems.  In particular, I created a Microsoft Management Console (MMC) featuring Active Directory Users and Computers and the Exchange 2003 management plug-in.  The Exchange 2003 plug-in is what drove me to a 2003 server: it will not work with the Remote Server Administration Tools (RSAT) released with Server 2008 and 2008 R2, and it will not run on a 64-bit machine.

All that aside, it was met with rave reviews and I moved on to bigger and more interesting things.  Today, however, I was contacted by one of the admins who was complaining that the MMC was crashing repeatedly, so I dug in to investigate. In the event logs I found the following error:

Event Type:    Error
Event Source:    Microsoft Exchange Server
Event Category:    None
Event ID:    1000
Date:        6/18/2012
Time:        9:10:49 AM
User:        N/A
Computer:    MyServer
Description:
Faulting application maildsmx.dll, version 6.5.6944.0, stamp 3edc4ebd, faulting module adprop.dll, version 5.2.3790.3959, stamp 45d70a1d, debug? 0, fault address 0x00043ca3.

Long story short, the problem was due to the fact that I was running under a very specific scenario. Namely, and I quote:

  • You have an Active Directory forest that is running a Windows Server 2008 Active Directory schema. For example, you have extended the Active Directory schema to the Windows Server 2008 schema.
  • You have a Windows Server 2003 member server that has Windows Support Tools installed.
  • From the member server, you connect the Active Directory Users and Computers Microsoft Management Console (MMC) snap-in to the domain.
  • In the Active Directory Users and Computers MMC snap-in, you open the Properties dialog box for a user object, and then you close the dialog box. Later, you open the Properties dialog box for another user object.

The whole thing is already written up in Microsoft KB 946459.  All you have to do to solve the problem is apply the hotfix.  Just make sure you download the correct version of the hotfix.  Also note that, at least in my situation, the hotfix did require a reboot.  Hope this helps someone out.

Delegating Permissions to Group Policy Objects using PowerShell

Today I was asked to delegate permissions to a very large set of Group Policy Objects.  My first thought was “ugh!” as I envisioned going through each and every one in the Group Policy Management Console (GPMC).  A moment later my outlook changed when I realized I could just use PowerShell and make the changes in short order.  Here’s how:

First, I need to load the Active Directory and Group Policy modules in PowerShell.  If you don’t have access to these modules then you need to install the Remote Server Administration Tools (RSAT) for Windows 7 with Service Pack 1.  The RSAT tools are built into Windows Server 2008 R2 and just need to be activated.  On Windows 7, once installed, you can activate the Active Directory module as a unique Windows feature.  The Group Policy module will be activated when you activate the Group Policy Management Tools feature, which includes the GPMC.  Once you’re past that, loading them is a breeze:

import-module activedirectory

import-module grouppolicy

Second, I need to get the list of Group Policy Objects in question.  I know I have a large number of GPOs, but I’m only interested in a subset.  Fortunately, I know that all the GPOs I’m interested in begin with “SweeneyOps”, so I’ll adjust my query to return only GPOs with that prefix.  We get the list of GPOs using the Get-GPO cmdlet and dump the resulting set into a variable called $GPOList.

$GPOList = get-gpo -all | where {$_.displayname -like "SweeneyOps*"}

Now that I have all my GPOs I can iterate through that list to grant permissions to a specific security group, but first I need the group.  For the sake of argument, we’ll call this group “GPO-Admins.”  The Group Policy cmdlet that we’re going to use requires either the domain-qualified name of the security principal (domain\account) or the sAMAccountName of the user, group, or computer to which we will be granting permissions.  In this case we are granting permissions to a group, and I find it easiest to use the sAMAccountName, so I’ll grab that and place it in a variable called $group.

$group = $(get-adgroup "GPO-Admins").sAMAccountName

Ok, group in tow, we can proceed to the delegation.  To do this, we’re going to use the Set-GPPermissions cmdlet. In this case I just happen to want to grant (nearly) full access to the group, so I’ll use GpoEditDeleteModifySecurity as the value for the PermissionLevel parameter.

$gpolist | foreach{set-gppermissions -guid $_.id -targetname $group -targettype Group -PermissionLevel GpoEditDeleteModifySecurity}

And we’re done!  This process will differ slightly in each environment, of course.  You may need to specify a different domain or server, or you might need to delegate permissions to multiple groups, and so on.  The fundamental principle remains the same, however: get your GPO(s), get your group(s), select the level of access, and then delegate access using the Set-GPPermissions cmdlet.
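For convenience, here’s everything above rolled into a single script.  There’s nothing new in it, just the same cmdlets in one place; the GPO prefix and group name would obviously change per environment:

import-module activedirectory
import-module grouppolicy

# grab the GPOs we care about and the delegate group's sAMAccountName
$GPOList = get-gpo -all | where {$_.displayname -like "SweeneyOps*"}
$group = $(get-adgroup "GPO-Admins").sAMAccountName

# grant near-full control over each GPO to the group
$GPOList | foreach {set-gppermissions -guid $_.id -targetname $group -targettype Group -PermissionLevel GpoEditDeleteModifySecurity}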

Domain Controllers and Snapshots

I was sharing some tips on Active Directory yesterday when the person I was speaking to mentioned that they would take a snapshot of their domain controller before applying the changes we were discussing.  The conversation came to a dead stop.   I could actually hear the robot from Lost in Space wailing “Danger Will Robinson! Danger!” 

My friend is running his DCs in VMware, as am I.  That wasn’t the problem.  I actually prefer to run DCs as virtual servers, but there is one golden rule you must always keep in mind if you’re going to do that: NEVER take a snapshot of a domain controller.

As you would expect, he was alarmed by my reaction, so I explained the problem. 

Update Sequence Numbers

You see, some directory services use timestamps to track changes that need to be replicated to other systems.  The newest timestamp wins when there is contention over which change should apply.  Active Directory uses a different approach.  Instead of timestamps, AD uses Update Sequence Numbers (USNs).

An exhaustive explanation of the process would be pretty interesting, but isn’t really germane to this conversation.  If you’re interested in reading up on it you can find a great explanation here.  For now, I’ll provide a high-level overview.  Basically, there are three key components to the replication process you should understand: the High-Watermark value, the Up-to-Dateness Vector, and the Database Identity.

High-Watermark Value

Each DC keeps track of changes using a local USN counter.  The USN is incremented whenever a change is made to an object, and the value is stored in the object’s usnChanged attribute.  When another DC requests an update from the source DC, the latest USN from the source is passed along to the destination.  This USN record is referred to as the High-Watermark value.  The next time the destination requests an update it sends that High-Watermark USN value back to the source DC, and the source DC only sends over information newer than the High-Watermark USN.

Up-to-Dateness Vector

The destination DC also keeps track of a value for each DC that it has ever replicated with.  This Up-to-Dateness Vector is also passed back to the source DC when replication is requested.  After reducing the scope of objects to replicate using the High-Watermark value, the source DC can further reduce the replication set by using the Up-to-Dateness vector to determine which attributes in that set should be replicated. 

Database Identity

Each DC has its own server identity, but each instance of the AD database also has a Database Identity, stored as an InvocationID.  The server identity never changes, but the InvocationID does change IF AD is properly restored from backup.  For now, suffice it to say that the destination DC keeps track of the source DC’s InvocationID.
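If you’re curious, you can inspect both of these values on a DC yourself.  The commands below are read-only, so they’re safe to run; a quick sketch:

# show the local DC's InvocationID (printed as "DSA invocationID")
repadmin /showrepl $env:computername | select-string "invocationID"

# the local USN counter is exposed on RootDSE
$rootDse = [ADSI]"LDAP://RootDSE"
$rootDse.highestCommittedUSN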

USN Rollback (a.k.a. Why Snapshots are evil)

Here’s where things go awry.  Let’s say DC1 makes changes to a user account.  Those changes are tracked using a USN.  Now let’s say that DC2 requests a replication from DC1.  It passes back the High-Watermark value and Up-to-Dateness Vector that it has on record for DC1.  DC1 uses that information to determine a replica set and passes that information on to DC2.  For the sake of argument we will say that DC2 now has a High-Watermark value of 10, an Up-to-Dateness value of 100, and an InvocationID value of X on record for DC1 (that’s an oversimplification, but it is good enough for this explanation).

At this point, we go off and take a snapshot of DC1.  Everything seems ok, so we make some more changes, and those changes are replicated.  Let’s assume that DC2 now has a High-Watermark value of 20, an Up-to-Dateness value of 200, and an InvocationID of X on record for DC1.

Now let’s say we revert to a previous snapshot.  This is where the InvocationID comes into play.  When we revert to an older snapshot, DC1’s High-Watermark and Up-to-Dateness Vector values go back to 10 and 100 respectively.  If we had restored AD correctly then those values would still have reverted, but the InvocationID would have changed, and DC2 would detect the change and replicate everything correctly.  By reverting to a snapshot, we circumvent that process.  The values are decremented, but the InvocationID stays the same.  We are now in a USN Rollback state, described in greater detail here.  DC2 detects the problem and DC1 is isolated from replicating data to the rest of the domain to preserve database integrity.

Practically speaking, there is really only one fix in this situation.  Demote DC1 and then promote it again.
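In outline, the recovery looks something like this.  These are the standard tools for the job, but read Microsoft’s USN rollback guidance (KB 875495) before running any of them, because a DC in this state can’t be demoted gracefully:

dcpromo /forceremoval    (run on DC1, which can no longer replicate)
ntdsutil                 (run "metadata cleanup" on a healthy DC to remove DC1's leftover references)
dcpromo                  (run on DC1 again to re-promote it)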

Conclusion

And that, my friends, is why we don’t snapshot our DCs.  But there is a light at the end of the tunnel: Server 2012 will be implementing a solution to this problem called the VM GenerationID that will trigger a reset of the InvocationID after reverting to a snapshot.  It looks like it will only be available out of the gate for Hyper-V, but they are supposedly working with other vendors on implementing the solution.  For now, however, don’t do it.  It’s a really, really, really bad idea.

Reading XML files in PowerShell

In most cases I use plain old text files to provide data to simple PowerShell scripts.  Sometimes, however, I have to use more complex structured data.  In these situations I switch to XML, which PowerShell is very good at processing. 

Let’s say I have a simple XML file called servers.xml:

<servers>
    <server>Server0001</server>
    <server>Server0002</server>
    <server>Server0003</server>
    <server>Server0004</server>
</servers>

Now let’s say I want to get the servers from that file and execute a command against each one.  First, I need to get the XML data from the file.  To do so I would execute something like this:

$xml = [xml](get-content servers.xml)

My $xml object now contains all the data I need to do the work.  To see the list of servers I would run the following command:

PS C:\Tools\Powershell> $xml.servers.server
Server0001
Server0002
Server0003
Server0004
PS C:\Tools\Powershell>

Now let’s say that my data is broken into two groups, as such:

<servers>
    <group1>
        <server>Server0001</server>
        <server>Server0002</server>
        <server>Server0003</server>
        <server>Server0004</server>
    </group1>
    <group2>
        <server>Server0005</server>
        <server>Server0006</server>
        <server>Server0007</server>
        <server>Server0008</server>
    </group2>
</servers>

No problem.  I just make sure I include the additional group level in my dot notation:

$xml.servers.group1.server

As you can see, each node simply becomes a child of the $xml object, making it very easy to navigate to the data you need.  For example:

PS C:\Tools\Powershell> $xml = [xml](get-content data.xml)
PS C:\Tools\Powershell> $xml

servers
-------
servers

PS C:\Tools\Powershell> $xml.servers

group1                                                      group2
------                                                      ------
group1                                                      group2

PS C:\Tools\Powershell> $xml.servers.group1

server
------
{Server0001, Server0002, Server0003, Server0004}

PS C:\Tools\Powershell> $xml.servers.group2

server
------
{Server0005, Server0006, Server0007, Server0008}

PS C:\Tools\Powershell> $xml.servers.group1.server
Server0001
Server0002
Server0003
Server0004
PS C:\Tools\Powershell> $xml.servers.group2.server
Server0005
Server0006
Server0007
Server0008
PS C:\Tools\Powershell>

With this data in hand I can now run through the list using the ForEach-Object cmdlet. 
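For example, a quick connectivity check against everything in group1 might look like this (Test-Connection is a stock cmdlet; data.xml is the two-group file from above):

$xml = [xml](get-content data.xml)
$xml.servers.group1.server | foreach-object {
    # returns $true or $false per server instead of the usual ping output
    test-connection -computername $_ -count 1 -quiet
}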

I’ll write another post on writing to XML files when I get a moment.