Azure and RFC3927 IP Addresses

RFC3927's title is Dynamic Configuration of IPv4 Link-Local Addresses, and it describes how a host can automatically configure itself with an IPv4 address to communicate with other hosts on the same local link when no statically configured IP address or DHCP server is available. The IPv4 address space set aside for this is 169.254.0.0/16. Just as RFC1918 defines addresses that can be used for private, routed networks, the addresses in RFC3927 are also private, but even more restricted than those in RFC1918. The addresses from RFC1918 (10/8, 172.16/12 and 192.168/16) will be dropped by any Internet-connected router; the only way to get traffic from a host with an RFC1918 address out on the Internet is through Network Address Translation. On your own network, however, RFC1918 addresses work just like public addresses and can be routed. Addresses from RFC3927, on the other hand, will not be routed anywhere; they are what is known as link-local addresses and only facilitate communication on the same link. No router will forward RFC3927 addresses.

You can test this yourself by leaving your computer set to receive a dynamic IP from a DHCP server, making sure that no DHCP server is available, but that you still have a network link. If your computer is RFC3927 compliant it will configure itself with an address in the 169.254.0.0/16 range. RFC3927 defines rules for how to choose an address and how to keep it once other hosts connect to the same link, but reciting the RFC is beyond the scope of this post. Now let's look at how RFC3927 pertains to Microsoft Azure.
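
On Windows 8/Windows Server 2012 and later, where the NetTCPIP PowerShell module is available, a quick way to check whether any adapter has fallen back to a link-local address is something like this:

# List IPv4 addresses in the RFC3927 range on the local machine
Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.IPAddress -like "169.254.*" } |
    Select-Object InterfaceAlias, IPAddress, PrefixOrigin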

In fact, it is very straightforward. RFC3927's 169.254.0.0/16 addresses are blocked from use in Azure on the following (a quick address classification sketch follows the list):

  • Azure Virtual Networks
    Only RFC1918 IPv4 addresses are valid on Azure vNets
  • Local network sites
    Both RFC1918 IPv4 addresses and any public IPv4 addresses you own can be used, as long as they do not conflict with address ranges already configured in Azure.
  • Gateway subnets
    Technically a part of the Azure virtual network address space, so the same rules apply as for vNets.
  • P2S subnets
    Only RFC1918 IPv4 addresses allowed
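
A quick way to classify an address against these ranges, purely as an illustration (the function name is made up for this post), could look like this in PowerShell:

# Classify an IPv4 address: RFC3927 link-local, one of the RFC1918 ranges, or something else
function Test-PrivateIPv4 {
    param([Parameter(Mandatory=$true)][System.Net.IPAddress]$Address)
    $bytes = $Address.GetAddressBytes()
    if ($bytes[0] -eq 169 -and $bytes[1] -eq 254) { return "RFC3927 link-local (blocked on Azure vNets)" }
    if ($bytes[0] -eq 10) { return "RFC1918 (10/8)" }
    if ($bytes[0] -eq 172 -and $bytes[1] -ge 16 -and $bytes[1] -le 31) { return "RFC1918 (172.16/12)" }
    if ($bytes[0] -eq 192 -and $bytes[1] -eq 168) { return "RFC1918 (192.168/16)" }
    return "Public or other"
}

Test-PrivateIPv4 "10.0.1.4"      # RFC1918 (10/8)
Test-PrivateIPv4 "169.254.0.1"   # RFC3927 link-local (blocked on Azure vNets)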

The only place in Azure that the RFC3927 addresses are in use is inside S2S VPN tunnels. When you download S2S configuration scripts from the Azure portal you will find references to 169.254.0.0/16, as in this example from the Cisco ISR template:

int tunnel 1
  ip address 169.254.0.1 255.255.255.0
  ip tcp adjust-mss 1350
  tunnel source <NameOfYourOutsideInterface>
  tunnel mode ipsec ipv4
  tunnel destination 137.135.242.240
  tunnel protection ipsec profile vti

And here on a Windows Server 2012 R2 RRAS server you can see it in the IP properties of the PPP Link adapter:

PPP adapter 23.101.xx.yy:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : 23.101.xx.yy
   Physical Address. . . . . . . . . :
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   Autoconfiguration IPv4 Address. . : 169.254.0.35(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . :
   DHCPv6 IAID . . . . . . . . . . . : 587208029

So in summary, the only place you will find these addresses in Azure is inside your S2S VPN tunnels. Now you know what those strange addresses are doing there…


Quick and dirty DirSync install automation

Here are the steps required to do an automated vanilla install of the latest DirSync tool:

Download the DirSync bits using BITS (the URL always points to the latest version):
Start-BitsTransfer -Source "http://go.microsoft.com/fwlink/?LinkID=278924" -Destination C:\Temp\dirsync.exe

Install the required .NET 3.5 bits:
Add-WindowsFeature -Name NET-Framework-Core

Install DirSync unattended:
.\DirSync.exe /quiet
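
Put together, the whole thing can be run as one short script; the New-Item line is my own addition to make sure C:\Temp exists:

# Create the download folder if it is missing
New-Item -ItemType Directory -Path C:\Temp -Force | Out-Null
# Download the latest DirSync bits
Start-BitsTransfer -Source "http://go.microsoft.com/fwlink/?LinkID=278924" -Destination C:\Temp\dirsync.exe
# Install the .NET 3.5 prerequisite
Add-WindowsFeature -Name NET-Framework-Core
# Run the DirSync installer unattended
C:\Temp\dirsync.exe /quiet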


Azure Files and IaaS Domain Controllers

Some of the very first VMs you deploy in a Microsoft Azure IaaS deployment are probably Windows Server Active Directory Domain Controllers. The Active Directory Domain Service is the foundation for pretty much every Windows server application out there, from Exchange to System Center. Running DCs in Azure IaaS works great; just remember that bit about turning off the host caching on the disk where your DIT is.

At TechEd NA 2014 Microsoft announced a new feature called Azure Files, which is currently in preview. In summary, it gives you access to Azure storage over the SMB 2.1 protocol. Until Azure Files, if you wanted to talk to the blob, table or queue storage endpoints in Azure you had to do it via the REST API. Fine for developers, but not easily accessible for others. What Azure Files does is introduce another endpoint for Azure storage, an endpoint that speaks the SMB protocol. So now if we want to talk to Azure Storage we can access it from any compute instance over SMB and UNC paths. Raw storage over SMB, without the need for a file server VM in between. Think of it as Fileserver-as-a-Service. Read the full announcement from the Azure Storage Group here. I wanted to use Azure Files to store a bunch of data in my lab environment in Azure, but I ran into a problem…

You access Azure Files over a UNC path where the servername is the name of the Azure Files endpoint in your storage account, the username is the name of the storage account, and the password is the access key for the same storage account. Like this:

net use \\mystorageaccount.file.core.windows.net\share /u:mystorageaccount <storage account access key>
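
If you want the mapping to survive reboots, one common approach (shown with the same placeholder storage account name as above) is to store the credentials with cmdkey first and then map a drive letter:

cmdkey /add:mystorageaccount.file.core.windows.net /user:mystorageaccount /pass:<storage account access key>
net use Z: \\mystorageaccount.file.core.windows.net\share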

When I tried to do this on a workgroup VM in Azure it worked great, same with member servers in my lab domain, but on Domain Controllers in the domain I got this error:

System error 53 has occurred.

The network path was not found.

All VMs I have tested are in the same region and affinity group as the storage account, which is also in the same affinity group. I have also tested with a storage account outside an affinity group, but in the same region; same result.

My preliminary conclusion is that “something” in how DCs handle browsing for the network path is different from member servers or stand-alone VMs. I have no root cause yet; I just thought I should write this up so that anyone else experiencing the same would have some information. Sound off in the comments if you have experienced the same or have a solution.


Microsoft Azure RemoteApp Free Webinar

On August 29th I will be hosting a live, free webinar on the new Microsoft RemoteApp service that is currently in preview. The webinar is open to anyone. Read my introductory post to Azure RemoteApp here, and register for the webinar here. Hope you can join me!


Issue with using Windows Server 2012 RRAS as a local VPN device with Microsoft Azure when WAN interface has DHCP assigned IP address

In my personal lab I use a Windows Server 2012 R2 VM running on a local Hyper-V server as my VPN device to connect to a Microsoft Azure Virtual Network gateway. This setup has been working fine for quite some time, and is indeed a supported configuration. Recently though I could not make it connect. Come hear the story…

My ISP (Telenor) here in Norway is gracious enough to provide me with 2 dynamically assigned official IPv4 addresses. I use one for my home network and the other for the Azure gateway. I hadn't used the lab for some time so the S2S VPN had been down. When I tried to reconnect it wouldn't work. I knew Microsoft had made some changes recently, these were announced during Build 2014, so I figured I would reset the RRAS configuration and run the VPN setup from scratch. So I disabled the RRAS service, downloaded a fresh VpnDeviceScript.cfg file from the portal and redid everything. This didn't take more than a couple of minutes, but still I could not connect. I noticed that when the RRAS service started I lost all connectivity on my WAN (Internet) interface. I could ping an address on the Internet, but the ping immediately stopped when RRAS started. I fired up Wireshark and did a trace of the WAN interface during RRAS startup and discovered that when the service starts, several DHCP Discover messages are sent out on the WAN interface:

[Screenshot: Wireshark trace of the WAN interface showing DHCP Discover messages sent during RRAS startup]
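
If you want to reproduce such a trace yourself, a simple Wireshark capture filter that isolates DHCP traffic on the interface is:

udp port 67 or udp port 68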

Since my ISP only allows 2 IPs this made the original address I had received on the WAN interface invalid. But why did it happen?

Also during RRAS startup I got this warning in the System log:

[Screenshot: warning logged in the System event log during RRAS startup]

When RRAS is running as a remote access service, which it does when providing site-to-site VPN tunneling, it also supports incoming VPN requests from clients. These clients need an IP address to be able to communicate on the local network. They get this address either from a static IP pool on the RRAS server itself, or from a DHCP server on the local network, from which RRAS reserves IP addresses in blocks of 10. In my case I had DHCP selected and thus no static IP pool configured. If the RRAS server is multi-homed, which it usually is, you can select which interface RRAS looks for a local DHCP server on. This setting is configured through the RRAS server properties:

[Screenshot: RRAS server properties showing the adapter selection setting, default Allow RAS to select adapter]

The default setting is Allow RAS to select adapter. This was the cause of my problem. The RRAS service selected my WAN interface and sent out a bunch of DHCP Discover messages to allocate IP addresses for incoming clients. This invalidated my existing DHCP lease and stopped all communication on the Internet. Once I selected the internal (LAN) interface instead, I could connect fine.

I guess this problem can occur in any S2S or RAS scenario where the WAN IP address is dynamically assigned.


Debugging unattended domain join on Windows Azure VMs

Introduction

One great thing about Windows Azure PowerShell is the ability to join a VM to an Active Directory domain during provisioning; this ability is not available in the portal. Joining a domain during Windows setup is nothing new, and it is accomplished through the normal Windows unattended setup mechanism, namely unattend.xml. In unattend.xml you can specify the following information about the domain you want to join and the account that has permissions to perform the join operation in the directory (excerpt has been edited for readability):

<component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64">
  <Identification>
    <Credentials>
      <Domain><NetBIOS domain name></Domain>
      <Username><NT type (samaccountname) username></Username>
      <Password><password in encrypted form></Password>
    </Credentials>
    <JoinDomain><FQDN of domain to join></JoinDomain>
    <MachineObjectOU><DN of OU to place machine account in></MachineObjectOU>
  </Identification>
</component>

You usually don’t create the unattend.xml file manually, but rather use a tool like Windows System Image Manager from the Windows Assessment and Deployment Kit (ADK).

I recently had an issue where none of my new VMs would join the domain during provisioning, leaving them all in a workgroup. That led me to compile the following during my debugging.

Add-AzureProvisioningConfig

The cmdlet that enables a Windows Azure VM to join a domain during provisioning is Add-AzureProvisioningConfig. When used with the -WindowsDomain parameter it lets you specify these additional parameters:

Parameter        Info
JoinDomain       FQDN of the domain to join
Domain           NetBIOS name of the account with permission to join computers to the domain specified in the JoinDomain parameter. This is usually the NetBIOS name of the domain from JoinDomain, but it could also be from another domain in the forest or a trusted domain.
DomainUserName   NT-style username (samaccountname) of the account with permissions to join the domain specified in JoinDomain
DomainPassword   Password of the account with permissions to join the domain specified in JoinDomain
MachineObjectOU  DN of the OU where the computer account of the VM should be placed

This info is loaded into the unattend.xml file used to set up the new VM. After that the unattended setup process continues as with any regular Windows install.
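
Putting it together, a provisioning call with domain join looks roughly like this. It is only a sketch with placeholder names (image, passwords, cloud service and OU are all assumptions), not a complete deployment script:

# Build the VM configuration and add the domain join settings (placeholder values throughout)
$vm = New-AzureVMConfig -Name "SERVER1" -InstanceSize Small -ImageName $imageName |
    Add-AzureProvisioningConfig -WindowsDomain `
        -AdminUsername "localadmin" -Password $localAdminPassword `
        -JoinDomain "corp.mydomain.com" `
        -Domain "corp" `
        -DomainUserName "Administrator" `
        -DomainPassword $domainJoinPassword `
        -MachineObjectOU "OU=AzureVMs,DC=corp,DC=mydomain,DC=com"  # must be an OU, not a container like CN=Computers

# Create the VM in an existing cloud service
New-AzureVM -ServiceName "myservice" -VMs $vm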

Logs

If domain join does not work there are several logs that should be examined.

Operation logs in the portal

Under Management Services in the Windows Azure portal you find the Operation Logs.

[Screenshot: the Operation Logs view under Management Services in the Windows Azure portal]

The provisioning of a new VM shows up as an AddRole operation; first one entry with Status Started and then another with Status Succeeded. By examining the details of these you can see what is passed to the back end. This is particularly useful if you are using scripts which pass variables to the domain join parameters; here you see exactly what is passed. Note that the password is omitted from the logs; that is not an error.

Windows Setup logs

After the portal has done its job the rest of the provisioning is left to the normal Windows unattended setup process and is handled entirely within the VM. That means that all the normal troubleshooting techniques for unattended Windows setup apply. The working folder of Windows Setup is %systemdrive%\Windows\Panther. Here you will find the unattend.xml file used during setup (if it has not been deleted by setup itself; Azure does not do this), and the logs for the entire setup process:

File          Info
unattend.xml  The answer file for unattended Windows setup
setuperr.log  Any major errors encountered during setup
setupact.log  All setup activity
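
To pull the interesting lines out of these logs quickly, something like this works (the pattern list is just a starting point):

# Scan the Windows Setup logs for domain join activity, warnings and errors
Select-String -Path "$env:windir\Panther\setupact.log", "$env:windir\Panther\setuperr.log" -Pattern "DJOIN", "Warning", "Error"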

The first thing you should do is check unattend.xml and verify that it contains the correct info.

Domain join during unattended setup is done with the djoin.exe executable, so to debug domain join we need to search for that in setupact.log. Searching for the words Warning or Error can also reveal useful information. This is an excerpt from setupact.log with the error I encountered:

2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Begin
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Loading input parameters…
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: AccountData = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: UnsecureJoin = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: MachinePassword = [secret not logged]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: JoinDomain = [corp.mydomain.com]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: JoinWorkgroup = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Domain = [corp]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Username = [Administrator]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Password = [secret not logged]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: MachineObjectOU = [CN=Computers,DC=corp,DC=mydomain,DC=com]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: DebugJoin = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: DebugJoinOnlyOnThisError = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: TimeoutPeriodInMinutes = [NULL]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Checking that auto start services have started.
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Calling DsGetDcName for corp.mydomain.com…
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: DsGetDcName returned [dc2.corp.mydomain.com]
2014-03-11 14:09:45, Info     [DJOIN.EXE] Unattended Join: Constructed domain parameter [corp.mydomain.com DC2.corp.mydomain.com]
2014-03-11 14:09:45, Warning     [DJOIN.EXE] Unattended Join: NetJoinDomain attempt failed: 0x2, will retry in 10 seconds…
2014-03-11 22:09:56, Warning     [DJOIN.EXE] Unattended Join: NetJoinDomain attempt failed: 0x2, will retry in 10 seconds…

As you can see from this case, the NetJoinDomain function fails and is eventually abandoned. You can also find these errors in the System log:

ID: 4097
Source: NetJoin
Info: The machine SERVER1 attempted to join the domain corp.mydomain.com DC2.corp.mydomain.com but failed. The error code was 2.

To find out more about this we need to look at another log: C:\Windows\debug\NetSetup.log. This file contains information for all domain join operations performed on a machine, not just the ones during setup. Here is a section from mine:

I have highlighted my particular error. As you can see, I specified the DN of the Computers container as the location of the new computer object. Since Computers is in fact a container and not an OU, the NetpCreateComputerObjectInDs function fails. After I specified a bona fide OU in my Add-AzureProvisioningConfig cmdlet the VM was successfully joined to my domain. If you actually want the computer object to be placed in the Computers container, just omit the MachineObjectOU parameter from Add-AzureProvisioningConfig. Since the Computers container is the default location for new computer objects, unless redirected or overridden by the admin, your VM's account will end up there.


Will it run? How to run your favorite legacy OS in a Windows Azure VM

Introduction

Old OSs are very popular, at least judging by the number of, for want of a better word, “legacy” servers I find in customer data centers. The old workhorses Windows Server 2003/2003 R2, Windows 2000 and even good old Windows NT pop up in disturbing numbers. With them, the question “Will Windows X run in a Windows Azure VM?” often follows. To be able to answer that question we first need to talk about “run” vs. “supported”. A lot of OSs can be made to run in a Windows Azure VM, but that does not mean that Microsoft will support that OS. Unfortunately a lot of people are only interested in the “run” part of the equation, ignoring any problems of supportability down the road. Support is very straightforward: the oldest supported Windows OS in a Windows Azure VM is Windows Server 2008 R2 x64, as stated in KB2721672: Microsoft server software support for Windows Azure Virtual Machines. So if you want support, the buck stops there. But what can we make run if we throw supportability to the wind…

It’s still Hyper-V

All the physical servers in a Windows Azure data center run a modified version of Windows Server with the Hyper-V role installed, so basically all the capabilities of Hyper-V should be available in Windows Azure unless they have been specifically blocked. The About Virtual Machines and Guest Operating Systems page lists all the guest operating systems that are supported on Hyper-V, so let's assume that we can run all of those. I'm not going to test all the OSs on that list, but rather focus on some specific golden oldies (all 32-bit editions):

  • Windows 2000
  • Windows XP
  • Windows Server 2003

There is also a lot of rumor on the Internet concerning support for 32-bit (x86) VMs in Windows Azure. Although all the images in the gallery are 64-bit, and the earliest supported server OS (Windows Server 2008 R2) is only available in 64-bit (x64), the lack of 32-bit support is never actually stated anywhere officially. (At least not that I have found.)

Another area where there is a lot of talk is client operating systems in Windows Azure VMs. In this case Microsoft is very clear: client OSs are not supported, and it is a violation of the license to install and run one in a Windows Azure VM. For the sake of scientific discovery I will ignore that issue for this post.

So let's see what's what…

Basic requirements

I’m going to work from these premises during my testing:

  • The legacy OS needs to be on the supported list for Hyper-V
  • Hyper-V Integration Components need to be available
    Either on the vmguest.iso image, already in the OS or available for download.
  • Remote Desktop for Windows VMs and SSH for Linux VMs must be available
  • Only VHD images are supported in Windows Azure, not VHDX
  • None of the selected legacy OSs will support being generalized in a way that Windows Azure can use, so we can only upload OS disks, not images. (Windows Azure uses the setup technology introduced in Windows Vista, codenamed Panther, which the legacy OSs don't have.)

Basic steps

Here is what we need to do (at least):

  1. Use a Hyper-V host running Windows Server 2012 R2
    This is probably not a requirement, but the latest and greatest is always nice.
  2. Install a VM with the legacy OS; be sure to select Generation 1 and the VHD disk format.
  3. Once OS install completes, install the Hyper-V integration components
  4. Update the OS through e.g. Microsoft Update
    Having the latest updates will maximize the chance of success
  5. If you want remote PowerShell or WinRM make sure to install those optional updates.
  6. Enable Remote Desktop and make sure the Remote Desktop exception is enabled in the Windows Firewall (if the firewall is turned on).
  7. If you are using WinRM technology configure that, preferably with HTTPS instead of HTTP, and enable the appropriate firewall exceptions.
  8. If you are actually trying to run a production server with a legacy OS in Windows Azure (something you should not do), remove any special drivers or low-level software that might break the VM. You can always install this later.
  9. Shut down the VM on your Hyper-V Server
  10. Upload the VHD to a Windows Azure storage account (Add-AzureVhd); scripted in the sketch after this list
  11. Register the new blob as an OS image (Add-AzureDisk)
  12. Start the VM in Azure and log on (hopefully).
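
Steps 10 through 12 can be scripted with the Azure PowerShell module. The following is a minimal sketch with placeholder storage account, cloud service and file names:

# Upload the VHD to a storage account (Azure only accepts VHD, not VHDX)
Add-AzureVhd -LocalFilePath "C:\VMs\win2003.vhd" `
    -Destination "https://mystorageaccount.blob.core.windows.net/vhds/win2003.vhd"

# Register the uploaded blob as a bootable OS disk
Add-AzureDisk -DiskName "win2003-osdisk" -OS Windows `
    -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/win2003.vhd"

# Create a VM from the OS disk and start it
New-AzureVMConfig -Name "legacyvm" -InstanceSize Small -DiskName "win2003-osdisk" |
    New-AzureVM -ServiceName "mylegacylab" -Location "West Europe"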

The results

Using the above method I was able to obtain these results:

Windows 2000: runs
  • Professional, Server and Advanced Server all share the same code base and all work.
  • Windows Server 2012 removed Windows 2000 support from the Integration Components, so you have to use the Integration Components from Windows Server 2008 R2.
  • Performance was not great on my VM…
Windows XP x86: runs
Windows Server 2003 x86: runs. Windows Server 2003 R2 is pretty much the same code base, so it should run fine too. And in fact it does.

I assume that the x64 editions of Windows Server 2003/2003 R2 and Windows XP will also work.

Feel free to try some golden oldies yourself and use the comments for your results. Good luck.


Microsoft Product Use Rights (PUR) document update; great news for Windows Azure and hosters

What is Product Use Rights (PUR)?

from microsoft.com:

“When you purchase a software license through a Microsoft Volume Licensing program, the terms and conditions for how you can use the software are defined in the Volume Licensing Product Use Rights (PUR) document, Product List document, and program agreement. The PUR is updated quarterly.”

The change

Effective January 1 2014 Microsoft made a pretty significant change to the Software Assurance benefit. This is from the Windows Server 2012 R2 Remote Desktop Service Licensing Data Sheet:

RDS User CALs Extended Rights through Software Assurance
Today, RDS CALs permit remote access to the Windows Server GUI (Graphical User Interface) running on a customer’s on-premise server and RDS SALs (Subscriber Access License) if running on a shared-server environment. Effective January 1 2014, RDS User CALs will have Extended Rights through Software Assurance. In addition to the on-premise access, RDS User CAL customers will also be able to access the Windows Server GUI running on Windows Azure or on a third party’s shared server, without acquiring a separate RDS SAL.

To leverage this benefit customers should meet following requirements:

  • Maintain Software Assurance coverage on the RDS User CALs
  • Use dedicated VOSE (Virtual Operating System Environment) in Windows Azure or third party’s
    shared servers
  • Access Windows Server session-based desktops and/or applications running on shared server
    environments
  • Limit access by internal users only i.e. by company employees, vendors and contractors and not by
    external users such as customers
  • Assign each on-premise RDS User CAL to the same named user on Windows Azure or a third party’s
    shared servers
    This RDS User CAL Software Assurance benefit allows each User to access RDS functionality only on one shared server environment (i.e. Windows Azure or a third party server) in addition to access the respective on premise servers. The customer must acquire extra RDS SALs (Subscriber Access License) if the same User needs to access RDS functionality on additional shared server environments.

This is indeed great news for all customers wanting to access a remote Windows Server Desktop in Windows Azure or at another third party hoster.

More info:


Quickly stop and start your Windows Azure lab

Introduction

I needed a way to easily stop and start my different lab setups in Windows Azure. I don't want to keep running, and paying for, a set of VMs I use maybe once a month. So here is a PowerShell script to stop and start a set of Azure VMs. One important dimension here is the order in which the VMs start (and possibly shut down, depending on your setup). Since all IP addresses are dynamically allocated in Windows Azure I had to make sure that my VMs started in a specific order and that the script executed synchronously; that way, whatever IP they had when they were provisioned would most likely be assigned to them again when they started. Therefore the script includes code to start and stop VMs in the order you specify them in the input file. It also tries to wait until the current VM is fully started or stopped before proceeding with the next one.

Pre-requisites, input file and usage

The input file is really simple. It is a text file with a Cloud Service name and a VM name on each line, separated by a semi-colon, no header:

<cs name>;<vm name>

The VMs in the file will be started in the order they are listed and stopped in the reverse order!

By default, the script looks for a txt file in its execution directory called Control-AzureVMs.txt. If it is not found PowerShell throws an error. You can override this behavior by specifying your own file with VMs to either start or stop.

You must already have your machine configured to use Windows Azure PowerShell and specify your subscription in the script:

Select-AzureSubscription “<your subscription name here>”

Whether you want to start or stop a set of VMs is controlled by a script parameter. You use either Stop or Start. Optionally you can add your own txt file with VMs after the Start/Stop parameter.

Usage with the default input file:

Control-AzureVMs.ps1 Stop

Control-AzureVMs.ps1 Start

Usage with the custom input file:

Control-AzureVMs.ps1 Stop MySetOfVms.txt

The code

Here is the code. I can think of all sorts of improvements, but I needed this quickly so that will have to wait. Error checking is pretty much non-existent at this point so use at your own risk. I accept no responsibility whatsoever.
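
A minimal sketch of a script with the behavior described above, built on the classic Azure Service Management cmdlets (Start-AzureVM, Stop-AzureVM, Get-AzureVM), could look like this; status strings and waiting logic are simplified, so treat it as an illustration rather than a finished tool:

param(
    [Parameter(Mandatory=$true, Position=0)][ValidateSet("Start","Stop")][string]$Action,
    [Parameter(Position=1)][string]$InputFile = ".\Control-AzureVMs.txt"
)

Select-AzureSubscription "<your subscription name here>"

# Each line in the input file: <cloud service name>;<vm name>
$vms = @(Get-Content $InputFile | ForEach-Object {
    $parts = $_.Split(";")
    [pscustomobject]@{ ServiceName = $parts[0].Trim(); Name = $parts[1].Trim() }
})

# Start in the listed order, stop in the reverse order
if ($Action -eq "Stop") { [array]::Reverse($vms) }

foreach ($vm in $vms) {
    if ($Action -eq "Start") {
        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
        # Wait until the VM reports ReadyRole before moving on to the next one
        do {
            Start-Sleep -Seconds 15
            $status = (Get-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name).InstanceStatus
        } until ($status -eq "ReadyRole")
    }
    else {
        # -Force suppresses the prompt when deallocating the last VM in a cloud service
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
        # Wait until the VM is stopped or deallocated before moving on
        do {
            Start-Sleep -Seconds 15
            $status = (Get-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name).InstanceStatus
        } until ($status -match "^Stopped")
    }
}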

UPDATE: With the release of the Windows Azure PowerShell module v0.7.3 you can now create DHCP reservations for your VMs that will persist when the machine is deallocated. Check out Get-AzureStaticVNetIP, Set-AzureStaticVNetIP, Remove-AzureStaticVNetIP and Test-AzureStaticVNetIP.
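For example, with placeholder names, reserving an address for an existing VM looks like this:

# Check that the address is available in the vNet, then reserve it for the VM
Test-AzureStaticVNetIP -VNetName "MyVNet" -IPAddress 10.0.1.10
Get-AzureVM -ServiceName "mycloudservice" -Name "DC01" |
    Set-AzureStaticVNetIP -IPAddress 10.0.1.10 |
    Update-AzureVM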


NIC 2014 Slide deck

Here is my slide deck from the Nordic Infrastructure Conference (NIC) 2014. My talk was called Modern authentication for the Cloud Era and covered claims based authentication and some common scenarios, OAuth and OpenID Connect. Thanks to everyone who attended my session. Hope to see you there next year!

http://www.slideshare.net/MorganSimonsen/nic-2014-modern-authentication-for-the-cloud-era
