Set Up a New Disk on 2016 Server Core

When adding a disk to a Server Core machine, we don't have the GUI, so we'll need to use PowerShell to finish the disk setup.

To start, use Get-Disk to list your disks and note the disk number of the new disk you are setting up.  For this example, I'll be working with disk 2.

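A quick way to list the disks with just the details you need looks something like this (the property list here is only a suggestion):

Get-Disk | Format-Table Number, FriendlyName, OperationalStatus, IsReadOnly, Size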

Then you need to make sure the disk is online and set to read/write (by default it will be offline and read-only).

Get-Disk -Number 2 | Set-Disk -IsOffline:$false
Get-Disk -Number 2 | Set-Disk -IsReadOnly:$false

Then you need to initialize the disk, create a partition, and format it.  We can do all of this in one command.  Be careful that you’ve picked the right disk!

Get-Disk -Number 2 | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false -Force

Note that it automatically picks the first free drive letter.  In this case, it picked drive F.


If you want to use a different drive letter, you need to mix some PowerShell and WMI to make the change.  In my example I’m going to change drive F to be drive S.

$drive = Get-WmiObject -Class Win32_Volume -Filter "DriveLetter = 'F:'"
Set-WmiInstance -InputObject $drive -Arguments @{DriveLetter = 'S:'}

Once you’re all done, you can use Get-Volume to make sure everything is set the way you want.
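
For example, to check the drive we just relettered (assuming the S: drive from the step above):

Get-Volume -DriveLetter S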


Office 365 – Outlook Autodiscover Fails

Sometimes Autodiscover for Office 365 just doesn't work.  We happened to be in that boat: we checked the domain setup in Office 365 (everything was configured correctly), ran the Microsoft Remote Connectivity Analyzer (it also said everything was fine), and changed group policies to disable some of the Autodiscover lookup methods (which just made it fail faster).

So we were down to using the Office 365 Support and Recovery Assistant to set up every new profile.  This is slow and can't set up profiles that connect to more than one tenant (since it forces you to create a new profile each time you run it).

Fortunately, this guide to configuring Autodiscover (https://www.howto-outlook.com/howto/autodiscoverconfiguration.htm) mentions using a local autodiscover.xml file.  I was able to set one up with the redirection information so Outlook can go straight to it, which fixed all of our setup problems.

The xml file turned out to be the same for both of our tenants, so hopefully it's universal and will work for you too.

First, create your xml file (the file name doesn’t matter as long as you know it later):

<?xml version="1.0" encoding="utf-8" ?>
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006">
  <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a">
    <Account>
      <AccountType>email</AccountType>
      <Action>redirectUrl</Action>
      <RedirectUrl>https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml</RedirectUrl>
    </Account>
  </Response>
</Autodiscover>

Now, you need to add a registry entry to make Outlook use this file.  Create a new REG_SZ value under HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\16.0\Outlook\AutoDiscover; the value name is the mail domain you're using and the value data is the path to the xml file.

You can bundle this part up with PowerShell:

$localpath = "C:\autodiscover"
$autodfile = "autodiscover.xml"
$regpath = 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Outlook\AutoDiscover'
$key = 'mail.domain'
$value = ($localpath+'\'+$autodfile)
#check if the key is already there
$out = $null
$out = Get-ItemProperty -Path $regpath -Name $key -ErrorAction SilentlyContinue
#add key if not present
if (!$out) { New-ItemProperty -Path $regpath -Name $key -Value $value -PropertyType String }

 

VM Switch Team on Windows Server 2016 Nano

So you’ve set up your Nano server to work as a Hyper-V host, and now you’re ready to configure your teamed VM switch.  Except you’re on Nano and have no console to do it, and only two NICs.  What to do?

Well, you can get there, it just takes a little bouncing around.

Configure one of your NICs (make sure to note the MAC address) with an IP on your network so that you can make a connection.  Leave the other NIC unconfigured.
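
However you get at the box for this first step (the Nano Server Recovery Console can set a static IP from its menus, or your deployment tooling can do it for you), the equivalent PowerShell looks roughly like this; the interface alias, gateway, and DNS server below are placeholders for your environment:

New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 10.0.0.2 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 10.0.0.10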

Now you can connect with PowerShell.  You first have to add the IP of the remote Nano server to the WinRM TrustedHosts list:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value 10.0.0.2
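
Note that Set-Item replaces whatever is already in the TrustedHosts list; if you want to keep existing entries (including this one when you add the second IP later), tack on the -Concatenate switch.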

Now you can create your remote session:

Enter-PSSession -ComputerName 10.0.0.2 -Credential HostName\Administrator

Now get your network adapters:

Get-NetAdapter

If you want to rename them, do that now:

Get-NetAdapter -Name 'Ethernet' | Rename-NetAdapter -NewName Team1
Get-NetAdapter -Name 'Ethernet 1' | Rename-NetAdapter -NewName Team2

Check which NIC you're currently connected through.  I'll assume that's Team1, so we'll create the VM switch with just Team2:

New-VMSwitch -Name TeamedvSwitch -NetAdapterName "Team2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

Now check that the team was created correctly:

Get-VMSwitchTeam

Now you can add a management NIC and configure it (replace values as necessary for your environment):

Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "TeamedvSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 2
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.0.2.2 -PrefixLength 24 -DefaultGateway 10.0.2.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 10.0.2.10,10.0.2.11

Now you need to exit your remote session:

Exit-PSSession

Then add the new management IP to the WinRM TrustedHosts list so you can reach it:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value 10.0.2.2

Now you can connect to the server over the new connection:

Enter-PSSession -ComputerName 10.0.2.2 -Credential HostName\Administrator

Now you just need to add the Team1 NIC into the team:

Set-VMSwitchTeam -Name TeamedvSwitch -NetAdapterName "Team1","Team2"

Now your team is set up and you're ready to start configuring the rest of your server.  Don't forget to close out your session when you're done.

Exit-PSSession

Update Invoke-WebRequest HTTPS Protocol

If you’re trying to talk to a web server via PowerShell with the Invoke-WebRequest command, you may get an error: Invoke-WebRequest : The underlying connection was closed: An unexpected error occurred on a send.

While I'm sure there are other situations that can cause this error message, this is the error you'll get if the server you're talking to has disabled SSLv3 and TLS 1.0.

To make a connection, you’ll need to tell PowerShell to use something newer.  The following command will tell it to use TLS1.2.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
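
This setting only lasts for the current PowerShell session, so set it right before making the request.  A quick usage sketch (the URL is just a placeholder):

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri 'https://www.example.com/' -UseBasicParsing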

 

Add MS SQL Always On Database via PowerShell

You can use the GUI to set up SQL Always On availability groups, but if you want to include it in any database creation scripts, you can also do it in PowerShell.

First you’ll need to import the PowerShell module for SQL.

Import-Module SQLPS -DisableNameChecking

Then we'll set some variables to get ready.  First, set the database name and the network share used to store and distribute the backups for the initial sync.

$dbname = 'databasename'
$networkshare = '\\servername\sharename\'

Then we use those to build the file names for the backups.

$dbfilebackup = $networkshare + $DBName + '.bak'
$dblogbackup = $networkshare + $DBName + '.trn'

Now you need to tell it about the SQL AO group name, as well as the primary and secondary SQL server names.

$sqlaogroup = 'sqlao01'
$sqlprimary = 'server01'
$sqlsecondary = 'server02'

Then we use that data to build the SQLSERVER: provider paths that point at the availability group on each server.

$sqlstringprimary = 'sqlserver:\sql\' + $sqlprimary + '\default\availabilitygroups\' + $sqlaogroup
$sqlstringsecondary = 'sqlserver:\sql\' + $sqlsecondary + '\default\availabilitygroups\' + $sqlaogroup

Now we’re ready to get to work.  First we’ll take the initial backup of the database.

Backup-SqlDatabase -Database $DBName -BackupFile $dbfilebackup -ServerInstance $sqlprimary
Backup-SqlDatabase -Database $DBName -BackupFile $dblogbackup -ServerInstance $sqlprimary -BackupAction Log

Once that’s done, we’ll restore the database to the secondary server.

Restore-SqlDatabase -Database $DBName -BackupFile $dbfilebackup -ServerInstance $sqlsecondary -NoRecovery
Restore-SqlDatabase -Database $DBName -BackupFile $dblogbackup -ServerInstance $sqlsecondary -RestoreAction Log -NoRecovery

After that completes, it's time to add the database to the Always On availability group, first on the primary and then on the secondary server.

Add-SqlAvailabilityDatabase -Path $sqlstringprimary -Database $DBName
Add-SqlAvailabilityDatabase -Path $sqlstringsecondary -Database $DBName

Now you're done and the database should be syncing.  This process also works for a tertiary server; just repeat the restore and add commands against it (see the sketch after the full script below).  For reference, here is the whole thing in one block:

Import-Module SQLPS -DisableNameChecking
$dbname = 'databasename'
$networkshare = '\\servername\sharename\'
$dbfilebackup = $networkshare + $DBName + '.bak'
$dblogbackup = $networkshare + $DBName + '.trn'
$sqlaogroup = 'sqlao01'
$sqlprimary = 'server01'
$sqlsecondary = 'server02'
$sqlstringprimary = 'sqlserver:\sql\' + $sqlprimary + '\default\availabilitygroups\' + $sqlaogroup
$sqlstringsecondary = 'sqlserver:\sql\' + $sqlsecondary + '\default\availabilitygroups\' + $sqlaogroup
Backup-SqlDatabase -Database $DBName -BackupFile $dbfilebackup -ServerInstance $sqlprimary
Backup-SqlDatabase -Database $DBName -BackupFile $dblogbackup -ServerInstance $sqlprimary -BackupAction Log
Restore-SqlDatabase -Database $DBName -BackupFile $dbfilebackup -ServerInstance $sqlsecondary -NoRecovery
Restore-SqlDatabase -Database $DBName -BackupFile $dblogbackup -ServerInstance $sqlsecondary -RestoreAction Log -NoRecovery
Add-SqlAvailabilityDatabase -Path $sqlstringprimary -Database $DBName
Add-SqlAvailabilityDatabase -Path $sqlstringsecondary -Database $DBName
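
If you do add a third replica, the extra steps are just the secondary's restore and add commands repeated against it.  A rough sketch, using a hypothetical server03:

$sqltertiary = 'server03'
$sqlstringtertiary = 'sqlserver:\sql\' + $sqltertiary + '\default\availabilitygroups\' + $sqlaogroup
Restore-SqlDatabase -Database $DBName -BackupFile $dbfilebackup -ServerInstance $sqltertiary -NoRecovery
Restore-SqlDatabase -Database $DBName -BackupFile $dblogbackup -ServerInstance $sqltertiary -RestoreAction Log -NoRecovery
Add-SqlAvailabilityDatabase -Path $sqlstringtertiary -Database $DBName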

Moving a SQL Log File

Sometimes you need to move a log file for a SQL database.  When I originally learned how to do this, the process was to detach the database, move the files, and then attach it again with the updated locations.  I needed to do this last weekend and found that there is a much better (and a bit safer) way.

You do this by using the ALTER DATABASE command with MODIFY FILE to tell it where the log file will live.  The change won't take effect until the next time the database is brought online.

ALTER DATABASE dbname MODIFY FILE ( NAME = dbname_log, FILENAME = 'S:\newlocations\dbname_log.ldf');

So now the database knows where to look for the log file the next time it starts.  Next, you need to take the database offline and move the file.  You could shut down SQL entirely, but using ALTER DATABASE again means only this one database has any downtime.

ALTER DATABASE dbname SET OFFLINE;

When this completes your database will be offline, but still registered on the SQL server, which is why this is safer than the detach/attach method.  If you want to confirm your settings you can run the following.

SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'dbname');

Now move the log file to the new location you specified.
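
The move itself can be done right from PowerShell on the SQL server; a minimal sketch with a hypothetical source path (the destination matches the path from the MODIFY FILE command above):

Move-Item -Path 'D:\oldlocation\dbname_log.ldf' -Destination 'S:\newlocations\dbname_log.ldf'

Once the file is in place, bring the database back online: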

ALTER DATABASE dbname SET ONLINE;


Chrome PAC Error Detection

Chrome is stricter than IE when it comes to PAC files, so you may have (and I did have) a PAC file that works in IE but is ignored by Chrome.  To see what proxy file Chrome is actually pulling down, and hopefully an error that points to why it's being ignored, load this in Chrome:

chrome://net-internals/#proxy

From here you can reload the proxy settings and see which file is being pulled down.  After you try to browse to a page, you can also click through to the Events view to hopefully find a useful error.