UC Summit 2020 by UC Today

Yesterday I had the pleasure of speaking about UC Summit with Rob Scott, Chief Publisher of UC Today, an online magazine and news website that focuses on the unified communications industry.

UC Summit is a new online, virtual conference organised by UC Today that targets technology buyers from end-user customer organisations.

The aim of UC Summit is to provide a free, on-demand virtual expo where vendors from all over the UC technosphere converge to pitch their solutions to a captive audience, completely online as a video-on-demand conference.

UC Summit will also be showcasing advocates from all over the UC industry, including Microsoft MVPs and community experts, to deliver high-value technical content in sessions that complement their technology expertise.

We are pleased to announce that the Commsverse team will be presenting on Microsoft Teams at UC Summit; more information will be posted later. We are hugely excited by this opportunity, and the emergence of another great UC conference shows the demand and excitement around modern unified communications.

UC Summit will begin on the 20th January 2020. If you’d like to join in with all the free learning, please visit www.ucsummit.com and register for your pass today!


Channel Moderation in Microsoft Teams

In this episode of #TeamsIn2Minutes, Mark Vale discusses what channel moderation is in Microsoft Teams and why you may find it useful in your own teams.

Channel Moderation is a feature available to all team owners whereby they can select just a few key people within their team to act as conversation starters.

By default, all team members are able to start new conversations, but as your team grows, channels may be used to share information that isn’t necessarily relevant to the purpose of the channel, e.g. pizza nights out and other “white noise” topics.

Sometimes users accidentally click the new conversation button instead of the reply-to-thread button and then post what they think is a reply to a previous thread, when in fact it is a whole new conversation.

These two human flaws can start to dilute the content within the channel and make it harder for people to find the most important information within it.

If you’re struggling with this, then perhaps moderation is a solution for you.

You can turn it on by selecting your team, then the channel, going to Manage channel and selecting your moderators.

Moderators will then have the permission to start new conversations, while normal members may only respond to the conversations started by the moderators.

You can take this a step further by preventing normal members from replying to any thread, making your channel more of a notice board where announcements and other content can be posted without distraction.


Automating Documentation with GitHub & Markdown – Part 2 – Construct

In Part 1 I discussed the concept and structure of what I am trying to achieve. In this part I’ll show you how it all works together.

Before we get on with GitHub and document automation, I wanted to be able to create a handful of base template documents from the same set of markdown assets. I didn’t want to have to type in the filenames of each document I wanted to include every time. To solve this, I first created four or five document flows from the elements I had written, targeting the main deployment scenarios customers will want. Once I had the order of the elements for each document template, I needed a way to call the structure with ease.

I decided I would create a JSON file that a PowerShell script could use to generate the templates for me. In the JSON file I would create document options, and each option would list the markdown elements in the order I want them to appear in the final document.

"cloudcollab": [
  {
    "id":"1",
    "file":"DocIntro.md"
  },
  {
    "id":"2",
    "file":"MSTeamsLogicalArchitecture.md"
  },
  {
    "id":"3",
    "file":"MSTeamsSecCom.md"
  },
  {
    "id":"4",
    "file":"MSTeamsGovernance.md"
  },
  {
    "id":"5",
    "file":"MSTeamsLicensing.md"
  },
  {
    "id":"6",
    "file":"MSTeamsMessaging.md"
  },
  {
    "id":"7",
    "file":"MSTeamsCollaboration.md"
  },
  {
    "id":"8",
    "file":"MSTeamsClient.md"
  },
  {
    "id":"9",
    "file":"MSTeamsVDI.md"
  },
  {
    "id":"10",
    "file":"MSTeamsOperations.md"
  },
  {
    "id":"11",
    "file":"MSTeamsReporting.md"
  },
  {
    "id":"12",
    "file":"MSTeamsDisaster.md"
  }
],

In the above JSON structure I would be creating a Teams design document based on Chat and Collaboration only.

Now for storing all this on GitHub. First off, I don’t want to share all my IP with the public (sorry), so I needed to ensure it is protected. GitHub supports private repositories, so I made one for my files. If you want your team to work on the documentation, you can invite fellow workers as contributors to your private repository. On the free tier I believe you can invite up to three people; to add more, you need a paid GitHub account.

One caveat of using private repositories is that when you are authoring a document and referencing an image file from that repository in markdown, it doesn’t work because of the authentication barrier. To solve this, I created a separate, public repository to store just my images and referenced those instead.

To keep local files in sync with the repository, it is best to use GitHub Desktop.

Now, to generate the document from GitHub. I thought it would be easy to use PowerShell with Invoke-RestMethod or Invoke-WebRequest to pull the files I wanted directly from the private repository. It turns out that GitHub wouldn’t accept my authentication token using either of these methods. Several blogs and trials later I gave up and thought I had reached a dead end.

Then I found Git SCM (Git for Windows), which gives you command line tools to clone the repository. After you install it, edit your PATH variable in your system environment variables to include the following path:

C:\Program Files\Git\Bin

This allows you to reference git.exe in any script without specifying the full path to the executable.
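If you’d rather do this from PowerShell than the System Properties dialog, something like the following works (a small sketch, assuming the default Git install location):

# append Git to PATH for the current PowerShell session only
$env:Path += ";C:\Program Files\Git\Bin"

# or persist it for the current user so future sessions pick it up
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Git\Bin", "User")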

All that is left now is to clone the repository to your local machine, create the YAML front matter file, process the JSON file and build the document.

To clone the repository from PowerShell, use the following command:

git clone https://github.com/user/repo

You’ll be asked to log in to GitHub, and an access key is generated so that all future clones happen without you having to enter your username and password.

Now you need to create your YAML file. You can do this in PowerShell by prompting for all the variables, or by referencing a pre-made YAML template.
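As a rough sketch of the prompting approach (the full set of variables appears in the complete script later in this post):

# prompt for a couple of values and write them to the YAML variable file
$customer = Read-Host "Please enter customer name"
$version = Read-Host "Enter Document Version"
Set-Content -Path .\teamsdoc\docmeta.yaml -Value "customer: $($customer)`nversion: '$($version)'"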

Once you have that, and if you have used a JSON array to structure your document elements, read it into a PowerShell variable:

$file = Get-Content  .\teamsdoc\params.json | ConvertFrom-Json

Now you can create an option menu in PowerShell and press 1, 2, 3 etc. to generate a document:

function Show-Menu
{
     param (
           [string]$Title = 'Please choose which type of document to create'
     )
     Clear-Host
     Write-Host "================ $Title ================"
     
     Write-Host "1: Press '1' Collab Only."
     Write-Host "2: Press '2' Collab and Meetings."
     Write-Host "3: Press '3' Cloud PSTN Calling."
     Write-Host "4: Press '4' Direct Routing."
     Write-Host "5: Press '5' Hybrid PSTN"
     Write-Host "Q: Press 'Q' to quit."
}

do
{
     Show-Menu
     $selection = Read-Host "Please make a selection"
     switch ($selection)
     {
           '1' {
                Clear-Host
                'You chose Collab Only'
                Generate-DocTemplate -Template "cloudcollab"
           } '2' {
                Clear-Host
                'You chose Collab and Meetings'
                Generate-DocTemplate -Template "cloudcollabmeetings"
           } '3' {
                Clear-Host
                'You chose Cloud PSTN Calling'
                Generate-DocTemplate -Template "cloudcalling"
           } '4' {
                Clear-Host
                'You chose Direct Routing - To be added'
           } '5' {
                Clear-Host
                'You chose Hybrid Voice - To be added'
           } 'q' {
                return
           }
     }
    pause

}
until ($selection -eq 'q')

Finally, create yourself a little function to take the chosen structure from JSON and assemble the file elements:

Function Generate-DocTemplate{
        
        param (
            [string]$Template = 'cloudcollab'
        )
       
       $doc = $file.$Template

       # build a space-separated list of markdown files in document order
       $mdfiles = ($doc | ForEach-Object { $_.file }) -join ' '

       Set-Location -Path C:\autodoc\teamsdoc

       $test = Test-Path -Path command.cmd

       If ($test -eq $true){

        Remove-Item -Path command.cmd -Force -Confirm:$false

       }

       New-Item -ItemType file -Name command.cmd

       # write the pandoc invocation to a temporary .cmd file (see the tip below on why)
       Set-Content -Path command.cmd -Value "cd c:\autodoc\teamsdoc
       pandoc.exe $($mdfiles) --filter pandoc-mustache --toc --standalone --reference-doc $($documentreference) -o LLD.docx"

       .\command.cmd

       # move lld to final folder

       Move-Item -Path LLD.docx -Destination C:\autodoc\final

       Set-Location -Path 'C:\autodoc'

       Rename-Item -Path .\final\LLD.docx -NewName "$($customer) Teams Low Level Design.docx"

       ## clean up directory

       Remove-Item -Path .\teamsdoc -Recurse -Force -Confirm:$false


       Write-Host "Finished creating document, check c:\autodoc\final for the document. You may now press Q to quit..." -ForegroundColor Green
       
}

One tip: when you create your Pandoc command programmatically, PowerShell has trouble parsing it due to the double-dash parameters. The way around this is to temporarily create a .cmd file, write the command to it, then execute it with the command prompt. Once it has completed, the command file can be deleted.
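An alternative that avoids the temporary .cmd file altogether is the call operator with an argument array, which sidesteps the parsing trouble (a sketch I haven’t battle-tested, using the same $mdfiles and $documentreference variables as the function above):

# build the argument list as an array and splat it to pandoc.exe
$pandocArgs = ($mdfiles -split ' ') + @('--filter', 'pandoc-mustache', '--toc', '--standalone', '--reference-doc', $documentreference, '-o', 'LLD.docx')
& pandoc.exe @pandocArgs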

Full PowerShell Script:

# set directory and create directory structure.

$tdir = Test-Path C:\autodoc -PathType Container

If ($tdir -eq $false){
        
        $mdir = New-Item -Path c:\autodoc -Force -ItemType Directory
}
$tdirf = Test-Path c:\autodoc\final -PathType Container

If ($tdirf -eq $false){
    $mdirf = New-Item -Path c:\autodoc\final -ItemType Directory -Force
}

# change working directory of script

Set-Location -Path C:\autodoc -PassThru

# clone the git

    git clone https://github.com/user/repo


# get parameters file and load variables for script

$file = Get-Content  .\teamsdoc\params.json | ConvertFrom-Json

ForEach($var in $file.variables){
        
        Set-Variable -Name $var.param -Value $var.value
}

## Or use PS to Set variable values instead of parsing them from JSON
$customer = Read-Host "Please enter customer name"
$date = Get-Date -Format d-MM-y # capital MM = month; lowercase mm would give minutes
$plan = Read-Host "Enter O365 Plan e.g. E5"
$version = Read-Host "Enter Document Version"
$residency = Read-Host "Enter Tenant Location"
$audioconferencinglicenses = Read-Host "Enter how many audio conferencing licenses required"
$azureadp1licenses = Read-Host "Enter how many Azure P1 licenses needed"
$meetingroomlicenses = Read-Host "Enter how many meeting room licenses needed"
$commonarealicenses = Read-Host "Enter how many common area phone licenses required"
$domcallingplanlicenses = Read-Host "Enter how many Domestic Calling Plan licenses required"
$intcallingplanlicenses = Read-Host "Enter how many International Calling Plan licenses required"
$phonesystemlicenses = Read-Host "Enter how many phone system licenses required"
$enterpriseuserlicenses = Read-Host "Enter how many E plans are required"
$communicationcredits = Read-Host "How much will be loaded into Communication Credits?"
$virtualuserlicenses = Read-Host "How many virtual user licenses required"
$documentreference = "referencedoc.docx"
## Create YAML file

$yaml = ".\teamsdoc\docmeta.yaml"

$filetest = Test-Path .\teamsdoc\docmeta.yaml

if ($filetest -eq $true){
    
    Remove-Item -Path .\teamsdoc\docmeta.yaml -Force -Confirm:$false
}

New-Item -Path $yaml -ItemType File -Force

$content = "customer: $($customer)
supplier: $($supplier)
date:  '$($date)'
plan: $($plan)
version: '$($version)'
residency: $($residency)
audioconferencinglicenses: '$($audioconferencinglicenses)'
azureadp1licenses: '$($azureadp1licenses)'
meetingroomlicenses: '$($meetingroomlicenses)'
commonarealicenses: '$($commonarealicenses)'
domcallingplanlicenses: '$($domcallingplanlicenses)'
intcallingplanlicenses: '$($intcallingplanlicenses)'
phonesystemlicenses: '$($phonesystemlicenses)'
enterpriseuserlicenses: '$($enterpriseuserlicenses)'
communicationcredits: '$($communicationcredits)'
requiredvirtualuserlicenses: '$($virtualuserlicenses)'
virtualuserlicenses: '10'"

Set-Content -Path $yaml -Value $content

## Create document template

Function Generate-DocTemplate{
        
        param (
            [string]$Template = 'cloudcollab'
        )
       
       $doc = $file.$Template

       # build a space-separated list of markdown files in document order
       $mdfiles = ($doc | ForEach-Object { $_.file }) -join ' '

       Set-Location -Path C:\autodoc\teamsdoc

       $test = Test-Path -Path command.cmd

       If ($test -eq $true){

        Remove-Item -Path command.cmd -Force -Confirm:$false

       }

       New-Item -ItemType file -Name command.cmd

       # write the pandoc invocation to a temporary .cmd file (see the tip earlier in this post)
       Set-Content -Path command.cmd -Value "cd c:\autodoc\teamsdoc
       pandoc.exe $($mdfiles) --filter pandoc-mustache --toc --standalone --reference-doc $($documentreference) -o LLD.docx"

       .\command.cmd

       # move lld to final folder

       Move-Item -Path LLD.docx -Destination C:\autodoc\final

       Set-Location -Path 'C:\autodoc'

       Rename-Item -Path .\final\LLD.docx -NewName "$($customer) Teams Low Level Design.docx"

       ## clean up directory

       Remove-Item -Path .\teamsdoc -Recurse -Force -Confirm:$false


       Write-Host "Finished creating document, check c:\autodoc\final for the document. You may now press Q to quit..." -ForegroundColor Green
       
}
## set option menu

function Show-Menu
{
     param (
           [string]$Title = 'Please choose which type of document to create'
     )
     Clear-Host
     Write-Host "================ $Title ================"
     
     Write-Host "1: Press '1' Collab Only."
     Write-Host "2: Press '2' Collab and Meetings."
     Write-Host "3: Press '3' Cloud PSTN Calling."
     Write-Host "4: Press '4' Direct Routing."
     Write-Host "5: Press '5' Hybrid PSTN"
     Write-Host "Q: Press 'Q' to quit."
}

do
{
     Show-Menu
     $selection = Read-Host "Please make a selection"
     switch ($selection)
     {
           '1' {
                Clear-Host
                'You chose Collab Only'
                Generate-DocTemplate -Template "cloudcollab"
           } '2' {
                Clear-Host
                'You chose Collab and Meetings'
                Generate-DocTemplate -Template "cloudcollabmeetings"
           } '3' {
                Clear-Host
                'You chose Cloud PSTN Calling'
                Generate-DocTemplate -Template "cloudcalling"
           } '4' {
                Clear-Host
                'You chose Direct Routing - To be added'
           } '5' {
                Clear-Host
                'You chose Hybrid Voice - To be added'
           } 'q' {
                return
           }
     }
    pause

}
until ($selection -eq 'q')

Example JSON file (params.json) – add the params you used in your own document.

{
"variables":[
    {"param":"customer","value":""},
    {"param":"plan","value":""},
    {"param":"supplier","value":""},
    {"param":"version","value":""},
    {"param":"date","value":""},
    {"param":"enterpriseuserlicenses","value":""},
    {"param":"phonesystemlicenses","value":""},
    {"param":"domcallingplanlicenses","value":""},
    {"param":"intcallingplanlicenses","value":""},
    {"param":"audioconferencinglicenses","value":""},
    {"param":"azureadp1licenses","value":""},
    {"param":"virtualuserlicenses","value":""},
    {"param":"requiredvirtualuserlicenses","value":""},
    {"param":"meetingroomlicenses","value":""},
    {"param":"communicationcredits","value":""},
    {"param":"commonarealicenses","value":""},
    {"param":"residency","value":""}
],
"cloudcollab": [
  {
    "id":"1",
    "file":"DocIntro.md"
  },
  {
    "id":"2",
    "file":"MSTeamsLogicalArchitecture.md"
  },
  {
    "id":"3",
    "file":"MSTeamsSecCom.md"
  },
  {
    "id":"4",
    "file":"MSTeamsGovernance.md"
  },
  {
    "id":"5",
    "file":"MSTeamsLicensing.md"
  },
  {
    "id":"6",
    "file":"MSTeamsMessaging.md"
  },
  {
    "id":"7",
    "file":"MSTeamsCollaboration.md"
  },
  {
    "id":"8",
    "file":"MSTeamsClient.md"
  },
  {
    "id":"9",
    "file":"MSTeamsVDI.md"
  },
  {
    "id":"10",
    "file":"MSTeamsOperations.md"
  },
  {
    "id":"11",
    "file":"MSTeamsReporting.md"
  },
  {
    "id":"12",
    "file":"MSTeamsDisaster.md"
  }
],
"cloudcollabmeetings":[
  {
    "id":"1",
    "file":"DocIntro.md"
  },
  {
    "id":"2",
    "file":"MSTeamsLogicalArchitecture.md"
  },
  {
    "id":"3",
    "file":"MSTeamsSecCom.md"
  },
  {
    "id":"4",
    "file":"MSTeamsGovernance.md"
  },
  {
    "id":"5",
    "file":"MSTeamsLicensing.md"
  },
  {
    "id":"6",
    "file":"MSTeamsMessaging.md"
  },
  {
    "id":"7",
    "file":"MSTeamsMeetings.md"
  },
  {
    "id":"8",
    "file":"MSTeamsLiveEvents.md"
  },
  {
    "id":"9",
    "file":"MSTeamsAudioConferencing.md"
  },
  {
    "id":"10",
    "file":"MSTeamsCollaboration.md"
  },
  {
    "id":"11",
    "file":"MSTeamsClient.md"
  },
  {
    "id":"12",
    "file":"MSTeamsVDI.md"
  },
  {
    "id":"13",
    "file":"MSTeamsNetwork.md"
  },
  {
    "id":"14",
    "file":"MSTeamsPeripherals.md"
  },
  {
    "id":"15",
    "file":"MSTeamsVideoInterop.md"
  },
  {
    "id":"16",
    "file":"MSTeamsOperations.md"
  },
  {
    "id":"17",
    "file":"MSTeamsReporting.md"
  },
  {
    "id":"18",
    "file":"MSTeamsDisaster.md"
  }
],
"cloudcalling":[
  {
    "id":"1",
    "file":"DocIntro.md"
  },
  {
    "id":"2",
    "file":"MSTeamsLogicalArchitecture.md"
  },
  {
    "id":"3",
    "file":"MSTeamsSecCom.md"
  },
  {
    "id":"4",
    "file":"MSTeamsGovernance.md"
  },
  {
    "id":"5",
    "file":"MSTeamsLicensing.md"
  },
  {
    "id":"6",
    "file":"MSTeamsMessaging.md"
  },
  {
    "id":"7",
    "file":"MSTeamsMeetings.md"
  },
  {
    "id":"8",
    "file":"MSTeamsLiveEvents.md"
  },
  {
    "id":"9",
    "file":"MSTeamsAudioConferencing.md"
  },
  {
    "id":"10",
    "file":"MSTeamsCollaboration.md"
  },
  {
    "id":"11",
    "file":"MSTeamsClient.md"
  },
  {
    "id":"12",
    "file":"MSTeamsVDI.md"
  },
  {
    "id":"13",
    "file":"MSTeamsNetwork.md"
  },
  {
    "id":"14",
    "file":"MSTeamsPeripherals.md"
  },
  {
    "id":"15",
    "file":"MSTeamsVideoInterop.md"
  },
  {
    "id":"16",
    "file":"MSTeamsPhoneSystem.md"
  },
  {
    "id":"17",
    "file":"MSTeamsCallingPlans.md"
  },
  {
    "id":"18",
    "file":"MSTeamsOperations.md"
  },
  {
    "id":"19",
    "file":"MSTeamsReporting.md"
  },
  {
    "id":"20",
    "file":"MSTeamsDisaster.md"
  }
]
}

Your mileage will vary, and this is not something you can simply copy, paste and expect to work, as your variables and requirements will be different from mine. The aim was to show you the way and give you the main tooling to help you create your own solution.

For me, now I just need to run my PowerShell script and I have five document types to choose from that will give me 80% of what I need within a minute.

Taking this further, the input could be a data capture form built with Microsoft Forms that triggers a Microsoft Flow to generate the document based on the form inputs. This is the next phase of this venture.


Automating Documentation with GitHub & Markdown – Part 1 – Setting Up

One of my biggest gripes when doing customer documentation is that invariably I have to start from scratch each time. This is mostly down to the way I work, being self-employed and my nomadic work lifestyle taking me from project to project, customer to customer. But each time I enter a company to augment their team, I always seem to have to start from Document1.docx.

Well, now I have had enough, and I need to make my life easier. One of the reasons people employ me is for my skills and knowledge. So perhaps I should create a baseline document template for all my design and artefact work as part of MVC Ltd’s stock digital assets. After all, I end up writing 80% of the same stuff over and over again.

Now, I could do this just by creating my own Word document and, using OneDrive/SharePoint versioning, keep it somewhat up to date. But I’ve had constant pains in the past with document formatting in Word and keeping the flow of the document on point for the customer. I wanted something I could write easily without formatting issues, that was modular so I could pick and choose the sections I want to include in a particular document, and that I could maintain easily without having to read 200 pages while making sure references, citations etc. were all aligned.

On my journey to find a solution that worked for me, I heard a lot from the community about how Markdown and static content websites are now “the thing”. I had an idea: what if I could use this method to control my documentation and template it so that it fits almost every customer?

I have to admit, I never knew what Markdown was and was surprised at how easy it is. Want to learn? Use this website. Having used it for a month or so, I find it liberating and I can hit my flow much more easily than in Word, because my fingers never need to leave the keyboard to select a heading or insert an image. It can all be done in Markdown using special characters. For instance, for a level 1 heading use # Your heading here; for level 2, ## your level 2 heading here, and so on. Want to italicise text? Use *italic text here*. And much more.

Using Markdown allows me to create a consistent typography that is free from external influences, inherited styles, margins and so on.

So now I am convinced Markdown should be my default authoring language, what do I need in order to make this into a document?

Firstly, I needed an editor. Markdown can be written in Notepad or any plain text editor, but I wanted a more intelligent editor that interprets the plain markdown code and displays it formatted, so that I have a visual idea of how the final document will look. After a few trials I settled on Typora. Best of all, it is free!

Example of Typora Interface

The next element I needed to consider was how to structure the documentation so that I could reuse elements to create a customer facing document that was relevant to their requirements.

Writing a single markdown document wouldn’t cut it, because I would need several similar documents to cover the core scenarios I meander between. So my style of documentation and structure needed to change.

I approached it in a modular way. I would write a section on each element of Teams and split out things that could be add-ons to the core functionality, e.g. splitting Teams Meetings from Audio Conferencing and having a separate Audio Conferencing section, plus elements that cover Calling Plans, others that cover Direct Routing, and so on.

I began to create a structure where every document would have a set of base elements, to which I could add the sections I wanted. The basic rule I set out with was that I would not cross-reference sections or mention them in any other section, e.g. I wouldn’t mention Audio Conferencing in the Teams Meetings section, or Direct Routing in the Calling Plans section. That way the final document would not allude to missing sections and would stay on point.

The next issue was figuring out how to bundle these elements together to produce a final customer-facing version. I knew that documents created from this template approach would only be 80% of the final version, but at least it could save me several weeks of writing.

After searching the internet for a bit, I found a program called Pandoc, a free, open-source command line document reader and writer that supports Markdown input and converts it to a wide array of formats, including PDF, Word, HTML and more.

A sample command to generate a document with pandoc:

pandoc mymarkdownfile.md -o mywordfile.docx 

So now I have a way of converting my source files to a Word document. Next, I wanted a way to use document variables. What if I want to customise the document to include the customer name, the number of users, licenses or any other variable to make the document look more personalised to the customer, rather than a sterile, generic, boilerplate document?

Pandoc has a feature called filters. A filter is a script (the one used here is written in Python) that Pandoc runs while the document is in its AST, a middle stage between the source file and the output, before it writes the output. These scripts can be anything you can create. For me, I wanted to replace placeholders in my source files with the defined output values I wanted.

Again, I searched the internet and found a free filter Michael Stepner made called pandoc-mustache. This filter looks for placeholders within double curly brackets, {{variable here}}, hence the name mustache.

Example of a mustache variable, e.g. a markdown line that reads: The customer {{customer}} has {{totalusers}} users.

In order for Pandoc to process this filter, the filter needs to know which variables to look for in the files and what to replace them with. This is done using a variable file, which must be saved as a .yaml file.

Inside this YAML file you declare your variables and values e.g.

customer: MVC Ltd
tenantname: mvc.onmicrosoft.com
totalusers: '100'

Save the file as docmeta.yaml; in reality the name is arbitrary.

Now, in my markdown files, every document will begin with the same starting element, e.g. docintro.md, which contains things like the executive summary, dependencies, requirements, purpose and solution overview. In this file I want to add some metadata, called YAML front matter. In it we add the path to the YAML file I created with the variables.

---
mustache: .\docmeta.yaml
---

Front matter is declared by encapsulating it between three dashes at the beginning of the document.

Now that I have this, I can reference these variables in my documentation, and if I want to generate a new document I just change the variables in the YAML, run Pandoc and voilà! Document created.

pandoc mymarkdownfile.md --filter pandoc-mustache -s -o mywordfile.docx
Example Extract from created Word document

So now I had a method. What if I wanted to add more than one markdown file? Pandoc makes this really easy, you just have to declare them in the order you want them to be processed.

pandoc docintro.md teamsclient.md teamsmeetings.md --filter pandoc-mustache -s -o mywordfile.docx

Remember to include the YAML front matter only in the first document in the input list. It is not needed in the others, and if you do put it in to cover yourself, beware: there is a processing bug that writes the front matter as literal text in your Word file. I found this quite frustrating.

So now I have an end-to-end structure and process I can use to generate a document. But there are still things to consider. What if other people need to create a document from these source files? I don’t want them to have to email me to create one, nor do I want them storing their own versions of these files on their desktops.

Where you store these is entirely up to you, but bear in mind that with lots of elements the script you end up running will get quite big, and it depends on the document structure each design needs. I chose GitHub because it’s just easy!

One thing to be really aware of is inserting images. In markdown you reference the image location instead of embedding the image in the markdown file. When processed by Pandoc, the image is fetched from its location and embedded into the Word file. This means the path to the image must be accessible to others, which is another reason I chose GitHub.

To insert an image from GitHub in markdown, use this syntax:

![](https://github.com/path to image/image.png?raw=true)

Out of the box, Pandoc will use the default Word document template to format the output document. This uses the Normal style, which is unlikely to suit your corporate styling. To override this, we reference a document that has been styled to match your corporate branding.

If you haven’t got a reference document, create one from the default Pandoc template using this command:

pandoc -o custom-reference.docx --print-default-data-file reference.docx

Now modify custom-reference.docx with your styling. Pandoc uses the following style names when converting formatting:

  • Normal
  • Body Text
  • First Paragraph
  • Compact
  • Title
  • Subtitle
  • Author
  • Date
  • Abstract
  • Bibliography
  • Heading 1
  • Heading 2
  • Heading 3
  • Heading 4
  • Heading 5
  • Heading 6
  • Heading 7
  • Heading 8
  • Heading 9
  • Block Text
  • Footnote Text
  • Definition Term
  • Definition
  • Caption
  • Table Caption
  • Table Normal
  • Image Caption
  • Figure
  • Captioned Figure
  • TOC Heading

Make sure you style each of them to suit your branding. Add in any header and footer imagery or elements you want and save it.

Now when you run Pandoc, reference the template to generate a more corporate document:

pandoc mymarkdownfile.md --filter pandoc-mustache --reference-doc reference.docx -s -o mywordfile.docx
Example of a styled Word Document from reference template.

So my local proof of concept is finished; onwards to putting this on GitHub and then generating documents from that source in the next post. Before I go, though, let me go through how to install all the tooling you need.

If you want to add a table of contents at the beginning of the document, use this code

pandoc mymarkdownfile.md --filter pandoc-mustache --reference-doc reference.docx --toc -s -o mywordfile.docx

Note that it is not possible to add a cover page to the template reference document, as all content that isn’t in the header and footer sections is ignored by the Pandoc writer. So the cover page will have to be inserted post-processing.

The same components work on Mac and Windows, but the Mac setup is a little different.

Windows

First you need to install Python for Windows. Please install version 3.7.4: https://www.python.org/downloads/windows/

Once Python is installed, you need to install pip. Download pip for Windows.

Open Command Prompt and type in python c:\path to\get-pip.py

Now head over to Pandoc.org, download Pandoc for Windows and follow the install instructions.

Now we need to install some Python libraries for pandoc-mustache.

Open Command Prompt and type in

pip install panflute
pip install pyyaml
pip install future
pip install pystache

Once installed we can install Pandoc-Mustache using this command

pip install pandoc-mustache

Do not use the -U switch!

Mac OSX

You can install Pandoc using brew or by downloading the binaries. Also install Python using brew, then follow the pip commands from the Windows section. Make sure you run these under sudo.

Now your machine is prepped with the tools you need to start creating your document.

In Part 2 I will show you how to use GitHub and protect your documentation from the public view.

Microsoft Teams Location Based Routing Not Just For Toll Bypass Laws

Some of you may have been unlucky (or lucky) enough to work on global voice deployments that have encountered telephone regulations restricting the use of a flexible unified communications telephony network designed for least possible cost. LBR, or Location Based Routing to give it its full name, is a technology baked into the majority of UC systems that helps keep organizations compliant with these laws.

Traditional LBR Implementation

In this example, we see a traditional implementation of LBR in action. A user in the Dubai office wants to make a phone call to a person in the UK. The organization the user works for has a telephony gateway in the UK. Under normal conditions where there are no regulations, the organization can configure all outbound UK numbers to route through its UK gateway and therefore pay local-rate call charges instead of international rate. However, the UAE, amongst other nations, has regulations that stipulate all international (and sometimes even provincial) calls must route out of a gateway connected to the local telephone company network. This means the organization must force calls through this gateway when a user is in the Dubai office. This is where LBR comes in.

As we move towards a cloud-first telephony model, and in particular Teams, challenges arise within organizations that have been using traditional PBXs installed at various locations. In the old model, we had a number of PBXs we could configure for each individual site’s needs. In Teams we have one “PBX”-type replacement used across all sites. As a result, there are some limitations to this concept and design.

Let me give you an example. An organization has 10 sites in the UK of various sizes. At each site there is a PBX of some description. The sites are physically protected by the organization’s security team, who are responsible for building and campus security and emergencies. The organization has a policy that any employee can dial 3333 from any phone and get through to the security office personnel responsible for the site they are calling from.

How can you achieve this same experience in Teams, from desktop clients, to mobile and desk phone?

The first solution may meet most of your requirements, depending on how mobile your workforce is: create a dial plan for the site that transforms 3333 into the local DID of the security office Teams object, whether that is a single endpoint, an endpoint with a team call group, or even a Call Queue.

However, if you have a mobile workforce that floats between sites, how do you maintain continuity for them? Using the first approach, if a mobile worker dialed 3333 at a site that is not “home” to them, they would be connected to the wrong security office, which wastes valuable time and could even compromise the emergency they are calling about. IT cannot change dial plans in line with each transient move.

Another solution could be to create a different emergency number at each site: Site 1 is 3333, Site 2 is 4444, etc. But this relies on users remembering and knowing them. Probably not a feasible solution.

But wait, don’t people search by name in Teams? Can’t we just rely on that? Well, you could, but the same problem arises: what do users have to type to get the lookup record they want? Typing a name also takes up valuable time, more so on a desk phone or mobile. It’s also probably not an ideal solution.

You could compensate by creating your own Teams directory app that clearly lists all the important contacts in your organization, so all the user has to do is visually find the right contact and click to dial. Again, this functionality is currently limited to the desktop client, and in emergencies people will in most cases want to dial a number from a physical phone.

So where does this leave us?

It leaves us with two viable options. Option 1 is to deploy a contact center that the 3333 number routes to, with a voice IVR capable of routing based on spoken answers, e.g. “Please tell me the site you are calling from.” Answer: “Cardiff.” It then routes to the Cardiff site security office.

Option 2 is to use LBR to route these calls to the local security office. From the outset, you would need a local SBC at each site; whether you choose to connect them to local PSTN services or centrally is your choice. Looking at the SIP message received from Direct Routing, all client IP information is abstracted and replaced by Microsoft SBC FQDNs, so we are unable to rely on a conditional route based on the client IP in the FROM header. Shame.

Using LBR in this way would enable you to route the call placed to 3333 to any desired endpoint, whether that is in Teams or another PBX.

LBR used for Internal Routing Of Calls

In this example we have two sites, Cardiff and London. London is the main office and has an SBC connected to the PSTN via a chosen ITSP. Cardiff is a satellite site with no direct local PSTN access. Deploying an SBC there and connecting it downstream to the London SBC allows Cardiff users to share London’s PSTN access, but it can also be leveraged for this use case.

Simon is a mobile worker who splits his time between both sites. We cannot rely on Simon’s SIP message information from pstnhub to determine location. The only reliable information we have is the client IP address used by his device, and that is held only between the client and the Microsoft Teams back end. So we have to look at what we can do within Microsoft’s tooling.

By using LBR and taking his client workstation IP, we can determine which gateway to use; this allows Simon to dial 3333 at either site and get through to the local building security personnel.

In Teams, each building security object would have its own unique DDI; the SBCs would transform 3333 into that local number and route it back into Teams for lookup and dial tone. Any other number would be routed on to the London SBC for PSTN dial tone.
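For illustration, the tenant-side LBR building blocks would look something like this in PowerShell (a hedged sketch: the region, site, subnet and SBC FQDN are placeholders, not a complete configuration):

# define a network region, an LBR-enabled site for Cardiff and the client subnet
New-CsTenantNetworkRegion -NetworkRegionID "UK"
New-CsTenantNetworkSite -NetworkSiteID "Cardiff" -NetworkRegionID "UK" -EnableLocationBasedRouting $true
New-CsTenantNetworkSubnet -SubnetID "10.10.1.0" -MaskBits 24 -NetworkSiteID "Cardiff"

# associate the Cardiff SBC with the site so calls from that subnet use it
Set-CsOnlinePSTNGateway -Identity "sbc-cardiff.contoso.com" -GatewaySiteID "Cardiff" -GatewaySiteLbrEnabled $true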

At first this seems an over-engineered solution to an internal problem, but when global numbers are used to route to local services it is often very hard to change this approach. In all likelihood, any organization of this size and complexity probably has a valid use case for local PSTN access too, making this more cost effective. Smaller organizations may want to consider the options suggested earlier in this article first, as they carry less investment.

However, it seems that Location Based Routing for Microsoft Teams is going to play more of a part in modern cloud telephony than it did on-premises. For more information about how to set up LBR for Microsoft Teams, see:
https://docs.microsoft.com/en-us/microsoftteams/location-based-routing-plan

Unused Teams Audit & Warning with Microsoft Teams & Graph API

Inspired by my peers using Microsoft Graph API to interact with the Microsoft 365 substrate and in particular Microsoft Teams, I decided to give it a whirl and see what I could break.

I came up with a valid problem administrators will face as teams proliferate through their organization: how to clean up unused teams. We accept there will be a lot of teams created for test purposes. There may also be once well-used teams that have outlived their usefulness and are simply left dormant. What do we do with these?

Built into Microsoft 365 we have the ability to set group expiration policies. Two issues with these policies spring to mind: 1) they require Azure AD Premium licensing, and 2) they work on a fixed time duration approval basis. This means every X days a team/group owner will need to reconfirm they still want the team/group to remain active, which may get annoying for owners who have multiple teams.
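For reference, the built-in policy is configured with the AzureAD PowerShell module along these lines (a sketch; the 180-day lifetime and notification address are assumptions for illustration):

# requires the AzureAD module and the appropriate Azure AD Premium licensing
Connect-AzureAD
New-AzureADMSGroupLifecyclePolicy -GroupLifetimeInDays 180 -ManagedGroupTypes All -AlternateNotificationEmails "itsupport@valeconsulting.co.uk"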

Using the Graph API we have access to lots of information, some of which is inaccessible by any other means. I wanted to investigate whether I could audit teams and figure out if there has been any activity in a team within a set number of days. If not, post a message into that team warning members that unless they use it, it will be deleted or archived, whatever your policy dictates.

The metrics used are the message creation date in any team channel and any file activity on the team drive. If there has been activity in either metric within the configured time period, the team is considered active and ignored. If the last activity exceeds the configured time period, a warning is issued by posting a channel message into the team’s General channel.

To get started, I enlisted the help of Lee Ford’s amazing post on getting started with Graph API and PowerShell. I suggest you head on over to his post to discover how to create the required AzureAD App Registration.

Back? Good. There are a few alterations to the AzureAD App permissions needed for the script to work. These are:

  • ChannelMessage.Read.All
  • Group.ReadWrite.All
  • User.Read.All
  • Files.Read.All
  • Directory.Read.All

Delegated User permissions are needed to post messages in the channel, so you need to add the following for these:

  • Group.ReadWrite.All

Unfortunately, we have to use both application and delegated permissions because we cannot send a message to a team as an application; it has to be done by an account with membership of the team in question. The end result posts a warning message in the offending teams.

The script will run, collect the information and then decide whether it should post a message. If it does, it will temporarily add the delegated user to the team as an owner, post the message and subsequently remove the user from the team to maintain information security. At the end, the administrator has the option to export the affected teams to a CSV file for further analysis.

Below is the script:

<###########################################################################

SCRIPT: WARNUNUSEDTEAMS.PS1

THIS SCRIPT CAN BE LAUNCHED IN POWERSHELL AND YOU MUST HAVE AZUREAD POWERSHELL MODULE INSTALLED

TO INSTALL RUN INSTALL-MODULE AZUREAD

YOU MUST CREATE AN AZUREAD APP REGISTRATION FOR THIS SCRIPT. THIS SCRIPT REQUIRES BOTH APPLICATION
AND DELEGATED PERMISSIONS TO RUN

MORE INFORMATION HTTPS://blog.valeconsulting.co.uk

OFFERED WITHOUT WARRANTY, SUPPORT OR RESPONSIBILITY. USE AT YOUR OWN RISK

###########################################################################>


<###########################################################################

SCRIPT PARAMETERS - PLEASE MODIFY

###########################################################################>

#SET THE NUMBER OF DAYS FROM TODAY THE SCRIPT WILL CLASSIFY AS MINIMUM ACTIVITY

$days = 60

#SET YOUR AZUREAD APP ID'S, SECRET AND SUPPORT UPN USED TO RUN AS USER TO POST MESSAGES IN TEAMS HERE

$clientId = ""
$tenantId = ""
$clientSecret = ''
$supportUPN = "itsupport@valeconsulting.co.uk"

<################################################################################################

PLEASE MODIFY THE CONTENT ELEMENT WITH YOUR CHOSEN MESSAGE YOU WANT POSTING IN THE TEAM

#################################################################################################>


$body = @"
{
"body": {
      "contentType": "html",
      "content": "Hello, we have noticed that this Team has not been used for at least $($days) Days. In line with our IT Policy, if there continues to be no activity for a further 30 Days, this team will be automatically deleted. Thank you. IT Support"
    }

}
"@

<#################################################################################################

DO NOT EDIT BELOW THIS LINE

##################################################################################################>

<#################################################################################################

CREDIT TO LEE FORD (WWW.LEE-FORD.CO.UK) FOR THIS SCRIPT ELEMENT

##################################################################################################>
# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"

# Construct Body
$body = @{
    client_id     = $clientId
    scope         = "https://graph.microsoft.com/.default"
    client_secret = $clientSecret
    grant_type    = "client_credentials"
}

# Get OAuth 2.0 Token
$tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body -UseBasicParsing

# Access Token
$token = ($tokenRequest.Content | ConvertFrom-Json).access_token

Write-Host "Connected to AzureAD App and Acquired Token" -ForegroundColor Yellow

# Base URL
$uri = "https://graph.microsoft.com/beta/"
$headers = @{Authorization = "Bearer $token"}
$ctype = "application/json"
<#################################################################################################

END CREDIT

##################################################################################################>


#Get Support UPN Object ID
Write-Host "Getting Azure Object ID of Support UPN" -ForegroundColor Yellow
$objID = Invoke-WebRequest -Method GET -Uri "$($uri)users/$($supportUPN)" -ContentType $ctype -Headers $headers | ConvertFrom-Json

 
#Get all Teams
Write-Host "Getting All O365 Groups that are Teams Enabled" -ForegroundColor Yellow
$graph = Invoke-WebRequest -Method GET -Uri "$($uri)groups?`$filter=resourceProvisioningOptions/Any(x:x eq 'Team')" -ContentType $ctype -Headers $headers | ConvertFrom-Json

#For each Team now find their channels, last message and last modified file.

$results = @()
Write-Host "Analyzing Teams Activity. Please Wait..." -ForegroundColor Yellow

ForEach ($team in $graph.value){

        #Get files activity

            $drive = Invoke-WebRequest -Method GET -Uri "$($uri)groups/$($team.id)/drive" -ContentType $ctype -Headers $headers | ConvertFrom-Json

            $activity = Invoke-WebRequest -Method GET -Uri "$($uri)drives/$($drive.id)/activities?`$top=1" -ContentType $ctype -Headers $headers | ConvertFrom-Json

            $lastTime = $activity.value.times.recordedDateTime

        #Get Teams Owners

            $owners = Invoke-WebRequest -Method Get -Uri "$($uri)groups/$($team.id)/owners" -ContentType $ctype -Headers $headers | ConvertFrom-Json       

             
        #Get Channels from Team

           $channels = Invoke-WebRequest -Method GET -Uri "$($uri)teams/$($team.id)/channels" -ContentType $ctype -Headers $headers | ConvertFrom-Json

        #Loop through channels and get last message by date      

                ForEach ($ch in $channels.value){

                    $chmsg = Invoke-WebRequest -Method GET -Uri "$($uri)teams/$($team.id)/channels/$($ch.id)/messages?`$top=1" -ContentType $ctype -Headers $headers | ConvertFrom-Json

                    #if there was a message and it was posted over the time period set warning flag to true
                    
                    if ($chmsg.value.createdDateTime -ne $null){
                        
                        $time = New-TimeSpan -Start $chmsg.value.createdDateTime -End (Get-Date)

                        if ($time.Days -gt $days){

                            $warn = $true

                        }else{
                            
                            $warn = $false

                        }

                    }else{
                        
                        $warn = $true

                    }

                    #if there has been no chat activity, check file activity

                    if ($warn -eq $true){
                            
                            if ($lastTime -ne $null){
                                    
                                    $ftime = New-TimeSpan -Start $lastTime -End (Get-Date)                                    

                                    if ($ftime.Days -le $days){

                                        #if there has been file activity within the configured number of days, turn the warning flag off

                                        $warn = $false    
                                    }

                            }

                    }

                    #store all results in array
                    $results += New-Object -TypeName psobject -Property @{TeamID=$team.id;TeamName=$team.displayName;ChannelID=$ch.id;ChannelName=$ch.displayName;MessageDate=$chmsg.value.createdDateTime;MessageDaysOld=$time.Days;FileDate=$lastTime;FileDaysOld=$ftime.Days;Owners=$owners.value.mail;Warn=$warn}


                }
}

#filter the results so that only teams with warning flags set to true are used

$teamfilter = $results | Where {$_.Warn -eq $true -and $_.ChannelName -eq "General"}

Write-Host "Require User Authentication for Message Sending..." -ForegroundColor Yellow

<##################################################################################################

CREDIT TO LEE FORD (WWW.LEE-FORD.CO.UK) FOR THIS SCRIPT ELEMENT

###################################################################################################>

#authenticate with the IT user account to post a message to the general channel of these groups.

# Azure AD OAuth User Token for Graph API
# Get OAuth token for a AAD User (returned as $token)

# Add required assemblies
Add-Type -AssemblyName System.Web, PresentationFramework, PresentationCore

# Application (client) ID and tenant ID are reused from the script parameters at the top; set the redirect URI
$redirectUri = "https://login.microsoftonline.com/common/oauth2/nativeclient"


# Scope - Needs to include all permisions required separated with a space
$scope = "User.Read.All Group.ReadWrite.All" # This is just an example set of permissions

# Random State - state is included in response, if you want to verify response is valid
$state = Get-Random

# Encode scope to fit inside query string 
$scopeEncoded = [System.Web.HttpUtility]::UrlEncode($scope)

# Redirect URI (encode it to fit inside query string)
$redirectUriEncoded = [System.Web.HttpUtility]::UrlEncode($redirectUri)

# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/authorize?client_id=$clientId&response_type=code&redirect_uri=$redirectUriEncoded&response_mode=query&scope=$scopeEncoded&state=$state"

# Create Window for User Sign-In
$windowProperty = @{
    Width  = 500
    Height = 700
}

$signInWindow = New-Object System.Windows.Window -Property $windowProperty
    
# Create WebBrowser for Window
$browserProperty = @{
    Width  = 480
    Height = 680
}

$signInBrowser = New-Object System.Windows.Controls.WebBrowser -Property $browserProperty

# Navigate Browser to sign-in page
$signInBrowser.navigate($uri)
    
# Create a condition to check after each page load
$pageLoaded = {

    # Once a URL contains "code=*", close the Window
    if ($signInBrowser.Source -match "code=[^&]*") {

        # With the form closed and complete with the code, parse the query string

        $urlQueryString = [System.Uri]($signInBrowser.Source).Query
        $script:urlQueryValues = [System.Web.HttpUtility]::ParseQueryString($urlQueryString)

        $signInWindow.Close()

    }
}

# Add condition to document completed
$signInBrowser.Add_LoadCompleted($pageLoaded)

# Show Window
$signInWindow.AddChild($signInBrowser)
$signInWindow.ShowDialog()

# Extract code from query string
$authCode = $script:urlQueryValues.GetValues(($script:urlQueryValues.keys | Where-Object { $_ -eq "code" }))

if ($authCode) {

    # With Auth Code, start getting token

    # Construct URI
    $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"

    # Construct Body
    $body = @{
        client_id    = $clientId
        scope        = $scope
        code         = $authCode[0]
        redirect_uri = $redirectUri
        grant_type   = "authorization_code"
        client_secret = $clientSecret
    }

    # Get OAuth 2.0 Token
    $tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body

    # Access Token
    $2token = ($tokenRequest.Content | ConvertFrom-Json).access_token

}
else {

    Write-Error "Unable to obtain Auth Code!"

}
<##############################################################################################################

END CREDIT

###############################################################################################################>

# Base URL
$headers = @{Authorization = "Bearer $2token"}
$uri = "https://graph.microsoft.com/beta/"


Write-Host "Posting Messages in Affected Teams..." -ForegroundColor Yellow

ForEach ($team in $teamfilter){
         
         
        #Add IT Support Account to the Team as an Owner

         $userbody = @"
                    { 
                    "@odata.id": "https://graph.microsoft.com/beta/users/$($objID.id)" 
                    }
"@
        
        try{
            Invoke-WebRequest -Method POST -Uri "$($uri)groups/$($team.TeamID)/owners/`$ref" -Body $userbody -Headers $headers -ContentType $ctype -ErrorAction Stop
        }catch{
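            # deliberately swallow the error, e.g. if the support account is already an owner of this team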

        }

        #send message in Team Channel
        
        try{
            Invoke-WebRequest -Method POST -Uri "$($uri)teams/$($team.TeamID)/channels/$($team.ChannelID)/messages" -ContentType $ctype -Headers $headers -Body $body -ErrorAction Stop
        }catch{
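            # deliberately swallow the error so the remaining teams are still processed if a post fails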

        }

        
       
        #Remove IT Support UPN from Team

        try{
            Invoke-WebRequest -Method Delete -Uri "$($uri)groups/$($team.TeamID)/owners/$($objID.id)/`$ref" -Headers $headers -ContentType $ctype -body $userbody -ErrorAction Stop
        }catch{
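            # deliberately swallow the error, e.g. if the support account had already been removed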

        }

        

       
}

Write-Host "Cleaned Up Team Membership" -ForegroundColor Yellow

$report = Read-Host "Do you want to export the results to a CSV (y/n)"

if ($report -ieq "y"){

    $results | Export-Csv -Path C:\Temp\Teamsreport.csv -NoTypeInformation -Force

}

Write-Host "Finished" -ForegroundColor Green

The script can be downloaded here as well

Migrating to Microsoft Teams from Hosted Skype

In this article, I wanted to discuss how you could move to Microsoft Teams if you’re currently in a hosted Skype for Business environment. Depending on your contract with your provider, you could be using a dedicated compute and software instance for your Skype workload, or a multi-tenanted version of Lync 2013. Obviously, your existing hosting provider should, and probably can, help you on your journey to Microsoft Teams in either of these environments. However, what if you’re using Microsoft Teams as the driver to exit a contract or service altogether? Can it be done? And what are the penalties for doing so?

At first breath, you may be thinking that this challenge is a bridge too far and that you feel locked into a model that no longer suits your organization’s growth. However, the journey to Microsoft Teams can be simpler and less painful than you initially envisioned.

First, talk to your hosting provider about their capabilities in moving you to a Microsoft Teams solution that will support your needs. This is often the path of least resistance. However, it may tie you to a service contract you’re not entirely happy with, in which case you’ll want to consider your options.

Your organization should own your Office 365 tenant (assuming you have one). You shouldn’t rent this from anyone other than Microsoft and you should have full control over its evolution. If you do not have an Office 365 tenant, then sign up for one.

Obviously there are many different scenarios you could already find yourself in that add complexity to your move. However, as long as you’re not currently consuming Skype for Business Online as part of your hosted Skype solution, read on. If you are, then this may not work as intended for you.

1. Tenant Preparation

You can do a lot of configuration in your tenant to support Teams without impacting your users’ Skype functionality. You can configure identity and access management, create Teams policies, compliance settings etc., all without BAU being impacted.

2. Domain Registration

As Teams relies on UPN rather than SIP for chat and call routing, you can (with a caveat) register what will become your existing SIP domain with your tenant without impacting Skype. The caveat is that you shouldn’t enable the domain for Skype for Business use. If you do, any federated partners you currently communicate with who are on Skype for Business Online will no longer be able to reach you via Skype.
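As a sketch, registering the domain with the MSOnline module looks like this (the domain name is a placeholder; you verify ownership by publishing the returned TXT record, leaving the Skype service unselected):

# add the SIP domain to the tenant and retrieve the DNS record needed to verify ownership
Connect-MsolService
New-MsolDomain -Name "domain.com"
Get-MsolDomainVerificationDns -DomainName "domain.com" -Mode DnsTxtRecord

# once the TXT record has propagated, confirm the domain
Confirm-MsolDomain -DomainName "domain.com"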

3. Configure Direct Routing

Registering your domain allows you to configure Direct Routing in advance of your move to Teams, without Skype for Business hybrid being set up. If you are concerned about your domain registration, use a sub-domain as the registered domain for your SBC configuration and address the main domain further down the line.

Assuming you have made it this far, you can create all your calling policies for your users, test out calling scenarios and make sure that the solution is stable at your own pace.
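For illustration, the core Direct Routing configuration of the time looks something like this (a minimal sketch; the FQDN, port, usage name and number pattern are placeholders for your own values):

# pair the SBC with the tenant
New-CsOnlinePSTNGateway -Fqdn "sbc.domain.com" -SipSignallingPort 5067 -Enabled $true

# create a PSTN usage, a route that sends UK numbers to the SBC, and a policy to grant to users
Set-CsOnlinePstnUsage -Identity Global -Usage @{Add="UK"}
New-CsOnlineVoiceRoute -Identity "UK" -NumberPattern "^\+44" -OnlinePstnGatewayList "sbc.domain.com" -OnlinePstnUsages "UK"
New-CsOnlineVoiceRoutingPolicy -Identity "UK-Users" -OnlinePstnUsages "UK"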

4. Enabling Users for Teams

Although this is labelled as step 4, in reality you could do this alongside step 3, as the two are not intrinsically linked. Enabling users for Teams chat, P2P AV, meetings and collaboration at this stage means users get to start using Teams whilst your hosted Skype service is unaffected for important workloads like Enterprise Voice. This gives you a head start with early adoption while you focus on the more complex voice element of your move.

An important consideration at this stage is users will need to continue to use Skype for their Voice and any federation communication with external partners.

Think of this stage as a Pseudo-Islands Mode, but your tenant is operating in Teams Only Mode.
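
Putting the tenant (or pilot users) into Teams Only mode is a single cmdlet; a minimal sketch, with an illustrative UPN:

# Upgrade one pilot user to Teams Only; omit -Identity to grant tenant-wide
Grant-CsTeamsUpgradePolicy -PolicyName UpgradeToTeams -Identity "user@contoso.com"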

5. Move Federation

When you are ready, you need to move federation away from Skype and over to Teams. You can do this by enabling the Skype service on your Office 365 domain and changing just the SRV record for _sipfederationtls._tcp.domain.com to target sipfed.online.lync.com over port 5061 in your public DNS. After propagation has completed, federated chat will move from Skype to Teams.
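
You can check propagation from PowerShell; something like the following (domain illustrative) should return the new target:

# Verify the federation SRV record now points at Office 365
Resolve-DnsName -Name "_sipfederationtls._tcp.contoso.com" -Type SRV
# Expect target sipfed.online.lync.com on port 5061 once propagation completes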

6. Move Enterprise Voice

You have two choices here, or maybe a mix of both: new DDIs on Direct Routing, or number porting. With the latter, initiate your port request to your Direct Routing SIP service. Assign users their phone numbers in Teams in advance of the porting schedule so that when the port completes, calls are delivered to Teams instantly, causing very little disruption to service.
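
Pre-staging a user for Direct Routing ahead of the port might look like this sketch; the UPN, number and policy name are illustrative:

# Assign the number and voice routing policy before the port completes
Set-CsUser -Identity "user@contoso.com" -EnterpriseVoiceEnabled $true -OnPremLineURI "tel:+441234567890"
Grant-CsOnlineVoiceRoutingPolicy -Identity "user@contoso.com" -PolicyName "DirectRouting"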

7. Say Goodbye to Your Hosting Provider

That’s it, you can now turn off your hosted Skype as your organization is now using Teams and you never had to tell them a thing!

This article is not chapter and verse; it’s about showing you one way to achieve something. Your mileage may vary depending on your complexity, so please consider carefully before executing.

PSTN Survival With Microsoft Teams, Polycom VVX and Ribbon SBC

At EC19, Yealink announced a partnership with Ribbon to deliver PSTN survival with Microsoft Teams for their Teams phones. I don’t have a Yealink, but I do have a few VVXs lying around my home office, so I thought I would give it a shot and see if I could get this working on other handsets. Turns out I can, and that means I do not have to invest in new hardware for this disaster-event solution. Better still, it works for Skype for Business On-Prem and Online as well as Microsoft Teams!

So, how does it work conceptually? Basically the VVX phone registers to both Skype for Business Online (Teams via SIP Gateway) and to the SBC at the same time. When registration fails to Skype for Business / Teams, the phone will failover the active registration to the SBC and become essentially a basic SIP phone for making and receiving calls.

In order to facilitate this functionality, there are a couple of prerequisites.

  • The SBC must have local SIP registrar licenses to cover the number of phones you want to have this capability
  • The VVX phones must be running UCS 5.8 onwards

I will say right now that setting this up is not easy, or scalable. This solution is really meant for mission-critical phones that must survive a failure, not every phone in your business; why will become apparent as you read on. Basically, you would provide this capability for your senior execs, inbound sales / support teams, and main office reception type scenarios.

First, we need to configure the Cloud IP Phone Policy in the tenant so we can disable firmware management by Office 365. The reason is that we need UCS 5.8 or higher, and Microsoft will force a rollback to UCS 5.6 if the firmware is managed by Office 365. As there is only the global policy, all phones will be affected.

Set-CsIPPhonePolicy -Identity Global -EnableDeviceUpdate $False

Now update your VVX to UCS 5.8 either by Phone Web UI or on-prem provisioning server.

Before we touch the phone any further, we need to set up the SBC to support this, assuming you now have the required Local Registrar license installed.

First on the SBC go to SIP > Local Registrars and create one. I’ve called mine “Teams Fallback SIP”

Now from SIP > Local / Pass thru Auth tables create an auth table. I’ve called mine “Teams Local Fallback”.

In this auth table, create an account for the phone that you want to survive. Note it is important that the address URI is the same as the DDI assigned to the user in Teams. The username and password can be anything, but for simplicity’s sake the username is the DDI and I’ve set the password to 12345.

Now that we have the account set up, we need to catch registrations, so we need to create a signalling group. Under Signalling Groups create a SIP SG; I’ve called mine “Teams Fallback Phone”.

The settings of the SG should be:

  • Call Routing Table (select any for now – we will come back to this)
  • SIP Profile: Default
  • SIP Mode: Local Registrar
  • Registrar: Teams Fallback SIP
  • Media List: Default
  • Listen Ports: 5060 TCP/UDP
  • Federated IP: <your network range>

Now create a call route table to handle outbound calls from phones when they are in a fallback mode.

Go back to the Signalling Group and set the call routing table to this and apply the settings.

We now have the bones of the configuration we need, so let’s configure the outbound call route. From the routing table we created, add a route to your ITSP. You can reuse the transformation tables created for your Teams -> ITSP route if they are compatible.

This is now outbound calling configured. Now let’s configure inbound.

For inbound we need to ensure that the fallback route is only tried if the primary (via Microsoft Teams Direct Routing) is unresponsive. There are a few ways in which to achieve this, but I am going with the simple way. Rather than using cause code re-routes, I am simply going to add my fallback signalling group to the existing ITSP -> Teams route entry as a second possible destination for calls.

The way this works is that destination SGs are attempted on a first-to-last basis. As the Teams SG will almost always be up, calls will always route via that. In an outage, which is what we care about here, it will not be available, so the second SG is tried, and this points to our local SIP registrar.

The SBC configuration is now complete. Now we need to configure the VVX phone.

You will need to add the following configuration to your phone’s cfg file

feature.sfbPstnFailover.enabled="1"
reg.1.srtp.simplifiedBestEffort="1"
reg.1.server.2.address="192.168.1.252"
reg.1.server.2.pstnServerAuth.userId="+441782977074"
reg.1.server.2.pstnServerAuth.password="12345"
call.enableOnNotRegistered="1"

Where 192.168.1.252 is the IP address of your SBC, and the userId and password match the SIP account we created in the local auth table.

Now the solution is ready, the last note of interest is the experience.

By default, phones register to Office 365 for a period of 10 minutes. Usually, the phone re-registers when the timer reaches 50%, i.e. 5 minutes. The phone is a single-line device and therefore can only have one active registration for calls at any one time.

During a failure event, there may be a period of up to 10 minutes where no calling is possible until the registration with Office 365 times out; the phone will then automatically mark the backup registration active. Inbound calls during this window will receive a busy tone, the phone responding with a SIP 486 “Busy Here” message.

Once the phone realises that it can no longer register to Office 365, calls will proceed as normal, but the phone will be in a basic mode, which is nothing more than a landline-type service.

As you can see, the solution would be quite hard to scale beyond the few critical phones you need, and it is quite limited, but it gives your critical users something rather than nothing at a time when you need to be focused on restoring service, not providing ad-hoc workarounds on a case-by-case basis.

Microsoft Teams & Skype for Business Online Back-end Provisioning Monitor Script

Working in the cloud should be fast, but sometimes you’ve just got to wait it out. One of the biggest pain points for me is the lag between licensing a user in Office 365 and Skype for Business Online completing its back-end provisioning so I can actually start assigning policies, phone numbers, etc.

This delay can range from a minimum of 30 minutes to 24 hours! There is nothing I can do to speed it up and the biggest challenge is providing a predictable experience to the end user. Typically, I want to license and then do something in Skype. With this delay, I am not going to sit around and keep checking when I can actually complete the task. I’m going to do other stuff.

The problem with this is that I am introducing a lag between the back-end reaching a ready state and bringing myself back to the task. This could lead to the end user discovering functionality before I have tailored it to their needs.

This Skype Online provisioning applies to Microsoft Teams as well, so this is needed if you’re deploying Teams too. Skype for Business Online exposes two properties, assigned plan and provisioned plan, which you can access by pulling the user object out of PowerShell. Assigned Plan is the core functionality we have given the user based on their Office 365 licenses, and Provisioned Plan is what has actually been provisioned so far. There may (in fact, will) be a drift between these two properties when a user is first licensed; this is what takes time to get into sync.

Having grown tired of this problem, I created a script that monitors the license provision in Office 365 every 5 minutes. If all assigned Skype licenses return a success, the script continues to Skype Online and checks the provisioned plan against these licenses, rechecking every 5 minutes until all assigned plans return a success. Upon which, I can then add my in-band configuration commands such as Grant-CsTeamsMeetingPolicy etc.

This now means all I need to do is enter the user’s UPN into the script and hit enter. Simply call the script from the PS window:

 .\SkypeProvisioningStatus.ps1 -upn user@mvc-labs.com
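
For reference, here is a minimal sketch of that kind of loop, assuming the AzureAD module and using a simple plan-count comparison; it is not the full script, which can check each plan individually.

# Sketch of a provisioning monitor; assumes Connect-AzureAD has been run
param([Parameter(Mandatory)][string]$upn)

do {
    $user        = Get-AzureADUser -ObjectId $upn
    # Compare how many enabled plans are assigned vs successfully provisioned
    $assigned    = ($user.AssignedPlans    | Where-Object { $_.CapabilityStatus   -eq "Enabled" }).Count
    $provisioned = ($user.ProvisionedPlans | Where-Object { $_.ProvisioningStatus -eq "Success" }).Count
    if ($provisioned -lt $assigned) {
        Write-Host "Provisioning in progress... next check in 5 minutes"
        Start-Sleep -Seconds 300
    }
} until ($provisioned -ge $assigned)

Write-Host "Back-end provisioning complete for $upn" -ForegroundColor Green
# In-band configuration can now follow, e.g. Grant-CsTeamsMeetingPolicy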

Self Service Phone Number Assignment in Microsoft Teams with Automated Provisioning

So you’ve got Microsoft Teams and you’ve got some calling plans. You don’t have enough to give every user one, so you come up with a business process that lets you be selective based on business justification. How do you integrate that business process to make it easy for IT and your users to follow?

Traditionally, you’d maybe create a shopping cart item in your service management portal and instruct the user to go there and place an order. On submission, it would trigger an approval process and maybe even a provision-on-approval action. But what if you don’t have this expensive solution? What if you want to extract the value out of your Microsoft 365 subscription?

Well you can do this very easily and without too much hard work. In the following example I am using a simple online form that a user can submit and on submission, trigger a bunch of events that will ultimately lead to them being provisioned a phone number in Microsoft Teams.

For this exercise, we need the following components, readily available in any Enterprise subscription in Office 365:

  • Microsoft Forms – Used for the end user form
  • Microsoft Flow – Used as a business process and conditional trigger / action solution (like IFTTT, but better)
  • AzureAD – We need this for Azure Group Based Licensing (easier to manage)

In addition to these inclusive features of your Office 365 subscription, we also need an Azure subscription so we can use the benefits of Azure Automation Accounts. We use Automation Accounts to store Azure Runbooks, which are scripts triggered as jobs. We use Flow to trigger these jobs.

By the end of this article, you will have a basic, automated provisioning process template to build on. My job is to show you the way, so the template is devoid of any error checking, which in a production scenario you’d absolutely need.

The experience will be this:

  • User will access request form online, complete and submit
  • Request will be emailed to their manager who will then approve* or reject the request.
  • On approval, the user will be licensed with the correct Office 365 licenses and automatically assigned a phone number from the Microsoft Cloud
  • The user will get an e-mail upon completion confirming the action has been completed and their new phone number that has been allocated.
  • *If the manager rejects, the user will get a rejection email

Creating Your Automation Account & Scripts

Head on over to the Azure Portal (https://portal.azure.com) and make sure that you have a valid subscription.

Go to Azure AD and create two Azure AD security groups (I’m assuming you’re familiar with Azure Group Based Licensing):

  1. Teams Standard Phone User
  2. Teams International Phone User

In the standard user group assign the base O365 license e.g. E3 or E5 with Phone System and your domestic calling plan. In the international group assign the base license and your international calling plan, or communication credits.

Note down the Group Object ID of each of these groups. You’ll need them soon.
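
If you’d rather script the group creation, a sketch along these lines works; the display names and mail nicknames are whatever you prefer:

# Create the two licensing groups and note their ObjectIds
Connect-AzureAD
$std  = New-AzureADGroup -DisplayName "Teams Standard Phone User" -SecurityEnabled $true -MailEnabled $false -MailNickName "TeamsStdPhone"
$intl = New-AzureADGroup -DisplayName "Teams International Phone User" -SecurityEnabled $true -MailEnabled $false -MailNickName "TeamsIntlPhone"
$std.ObjectId
$intl.ObjectId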

In the search bar type: automation accounts


This will take you to the Account page. Click the Add+ button and create an automation account (best to choose same data center location as your tenant)

Next, in the automation account you have just set up, click on the Credentials blade and add a credential that has the privileges to run your scripts. Give it a friendly name, e.g. “Cred” (if you use a different name, update the scripts to match).

Now we need to load the required modules into the automation account. Open the modules blade and click on browse gallery

Search for the AzureAD Powershell Module

Add this module to your automation account. Next, we need to load the SkypeOnline PS module. This is not available in the gallery. Assuming you have it installed somewhere on your PC, you will need to ZIP the contents of the SkypeOnlineConnector folder located in c:\program files\common files\skype for business online\modules.

Now that you have zipped this folder, upload it as a module to the modules blade in the automation account by clicking on Add a Module. You should see it become available after around 10 minutes.

Now click on the Runbooks blade and create 3 runbooks:

  1. For Licensing the User
  2. For Checking Provisioning Status of User
  3. For Provisioning the User

Make sure all three runbooks are PowerShell runbooks.

You should have 3 blank runbooks created now like this

Now load the scripts into each runbook.

Open the LicenseUser runbook and paste the following code in, replacing as necessary (remember those Azure AD licensing group GUIDs? You need them now).
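
The original code was embedded separately; as a hedged sketch, the runbook boils down to adding the user to the right licensing group, something like this (the group ObjectIds and credential name are placeholders):

param(
    [Parameter(Mandatory)][string]$UPN,
    [Parameter(Mandatory)][string]$InternationalCalling
)

# Authenticate with the credential stored in the automation account
$cred = Get-AutomationPSCredential -Name "Cred"
Connect-AzureAD -Credential $cred | Out-Null

# Pick the licensing group based on the form answer
$groupId = if ($InternationalCalling -eq "Yes") { "<International group ObjectId>" } else { "<Standard group ObjectId>" }

$user = Get-AzureADUser -ObjectId $UPN
Add-AzureADGroupMember -ObjectId $groupId -RefObjectId $user.ObjectId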

Save the runbook and publish it.

Now open the ProvisionCheck runbook and paste the following code in. Again, replace as necessary. This script checks Skype continuously until provisioning has completed; we need it to halt the number assignment until we can actually perform it.
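
Again, the original code was attached separately; conceptually it is the same loop as the provisioning monitor script earlier, finishing by writing a JSON payload for Flow to pick up. A minimal sketch, with placeholder credential name:

param([Parameter(Mandatory)][string]$UPN)

$cred = Get-AutomationPSCredential -Name "Cred"
Connect-AzureAD -Credential $cred | Out-Null

# Wait until every enabled assigned plan reports as provisioned
do {
    $user        = Get-AzureADUser -ObjectId $UPN
    $assigned    = ($user.AssignedPlans    | Where-Object { $_.CapabilityStatus   -eq "Enabled" }).Count
    $provisioned = ($user.ProvisionedPlans | Where-Object { $_.ProvisioningStatus -eq "Success" }).Count
    if ($provisioned -lt $assigned) { Start-Sleep -Seconds 600 }
} until ($provisioned -ge $assigned)

# Return JSON matching the Parse JSON schema used in the Flow below
Write-Output (@{ status = "Ready" } | ConvertTo-Json)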

Save the Runbook and Publish.

Finally, open the TeamsPhoneUser runbook and paste the activation code in.
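
As with the other two runbooks, the activation code was attached separately. Here is a hedged sketch of the idea, assuming the SkypeOnlineConnector module we uploaded earlier and a calling plan number pool; names are illustrative:

param(
    [Parameter(Mandatory)][string]$UPN,
    [Parameter(Mandatory)][string]$Location
)

$cred = Get-AutomationPSCredential -Name "Cred"
Import-Module SkypeOnlineConnector
$session = New-CsOnlineSession -Credential $cred
Import-PSSession $session -AllowClobber | Out-Null

# Grab the first unassigned calling plan number and the emergency location for the site
$number     = (Get-CsOnlineTelephoneNumber -IsNotAssigned | Select-Object -First 1).Id
$locationId = (Get-CsOnlineLisLocation -City $Location | Select-Object -First 1).LocationId

Set-CsOnlineVoiceUser -Identity $UPN -TelephoneNumber $number -LocationID $locationId

# Return JSON matching the second Parse JSON schema in the Flow
Write-Output (@{ upn = $UPN; phone = $number } | ConvertTo-Json)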

Save the Runbook and Publish.

That’s it for Azure Automation, now the fun stuff can begin.

Create the Form

Head on over to Microsoft Forms (https://forms.microsoft.com) and create yourself a form. In my example here we have a simple form that asks the following questions:

  • What is your sign in address (UPN)
  • What Site Are You Normally At (Dropdown)
  • Who is Your Manager
  • Do you Need International Calls
  • What is Your Justification Reason

My form looks like this:

How you create your form, and how you word your questions will affect the flow, so for this example I recommend that you use the same wording as I have.

Now for the Flow…

Creating the Flow

OK, now before we go into Flow and start doing stuff, let’s just recap what we need to do. We need to do something on form submission, so this is the entry point into the flow.

Next we need to get the contents of the form so we have some data to use. Without it we are stuffed.

We then need to invoke an approval workflow that gives someone the authority to approve or reject the request. Then, on either outcome, we need to carry out the approver’s decision.

One of two things is going to happen at this point. If the approver approves the request, the flow will kick off all the cool stuff and provision the user. If they reject it, the requestor is going to get a rejection email.

Now go to Microsoft Flow and create a flow.

Add your entry point by adding an action and searching for Microsoft Forms. Select Forms, then select Triggers and select “When a new response is submitted”. Then choose the Form Id.

Now we need to apply the flow actions to each submission. Add an action and search for “Apply to each”, then select the output “List of response notifications”.

Inside this control we are going to house all our actions. Now we need to get the form content, so we search for Forms again and then choose “Get response details”. Select the Form Id and select the Response Id from the menu.

Now add another action and search for “Start an approval”. In this example it’s a single-user approval (their manager). Compile the approval email as you need; you can insert values from the form for clarity, like this:

Note I used the manager’s email field from the form as the Assigned To address so that they get the email.

Now we need a condition to decide what to do if we get an approval or rejection back. Add an action and search for “Condition”.

Add in the response from the approval and the expected value, e.g. “Approve”.

Now you’ll get a Yes and a No branch. The No branch is what to do if the response is not Approve. In this example, we are just going to send an email to the requestor.

In the No branch, add an action and search for “send email”. Fill out the email with the information you can pull from the submitted form, e.g. the requestor UPN and any other information you want.

Step check: take a breather. When minimised, your flow should now look like this.

Now for the Yes Branch.

The first thing we need to do is call our automation runbook for licensing the user with the appropriate requirements for calling. Add an action, search for “automation”, choose Azure Automation and select “Create job”.

Select your Azure subscription, resource group and the automation account we created, then select the LicenseUser runbook. You should see it is asking for two input parameters. Choose the email address of the requestor from the form as the UPN and, for International Calling, the result of the international calling question in the form, like this.

Next, we need to sit and wait for licensing and back-end provisioning to complete before we can actually assign a number to the user; if we try now, it will fail. This can take up to 24 hours, so what do we do? This is where the second runbook comes in. It checks the user’s Skype and Teams licensing and back-end provisioning repeatedly, every 10 minutes, until all the expected assigned plans have been provisioned. Only then will it return a value which we can then action on.

Create a new Azure Automation job, doing the same as before, this time choosing the ProvisionCheck runbook.

From this runbook we are expecting something back, and we need to tell Flow what to do with it. Add an action, search for Azure Automation and choose “Get job output”. Here we pass the job id of the previously submitted job so we can pull its output.

Now, the output from the job is in JSON format, so we need to tell Flow to parse JSON. Add an action and search for “Parse JSON”. The content will be the Content output from the Azure job, and we need to tell Flow the content type and the properties to expect. In the schema, enter the following.

We are expecting a JSON object back; “status” is the property we are expecting from the script, and its content is a string value.

 
{
  "type": "object",
  "properties": {
    "status": {
      "type": "string"
    }
  }
}

Progress check. You should now have these actions under the Yes Branch

Now we need to add a condition: if the property “status” equals “Ready”, go ahead and provision; if not, send an email to IT Support telling them of a flow failure. Add this condition.

Now, under the No branch, add a send email action like before, this time sending it to your IT support desk.

Now to the Yes branch, what do we want to do if the result is set to Ready?

Add an action and create a new Automation job, this time calling the TeamsPhoneUser runbook and supplying the UPN and location of the user from the form.

Now again, we are expecting a result back from this job: it will contain the UPN of the user and the phone number we allocated them. So add another action to get the job output.

Again, the format of the output is JSON, so we need to parse it in order to send an email to the requestor informing them of their new number. Add a Parse JSON action and use the following schema.

 
{
  "type": "object",
  "properties": {
    "upn": {
      "type": "string"
    },
    "phone": {
      "type": "string"
    }
  }
}

The last action we need is to send an email to the requestor informing them that their request has been approved and completed. Add an email action and place the phone property in the email body, like so.

The complete flow should look like this

Seeing It All In Action

The user can now go to the form, complete it and submit. On submission, their manager gets an approval email.

As the manager, I click Approve and I am told that my response has been submitted.

In Flow I can see that my flow completed

If I check Azure Automation I can see my Runbook Jobs Completed

If I want I can click in to each one and see the output

If I go into the Teams Admin Center, I can see if the user has been provisioned with this number

The user will receive their email like this:

And that is it. It seems quite complex when you write it up in a blog, but it’s very straightforward and only takes an hour, if that, to set up (the longest bit was writing the simple scripts).

Obviously, you can take this as far as you want. But for a simple self-service phone number assignment tool, it does the job!
