Category: Coding

SharePoint REST API Does Not Allow Unindexed Queries

I’ve been developing code templates for CRUD operations (that’s a real acronym — Create, Read, Update, Delete) against SharePoint — we need to use SharePoint lists to replace database tables. Retrieving information worked fine until I tried to filter the data through the REST call. SharePoint throws a generic error about exceeding some admin-set limit. (1) I know the limit, I can see the limit. The limit is 5,000, and I know my filtered result set is 121 records. WAY lower than 5,000. Oh, and (2) I can run the query without the filter — I’m paging it! — and read all 29,887 records, so what does the limit have to do with anything? Reasoning with an HTTP response … well, doesn’t work. No matter how unassailable my argument is, the API call still returns:

{"error":
    {"code":"-2147024860, Microsoft.SharePoint.SPQueryThrottledException",
     "message":{"lang":"en-US",
       "value":"The attempted operation is prohibited because it exceeds the list 
                view threshold enforced by the administrator."}}}

It is, it turns out, a poorly worded error. I started thinking about the query limits on my LDAP servers — we have hard limits on operations and also require that most people perform queries against indexed attributes. It’s computationally expensive to search through unindexed attributes (and the Right Thing To Do, generally speaking, is to add an index for anything that is a frequent query target). I wondered if there was an analogous “no unindexed queries” setting in SharePoint. Quick enough to test — add an index on the column(s) you use in the filter. In the site content listing, click the sideways hamburger menu by the list name. Select “Settings”.

Scroll down to Index Columns and click the hyperlink.

Click ‘Create a new index’

Wait for the index process to complete, then try the filtered request again … I’ve got data! Evidently SharePoint OData filter queries to the REST API need to be performed against indexed columns. I’m sure Microsoft has that documented somewhere, but quite a bit of Googling didn’t get me anywhere … so I’m posting this in case anyone else encounters the same error.
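For anyone adapting this, here is a rough sketch of the kind of filtered REST call I mean. The site URL, list name, and Status column are made-up placeholders, and an already-authenticated requests session is assumed (how you authenticate depends on your environment):

import requests

# Placeholder site/list/column names: swap in your own values.
# Authentication is environment-specific (NTLM, cookies, or an OAuth bearer token);
# an already-authenticated session is assumed here.
session = requests.Session()
session.headers.update({"Accept": "application/json;odata=verbose"})

strSiteURL = "https://yourtenant.sharepoint.com/sites/yoursite"
strListTitle = "YourList"

# The $filter only succeeds once 'Status' is an indexed column on the list
strQuery = "%s/_api/web/lists/getbytitle('%s')/items?$filter=Status eq 'Active'&$top=1000" % (strSiteURL, strListTitle)

response = session.get(strQuery)
if response.status_code == 200:
    for item in response.json()["d"]["results"]:
        print(item["Title"])
else:
    print("Error %s: %s" % (response.status_code, response.text))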

Using AI For Lead Qualification & Cost Reduction

Microsoft posted an overview of how they use AI (and a bot) to score leads in their sales process — Microsoft IT Showcase/Microsoft increases sales by using AI for lead qualification. My personal ‘most interesting thing’ is that they’re using scikit-learn in Python for some of the analysis — I’m using similar components in a custom-written spam filter at home. Their idea of running the text through a spell checker first is interesting — I want to try running my e-mails through Aspell and see if there are statistically significant changes in classifications.
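If I get around to that test, it will probably start with something like this rough sketch: it shells out to the aspell binary (running “aspell list”, which reads text on stdin and prints the misspelled words), and the sample message and feature idea are just illustrations.

import subprocess

def misspelled_words(strText):
    # 'aspell list' reads text from stdin and prints one misspelled word per line
    result = subprocess.run(["aspell", "list"], input=strText,
                            capture_output=True, text=True)
    return set(result.stdout.split())

strMessage = "Re-fynance yuor morgage today for teh lowest ratez"
print(misspelled_words(strMessage))
# The misspelling count (or ratio) could then be fed into the scikit-learn
# classifier as an extra feature to see if classifications shift.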


They’ve previously detailed reducing energy usage through machine learning — that’s something I’d love to see more companies doing. Energy can be a significant operating cost, and reducing energy use has a positive environmental impact.

Creating An Azure Bot – Internally Hosted

While hosting a bot on the Azure network allows you to use pre-built solutions or develop a bot without purchasing dedicated hardware, the bots we’ve deployed thus far do not have access to internally-housed data. And program execution can be slow, expensive, or both, depending on the chosen pricing plan. But you can build an Azure bot that is essentially a proxy to a self-hosted bot.

This means you can host the bot on your private network (it needs to be accessible from the Azure subnets) and access internal resources from your bot code. Obviously, there are security implications to making private data available via an Azure bot – you might want to implement user authentication to verify the bot user’s identity, and I wouldn’t send someone their current credit card information over a bot even with authentication.

How to Communicate with a Self-hosted Bot from Azure:

Register an Azure bot. From https://portal.azure.com, select “Create a resource”. Search for “bot” and select “Bot Channels Registration”.

On the pane which flies out to the right, click “Create” (if you will be deploying multiple self-hosted bots to Azure, click the little heart so you can find this item on “My Saved List” when creating a new resource).

Provide a unique name for your Azure bot. If you have not yet created a resource group, you will need to create one. Make sure the hosting location is reasonable for your user base – East Asia doesn’t make sense for something used on the East coast of the US!

Select the pricing tier you want – I use F0 (free) which allows unlimited messages in standard channels (Teams, Skype, Cortana) and 10,000 messages sent/received in premium channels (direct user interaction … which I specifically don’t want in this case). Then provide the endpoint URL to interact with your locally hosted bot.

Click “Create” and Azure will begin deploying your new resource. You can click the “Notifications” bell icon in the upper right-hand portion of the page to view deployment progress.

When deployment completes, click “Go to resource” to finish configuring your Azure bot.

Select “Settings” from the left-hand navigation menu, then find the application ID. Click “Manage”.

This will open a new portal – you may be asked to sign in again. You are now looking at the application registration in Microsoft’s developer application registration portal. There’s already an application secret created, but beyond the first few letters … what is it? No idea! I’m a cautious person, and I don’t know if MS has embedded this secret somewhere within the bot resource. Since an application can have two secrets simultaneously, I leave the automatically-created secret in place and click “Generate New Password”.

A new pane will appear with your new secret – no, the one in the picture isn’t real. Copy that and store it somewhere – you’ll need to add it to your bot code later.

Close the application registration tab and return to the Azure portal tab. Click on “Channels” in your bot and add channels for any interactions you want to support. In this case, I want to publish my bot to Teams. There aren’t really settings* for Teams – just click to create the channel.

* You can publish a bot to the Microsoft App Source … but is your bot something that should be available to the Internet at large? It depends! If you’re writing a bot to provide enterprise customers another support avenue, having the bot available through App Source makes sense. If you’re creating a bot to answer employee-specific questions, then you probably want to keep the bot out of App Source.

Once the channel has been created, click on the “Get bot embed codes” hyperlink to obtain the bot URL.

Individuals can use the hyperlink provided to add your bot to their Teams chat.

Ok, done! Except for one little thing – you need something to answer on that endpoint we entered earlier. You need a bot! Microsoft publishes an SDK and tools for building your bot in .NET, JavaScript, Python, and Java.

In this example, I am using a sample Python bot. For convenience, I am handling SSL on my reverse proxy instead of using an ssl wrapper in my Python service. Grab the BotBuilder package from git (https://github.com/Microsoft/botbuilder-python.git)

Install the stuff:

pip3 install -e ./libraries/botframework-connector

pip3 install -e ./libraries/botbuilder-schema

pip3 install -e ./libraries/botbuilder-core

pip3 install -r ./libraries/botframework-connector/tests/requirements.txt

In the ./samples/ folder, you’ll find a few beginner bots. Rich-Cards-Bot requires a version of msrest with async functionality, and the branch referenced in its requirements.txt doesn’t exist. I tried a few others and never got anything that worked properly; EchoBot-with-State has the same problem. I am using Echo-Connector-Bot because it doesn’t have the msrest problem, and I can add my own state support later.

Edit main.py and add your Azure bot application id & secret to APP_ID and APP_PASSWORD

APP_ID = ''

APP_PASSWORD = ''

PORT = 9000

SETTINGS = BotFrameworkAdapterSettings(APP_ID, APP_PASSWORD)

ADAPTER = BotFrameworkAdapter(SETTINGS)

I stash my personal information in a config.py file and added an import to main.py:

from config import strDBHostname, strDBUserName, strDBPassword, strDBDatabaseName, strDBTableName, APP_ID, APP_PASSWORD

Tweak the code however you want – add natural language processing, make database connections to internal resources to determine responses, make calls to internal web APIs. I also added console output so I could debug bot operations.
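For illustration, here is roughly what one of those internal lookups could look like. The helper below is hypothetical: it uses the connection values from my config.py, assumes the pymysql library, and makes up a table layout; the botbuilder plumbing around it is omitted.

import pymysql
from config import strDBHostname, strDBUserName, strDBPassword, strDBDatabaseName, strDBTableName

def get_sensor_reading(strSensorName):
    # Hypothetical lookup against an internal (private network) database;
    # the column names here are invented for the example
    connection = pymysql.connect(host=strDBHostname, user=strDBUserName,
                                 password=strDBPassword, database=strDBDatabaseName)
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT reading, recorded_at FROM " + strDBTableName +
                           " WHERE sensor_name = %s ORDER BY recorded_at DESC LIMIT 1",
                           (strSensorName,))
            row = cursor.fetchone()
    finally:
        connection.close()
    if row is None:
        return "I don't have any readings for %s" % strSensorName
    return "%s was %s at %s" % (strSensorName, row[0], row[1])

# The bot's message handler would call get_sensor_reading() with whatever
# sensor name it parses out of the incoming activity text.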

When you’ve completed your changes, launch your bot by running “python main.py”

Now return to the Azure portal and select “Test in Web Chat” – this will allow you to test interactions with your bot. Ask questions – you should see your answers returned.

Once you confirm the bot is functioning properly, use the URL from the Teams channel to interact with your bot within Teams —

URL for my bot in Teams: https://teams.microsoft.com/l/chat/0/0?users=28:9699546d-fc09-41bf-b549-aed33280693a

The answer is served out of our home automation database – data that is only accessible on our private network.

Security – as I said earlier, you’ll probably want to take some measures to ensure access to your locally hosted bot is coming from legit sources. The app ID and secret provide one level of protection. If a connection does not supply the proper app ID & secret (or if you’ve mis-entered those values in your code!), you’ll get a 401 error.


But I don’t want the entire Internet DDoS’ing my bot either, and there is no reason anyone outside of the Microsoft Azure subnets should be accessing my locally hosted bot. My bot is hosted in a private container. The reverse proxy allows Internet-sourced traffic into the private bot resource. Since communication from Azure will be sourced from a known set of networks, you can add a source IP restriction that prevents the general public from accessing your bot directly. See https://azurerange.azurewebsites.net/ for a convenient-to-use list of addresses.
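The allow-list logic itself is trivial; here is a minimal sketch using Python’s ipaddress module. The CIDR blocks below are illustrative placeholders, so populate the list from whatever Azure range source you trust, and the check can live in the reverse proxy or in the bot service itself.

from ipaddress import ip_address, ip_network

# Illustrative placeholder ranges: populate this list from the published
# Azure datacenter address ranges for your bot's region
AZURE_RANGES = [ip_network(strCIDR) for strCIDR in ("13.64.0.0/11", "40.64.0.0/10")]

def is_azure_source(strClientIP):
    # True when the requesting address falls inside one of the allowed ranges;
    # anything else can be rejected before it ever reaches the bot code
    address = ip_address(strClientIP)
    return any(address in network for network in AZURE_RANGES)

print(is_azure_source("13.70.1.1"))    # True with the placeholder ranges above
print(is_azure_source("203.0.113.5"))  # False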


Ugh! MFA

I am trying to use Microsoft Graph to read/write an Excel spreadsheet stored in SharePoint. It’s an ugly process to start with — they don’t exactly make it easy to find the right ID numbers so you can reference the spreadsheet in the first place, but I finally got the proper URL. And then I tried to do the password-based token authentication.
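For context, this is roughly the token request I was making: a straight resource-owner-password grant against the v1 Azure AD endpoint. The tenant, client ID, secret, and account below are placeholders, not my real values, and this is the call that produced the response below.

import requests

# Placeholder tenant and app registration values: substitute your own
strTokenURL = "https://login.microsoftonline.com/yourtenant.onmicrosoft.com/oauth2/token"
dictPayload = {
    "grant_type": "password",
    "resource": "https://graph.microsoft.com",
    "client_id": "00000000-0000-0000-0000-000000000000",
    "client_secret": "app-secret-goes-here",
    "username": "user@yourtenant.com",
    "password": "UserPasswordHere",
}

tokenResponse = requests.post(strTokenURL, data=dictPayload)
print(tokenResponse.status_code)
print(tokenResponse.json())   # with MFA in the mix, this is where the error comes back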

{"error":"invalid_grant","error_description":"AADSTS70002: Error validating credentials. AADSTS50126: Invalid username or password\r\nTrace ID: b43d1973-c253-4889-8756-354e5bd77200\r\nCorrelation ID: 9cf94602-9d72-4790-8dcc-6cf0471058f9\r\nTimestamp: 2019-01-08 00:01:47Z","error_codes":[70002,50126],"timestamp":"2019-01-08 00:01:47Z","trace_id":"b43d1973-c253-4889-8756-354e5bd77200","correlation_id":"9cf94602-9d72-4790-8dcc-6cf0471058f9"}

Hint: the password isn’t wrong. I’ve seen a lot of comments online about this meaning the secret is wrong — which seemed reasonable, since I’m not seeing any auth traffic against the user account. But if you put in a known bad secret, there is a different invalid secret error.

{"error":"invalid_client","error_description":"AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided.\r\nTrace ID: 59f4c7c8-73ab-4adb-bf15-10250a190d00\r\nCorrelation ID: 0a95810c-19e2-4fc6-a7be-c0ca7235d824\r\nTimestamp: 2019-01-08 00:00:37Z","error_codes":[70002,50012],"timestamp":"2019-01-08 00:00:37Z","trace_id":"59f4c7c8-73ab-4adb-bf15-10250a190d00","correlation_id":"0a95810c-19e2-4fc6-a7be-c0ca7235d824"}

But we use MFA … and I’ve got no way to perform the MFA validation. Sigh!

Microsoft Teams: Creating A Bot

Before you start, understand how billing works for Microsoft’s cloud services. There are generally free tiers available, but they are resource limited. When you first start with the Azure magic cloudy stuff, you get a $200 credit. A message indicating your remaining credit is shown when you log into the Azure portal. Pay attention to that message – if you think you are using free tiers for everything but see your credit decreasing … you’ll need to investigate. Some features, like usage analytics, cost extra too.

Microsoft Teams uses Azure bots – so you’ll need to create an Azure bot to get started. From https://portal.azure.com, click on ‘Create a resource’. Search for “bot” and find the bot type you are looking for. To host your bot on Azure, select either the “Functions Bot” or “Web App Bot”. Functions bots use Azure functions, which are C# scripts, for logic processing; Web App bots use a Web API App Service for logic processing (C# or NodeJS). To host your bot elsewhere, select “Bot Channels Registration”. In this example, we are using a “Web App Bot”.

Give your bot a name – there will be a green check if the name is unique. Pick your language – C# or Node.js – and then decide if you want an Echo bot (which gives you a starting place if you’re new to developing bots) or a blank slate (basic bot). Don’t forget to click “Select”, otherwise you’ll be back to the defaults. You’ll need to create a resource group. Click on “Bot template” and select what you want to use as the basis for your bot. As of 14 Dec 2018, use v3 unless you need something new in v4 – there’s a lot more available for v3, and the Bot Builder extensions only work with v3 (https://github.com/OfficeDev/BotBuilder-MicrosoftTeams).

You may need to create a service plan and configure storage. Once you have completed the bot configuration, click “Create” and Azure will deploy resources for your bot.

You’ll see a deployment-in-progress message, and your notifications will show something similar. Wait a minute or three.

Return to the dashboard & you’ll see your bot services. Go into the bot that you just created.

Select “Build” – you can use the online code editor or use an existing source repository and configure continuous integration. I will be setting up continuous integration – don’t click the link under “Publish”, it goes to an old resource. Click to download the source code – it takes a minute to generate a zip file for download.

Once the download link is available, download the file – this will be the base of your project. Put it somewhere – in this example, I’ll be using a GitHub project. Extract the zip file and get the content into your source repository.

Return to your dashboard and open the App Service for your bot. Select the “Deployment Center”.

Select the appropriate source repository. When GitHub is used, you will need to sign in and grant access for Azure to use your GitHub account. Click “Continue” once the repository has been set up.

Select the build provider – Kudu or Azure Pipelines. Which one – that’s a personal preference. Azure Pipelines can deploy code stored in git (at least GitHub, never tried other Git services). Kudu can build code housed in Azure DevOps. Kudu has a debugging console that I find useful, and I’ve successfully linked Kudu up with GitLab to manage the build process elsewhere. Azure Pipelines is integrated with the rest of the Azure DevOps (hosted TFS) stuff, which is an obvious advantage to anyone already using Azure DevOps. It uses WebDeploy to deploy artifacts to your Azure websites (again, an advantage to anyone already doing this elsewhere).

The two build environments can be different – MS doesn’t concurrently update SDK’s in each environment, so there can be version differences. It’s possible to have a build fail in one that works in the other. Settings defined in one platform don’t have any meaning if you switch to the other platform (i.e. you’ll be moving app settings into a Build Definition file if you want to switch from Kudu to Azure Pipelines) so it’s not always super quick to swing over to the other build provider, but it might be an option.

I prefer Kudu, so I’ll be using it here.

Select your repository name from the drop-down, then select the project and branch you want to use for deployment. In my repository, the master branch has functional code and there is a working branch for making and testing changes.

Review the summary and click “Finish”.

In GitHub, you can confirm that a webhook has been added to your project for push events. From your project’s settings tab, select “Webhooks” and look for an azurewebsites URL that includes your bot name. You can view the results of these webhook calls by clicking “Edit” and scrolling down to “Recent deliveries”.

Add the interactions you want – information needs to be accessible from the Azure network, otherwise your bot won’t be able to get there. You can test your bot from the Azure portal to identify anything that works fine from your local computer but fails from the cloud. From the Web App Bot (*note* we are no longer in the App Service on the Azure portal — you need to select the bot resource), select “Test in Web Chat” and interact with your bot.

Once you have your bot working, you need to add the Teams channel to allow the bot to be used from Teams. Select “Channels” and click on the Teams logo.

There’s not much to set up for a bot – messaging is enabled by default. I don’t want IVR or real-time media functionality … but if you do, click on the “Calling” tab. The “Publish” tab is to publish your bot in the Windows store – this might be a consideration, for instance, if you wanted to create a customer service interaction bot that enterprise customers could add to their Teams spaces (i.e. something you want random people to find and use). Since I am answering employee-specific questions, I do not want to publish this bot to the Internet. Click “Save” when you have configured the channel as needed (in my case, just click ‘Save’ without doing anything).

Review the publication terms and privacy statement. If these are agreeable, click “Agree”.

You’ll be returned to the Channels overview. Click on the hyperlinked “Microsoft Teams” – this will open a new URL that is your bot.

You can copy the URL here – others can use the same URL to use your bot. Either open the link in the Teams app

Or cancel and click “Use the web app instead” at the bottom of the screen.

Wait for it … your bot is alive!

That’s great … how do I interact with company resources? Quick answer is “you don’t” – this bot uses resources available on the Internet. To interact with private sources, the magic cloudy Microsoft network must be able to get there. Personally, I’d host my own bot engine. Expose the bot to the Internet and create a “Bot Channels Registration” instead.

Open Password Filter (OPF) Detailed Overview

When we began allowing users to initiate password changes in Active Directory and feed those passwords into the identity management system (IDM), it was imperative that the passwords set in AD comply with the IDM password policy. Otherwise passwords were set in AD that were not set in the IDM system or other downstream managed directories. Microsoft does not have a password policy that allows the same level of control as the Oracle IDM (OIDM) policy; however, password changes can be passed to DLL programs for further evaluation (or, as in the case of the hook that forwards passwords to OIDM, the DLL can just return TRUE to accept the password but do something completely different with it, like send it along to an external system). Search for secmgmt “password filters” (https://msdn.microsoft.com/en-us/library/windows/desktop/ms721882(v=vs.85).aspx) for details from Microsoft.

LSA makes three different API calls to all of the DLLs listed in the NotificationPackages registry value. First, InitializeChangeNotify(void) is called when LSA loads. The only reasonable answer to this call is “true”, as it advises LSA that your filter is online and functional.

When a user attempts to change their password, LSA calls PasswordFilter(PUNICODE_STRING AccountName, PUNICODE_STRING FullName, PUNICODE_STRING Password, BOOLEAN SetOperation) — this is the mechanism we use to enforce a custom password policy. The response to a PasswordFilter call determines if the password is deemed acceptable.

Finally, when a password change is committed to the directory, LSA calls PasswordChangeNotify(PUNICODE_STRING UserName, ULONG RelativeId, PUNICODE_STRING NewPassword) — this is the call that should be used to synchronize passwords into remote systems (as an example, the Oracle DLL that is used to send AD-initiated password changes into OIDM). In our password filter, the function just returns ‘0’ because we don’t need to do anything with the password once it has been committed.

Our password filter is based on the Open Password Filter project at (https://github.com/jephthai/OpenPasswordFilter). The communication between the DLL and the service is changed to use localhost (127.0.0.1). The DLL accepts the password on failure (this is a point of discussion for each implementation to ensure you get the behaviour you want). In the event of a service failure, non-compliant passwords are accepted by Active Directory. It is thus possible for workstation-initiated password changes to get rejected by the IDM system. The user would then have one password in Active Directory and their old password will remain in all of the other connected systems (additionally, their IDM password expiry date would not advance, so they’d continue to receive notification of their pending password expiry).

While the DLL has access to the user ID and password, only the password is passed to the service. This means a potential compromise of the service (obtaining a memory dump, for example) will yield only passwords. If the password change occurred at an off time and there’s only one password changed in that timeframe, it may be possible to correlate the password to a user ID (although if someone is able to stack trace or grab memory dumps from our domain controller … we’ve got bigger problems!).

The service which performs the filtering has been modified to search the proposed password for any word contained in a text file as a substring. If the case insensitive banned string appears anywhere within the proposed password, the password is rejected and the user gets an error indicating that the password does not meet the password complexity requirements.

Other password requirements (character length, character composition, cannot contain UID, cannot contain given name or surname) are implemented through the normal Microsoft password complexity requirements. This service is purely analyzing the proposed password for case insensitive matches of any string within the dictionary file.
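The core of that check is simple enough to sketch in a few lines of Python. This is an illustration of the logic rather than the actual OPF service code, and it assumes a dictionary file with one banned string per line (the file name is made up).

def load_banned_words(strFilePath):
    # One banned string per line; blank lines are ignored
    with open(strFilePath, "r") as fileHandle:
        return [strLine.strip().lower() for strLine in fileHandle if strLine.strip()]

def password_is_acceptable(strProposedPassword, listBannedWords):
    # Reject the password if any banned string appears anywhere within it,
    # regardless of case, mirroring the substring check described above
    strLowered = strProposedPassword.lower()
    return not any(strBanned in strLowered for strBanned in listBannedWords)

listBanned = load_banned_words("banned-words.txt")   # hypothetical file name
print(password_is_acceptable("Summer2019!", listBanned))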

Shell Script: Path To Script

We occasionally have to re-home our shell scripts, which means updating any static path values used within scripts. It’s quick enough to build a sed script to convert /old/server/path to /new/server/path, but it’s still extra work.

The dirname command works to provide a dynamic path value, provided you use the fully qualified path to run the script … but it fails spectacularly when someone runs ./scriptFile.sh and you’re trying to use that path in, say, EXTRA_JAVA_OPTS. The “path” is just . — and Java doesn’t have any idea what to do with “-Xbootclasspath/a:./more/path/goes/here.jar”

Voila, realpath gives you the fully qualified file path for /new/server/path/scriptFile.sh, ./scriptFile.sh, or even bash scriptFile.sh … and the dirname of a realpath is the fully qualified path where scriptFile.sh resides:

#!/bin/bash
# realpath resolves ./scriptFile.sh (or "bash scriptFile.sh") to a fully qualified path,
# so dirname always returns the absolute directory containing the script
DIRNAME="$(dirname "$(realpath "$0")")"
echo "${DIRNAME}"

Hopefully next time we’ve got to re-home our batch jobs, it will be a simple scp & sed the old crontab content to use the new paths.

LDAP Auth With Tokens

I’ve encountered a few web applications (including more than a handful of out-of-the-box, “I paid money for that”, applications) that perform LDAP authentication/authorization every single time the user changes pages. Or reloads the page. Or, seemingly, looks at the site. OK, not the latter, but still. When the load balancer probe hits the service every second and your application’s connection count is an order of magnitude over the probe’s count … that’s not a good sign!

On the handful of sites I’ve developed at work, I have used cookies to “save” the authentication and authorization info. It works, but only if the user is accepting cookies. Unfortunately, the IT types who typically use my sites tend to have privacy concerns. And the technical knowledge to maintain their privacy. Which … I get, I block a lot of cookies too. So I’ve begun moving to a token-based scheme. Microsoft’s magic cloudy AD via Microsoft Graph is one approach. But that has external dependencies — lose Internet connectivity, and your app becomes unusable. I needed another option.

There are projects on GitHub to authenticate a user via LDAP and obtain a token to “save” that access has been granted. Clone the project, make an app.py that connects to your LDAP directory, and you’re ready.

from flask_ldap_auth import login_required, token
from flask import Flask
import sys

app = Flask(__name__)
app.config['SECRET_KEY'] = 'somethingsecret'
app.config['LDAP_AUTH_SERVER'] = 'ldap://ldap.forumsys.com'
app.config['LDAP_TOP_DN'] = 'dc=example,dc=com'
app.register_blueprint(token, url_prefix='/auth')

@app.route('/')
@login_required
def hello():
    return 'Hello, world'

if __name__ == '__main__':
    app.run()

The authentication process is two-step — first obtain a token from the URL http://127.0.0.1:5000/auth/request-token. Assuming valid credentials are supplied, the URL returns JSON containing the token. Depending on how you are using the token, you may need to base64 encode it (the httpie example on the GitHub page handles this for you, but the example below includes the explicit encoding step).

You then use the token when accessing subsequent pages, for instance http://127.0.0.1:5000/

import requests
import base64

API_ENDPOINT = "http://127.0.0.1:5000/auth/request-token"
SITE_URL = "http://127.0.0.1:5000/"

tupleAuthValues = ("userIDToTest", "P@s5W0Rd2T35t")

tokenResponse = requests.post(url=API_ENDPOINT, auth=tupleAuthValues)

if tokenResponse.status_code == 200:
    jsonResponse = tokenResponse.json()
    strToken = jsonResponse['token']
    print("The token is %s" % strToken)

    strB64Token = base64.b64encode(strToken)
    print("The base64 encoded token is %s" % strB64Token)

    strHeaders = {'Authorization': 'Basic {}'.format(strB64Token)}

    responseSiteAccess = requests.get(SITE_URL, headers=strHeaders)
    print(responseSiteAccess.content)
else:
    print("Error requesting token: %s" % tokenResponse.status_code)

Run and you get a token which provides access to the base URL.

[lisa@linux02 flask-ldap]# python authtest.py
The token is eyJhbGciOiJIUzI1NiIsImV4cCI6MTUzODE0NzU4NiwiaWF0IjoxNTM4MTQzOTg2fQ.eyJ1c2VybmFtZSI6ImdhdXNzIn0.FCJrECBlG1B6HQJKwt89XL3QrbLVjsGyc-NPbbxsS_U:
The base64 encoded token is ZXlKaGJHY2lPaUpJVXpJMU5pSXNJbVY0Y0NJNk1UVXpPREUwTnpVNE5pd2lhV0YwSWpveE5UTTRNVFF6T1RnMmZRLmV5SjFjMlZ5Ym1GdFpTSTZJbWRoZFhOekluMC5GQ0pyRUNCbEcxQjZIUUpLd3Q4OVhMM1FyYkxWanNHeWMtTlBiYnhzU19VOg==
Hello, world

A cool discovery I made during my testing — a test LDAP server that is publicly accessible. I’ve got dev servers at work, I’ve got an OpenLDAP instance on Docker … but none of that helps anyone else who wants to play around with LDAP auth. So if you don’t want to bother populating directory data within your own test OpenLDAP … some nice folks provide a quick LDAP auth source.
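If you want to poke at that directory yourself, something like the ldap3 snippet below works; the bind DN and password are the read-only test credentials the forumsys folks publish for the server, and “gauss” is one of the test accounts (the same one that shows up in the token above).

from ldap3 import Server, Connection, ALL

# Read-only test credentials published for the public forumsys test server
server = Server("ldap://ldap.forumsys.com", get_info=ALL)
conn = Connection(server, "cn=read-only-admin,dc=example,dc=com", "password", auto_bind=True)

# Look up one of the test users
conn.search("dc=example,dc=com", "(uid=gauss)", attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry)
conn.unbind()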

Using Microsoft Graph

Single Sign-On: Microsoft Graph

End Result: This will allow in-domain computers to automatically log in to web sites and applications. Computers not currently logged into the company domain will, when they do not have an active authenticated session, be presented with Microsoft’s authentication page.
Requirements: The application must be registered on Microsoft Graph.

Beyond that, requirements are language specific – I will be demonstrating a pre-built Python example here because it is simple and straight-forward. There are examples for a plethora of other languages available at https://github.com/microsoftgraph

Process – Application Development:
Application Registration

To register your application, go to the Application Registration Portal (https://apps.dev.microsoft.com/). Elect to sign in with your company credentials.

You will be redirected to the company’s authentication page

If ADFS finds a valid token for you, you will be directed to the application registration portal. Otherwise you’ll get the same logon page you see for many other MS cloud-hosted apps. Once you have authenticated, click “Add an app” in the upper right-hand corner of the page.

Provide a descriptive name for the application and click “Create”

Click “Generate New Password” to generate a new application secret. Copy it into a temporary document. Copy the “Application Id” into the same temporary document.

Click “Add Platform” and select “Web”

Enter the appropriate redirect/logout URLs (this will be application specific – in the pre-built examples, the post-authentication redirect URL is http://localhost:5000/login/authorized).

Delegated permissions impersonate the signed in user, application permissions use the application’s credentials to perform actions. I use delegated permissions, although there are use cases where application permissions would be appropriate (batch jobs, for instance).

Add any permissions your app requires – for simple authentication, the default delegated permission “User.Read” is sufficient. If you want to perform additional actions – write files, send mail, etc – then you will need to click “Add” and select the extra permissions.

Profile information does not need to be entered, but I have entered the “Home page URL” for all of my applications so I am confident that I know which registered app corresponds with which deployed application (i.e. eighteen months from now, I can still figure out which site is using the registered “ADSF Graph Sample” app and won’t accidentally delete it while it is still in use).

Click Save. You can return to your “My Applications” listing to verify the app was created successfully.

Application Implementation:

To use an example app from Microsoft’s repository, clone it.

git clone https://github.com/microsoftgraph/python-sample-auth.git

Edit the config.py file and update the “CLIENT_ID” variable with your Application Id and update the “CLIENT_SECRET” variable with your Application Secret password. (As they note, in a production implementation you would store the secret somewhere more secure, not just drop it in clear text in your code … also, if you publish a screen shot of your app ID & secret somewhere, generate a new password or delete the app registration and create a new one. Which is to say, do not retype the info in my example, I’ve already deleted the registration used herein.)
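The edit itself is just two values; the placeholders below show what goes where (these are not real credentials).

# config.py, placeholder values only
CLIENT_ID = "00000000-0000-0000-0000-000000000000"    # Application Id from the registration portal
CLIENT_SECRET = "application-secret-goes-here"        # Application secret generated during registration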

Install the prerequisites using “pip install -r requirements.txt”

Then run the application – in the authentication example, there are multiple web applications that use different interfaces. I am running “python sample_flask.py”

Once it is running, access your site at http://localhost:5000

The initial page will load; click on “Connect”

Enter your company user ID and click “Next”

This will redirect to the company’s sign-on page. For in-domain computers or computers that have already authenticated to ADFS, you won’t have to enter credentials. Otherwise, you’ll be asked to logon (and possibly perform the two-factor authentication verification).

Voila, the user is authenticated and you’ve got access to some basic directory info about the individual.

Process – Tenant Owner:
None! Any valid user within the tenant is able to register applications.
Implementation Recommendations:
There is currently no way to backup/restore applications. If an application is accidentally or maliciously deleted, a new application will need to be registered. The application’s code will need to be updated with a new ID and secret. Documenting the options selected when registering the application will ensure the application can be re-registered quickly and without guessing values such as the callback URL.

There is currently no way to assign ownership of orphaned applications. If the owner’s account is terminated, no one can manage the application. The application continues to function, so it may be some time before anyone realizes the application is orphaned. For some period of time after the account is disabled, it may remain in the directory — which means a directory administrator could re-enable the account and set the password to a known value. Someone could then log into the Microsoft App Registration Portal under that ID and add new owners. Even if the ID has been deleted from the directory, it exists as a tombstone and can be restored for some period of time. Eventually, though, the account ceases to exist — at which time the only option would be to register a new app under someone else’s ID and change the code to use the new ID and secret. Ensuring multiple individuals are listed as application owners helps avoid orphaned applications.

Edit the application and click the “Add Owner” button.

You can enter the person’s logon ID or their name in “last, first” format. You can enter their first name – with a unique first name, that may work. Enter “Robert” and you’re in for a lot of scrolling! Once you find the person, click “Add” to set them up as an owner of the application. Click “Save” at the bottom of the page to commit this change.

I have submitted a feature request to Microsoft both for reassigning orphaned applications within your tenant and for a mechanism to restore deleted applications — apparently their feature requests have a voting process, so it would be helpful if people would up-vote my feature request.

Ongoing Maintenance:
There is little ongoing maintenance – once the application is registered, it’s done.

Updating The Secret:

You can change the application secret via the web portal – this would be a good step to take when an individual has left the team, and it can also be done routinely as a proactive security measure. Within the application, select “Generate New Password” and create a new secret. Update your code with the new secret and verify it works (roll-back is to restore the old secret to the config – it’s still in the web portal and works). Once the application is verified to work with the new secret, click “Delete” next to the old one. Both the create time and first three characters of the secret are displayed on the site to ensure the proper one is removed.

Maintaining Application Owners:

Any application owner can remove other owners – were I to move to a different team, the owners I delegated could revoke my access. Just click the “X” to the far right of the owner you wish to remove.



DevOps Alternatives

While many people involved in the tech industry have a wide range of experience in technologies and are interested in expanding the breadth of that knowledge, they do not have the depth of knowledge that a dedicated Unix support person, a dedicated Oracle DBA, or a dedicated SAN engineer has. How much time can a development team reasonably dedicate to expanding the depth of their developers’ knowledge? Is a developer’s time well spent troubleshooting user issues? That’s something that makes the DevOps methodology a bit confusing to me. Most developers I know … while they may complain (loudly) about unresponsive operational support teams, about poor user support troubleshooting skills … they don’t want to spend half of their day diagnosing server issues and walking users through basic how-to’s.

The DevOps methodology reminds me a lot of GTE Wireline’s desktop and server support structure. Individual verticals had their own desktop support workforce. Groups with their own desktop support engineer didn’t share a desktop support person with 1,500 other employees in the region. Their tickets didn’t sit in a queue whilst the desktop tech sorted issues for three other groups. Their desktop support tech fixed problems for their group of 100 people. This meant problems were generally resolved quickly, and some money was saved in reduced downtime. *But* it wasn’t like downtime avoidance funded the tech’s salary. The business, and the department, decided to spend money for rapid problem resolution. Some groups didn’t want to spend money on dedicated desktop support, and they relied on corporate IT. Hell, the techs employed by individual business units relied on corporate IT for escalation support. I’ve seen server support managed the same way — the call center employed techs to manage the IVR and telephony system. The IVR is malfunctioning, you don’t put a ticket in a queue with the Unix support group and wait. You get the call center technologies support person to look at it NOW. The added advantage of working closely with a specific group is that you got to know how they worked, and could recommend and customize technologies based on their specific needs. An IM platform that allowed supervisors and resource management teams to initiate messages and call center reps to respond to messages. System usage reporting to ensure individuals were online and working during their prescribed times.

Thing is, the “proof” I see offered is how quickly new code can be deployed using DevOps methodologies. Comparing time-to-market for automated builds, testing, and deployment to manual processes in no way substantiates that DevOps is a highly efficient way to go about development. It shows me that automated processes that don’t involve waiting for someone to get around to doing it are quick, efficient, and generally reduce errors. Could similar efficiency be gained by having operation teams adopt automated processes?

Thing is, there was a down-side to having the major accounts technical support team in PA employ a desktop support technician. The major accounts technical support did not have broken computers forty hours a week. But they wanted someone available from … well, I think it was like 6AM to 10PM, and they employed a handful of part time techs, but the point remains they paid someone to sit around and wait for a computer to break. Their techs tended to be younger people going to school for IT. One sales pitch of the position, beyond on-the-job experience, was that you could use your free time to study. The company saw it as an investment – they got a loyal employee with a B.S. in IT who moved into other orgs, the college kid got some resume-building experience, a chance to network with other support teams, and a LOT of study time that the local fast food joint didn’t offer. The access design engineering department hired a desktop tech who knew the Unix-based proprietary graphic workstations they used within the group. She also maintained their access design engineering servers. She was busier than the major accounts support techs, but even with server and desktop support she had technical development time.

Within the IT org, we had desktop support people who were nearly maxed out. By design — otherwise we were paying someone to sit around and do nothing. Pretty much the same methodology that went into staffing the call center — we might only expect two calls overnight, but we’d still employ two people to staff the phones between 10P and 6A *just* so we could say we had a 24×7 tech support line. During the day? We certainly wouldn’t hire two hundred people to handle one hundred’s worth of calls. Wouldn’t operations teams be quicker to turn around requests if they were massively overstaffed?

As a pure reporting change, where you’ve got developers and operations people who just report through the same structure to ensure priorities and goals align … reasonable. Not cost effective, but it’s a valid business decision. In a way, though, DevOps as a vogue ideology is the impetus behind financial decisions (just hire more people) and methodology changes (automate it!) that would likely have similar efficacy if implemented in silo’d verticals.