Authorization for Azure Logic Apps (token based)


As the internet suggests: "While often used interchangeably, authentication and authorization represent fundamentally different functions. Authentication is the process of verifying who a user is, while authorization is the process of verifying what they have access to."

Suppose you already have an app with an authentication mechanism deployed, and for some reason you want to off-load some task to a Logic App. Let's say sending emails, with configuration that you wish to keep away from the rest of the app. BUT! You want your Logic App to enforce the same authorization as the rest of the APIs in your app have, and preferably consume the same JWT token as well.

So, what exactly will you need?
1. Logic App
2. Token from a valid Issuer

Setting up the logic app.

Go to portal.azure.com -> Create a Resource -> Search for "Logic App" -> Create.
Fill in the Subscription and Instance Details -> Review + create.
Done.

The logic inside the Logic App could be anything, but let's say you use it as a webhook and start with the "When a HTTP request is received" trigger.

For the sake of simplicity – but of course you can be creative.

Click Save, and that will generate an HTTP trigger endpoint for you.

(Take note of this URL; we will come back to it later.)

Considering we’re good here, let’s move to the Authorization part.

Setting Up Authorization.

We will be setting up authorization using the Azure Active Directory authorization policies (Ref).

Take your JWT, decode it, and extract the Issuer, Audience, and any other claims that you wish to add a check against, in case you don't have these values already.
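For instance, the payload segment can be decoded with a couple of lines of Node.js (a minimal sketch; the token below is fabricated for illustration, and no signature verification is performed):

```javascript
// Minimal sketch: base64url-decode the middle segment of a JWT to
// inspect its claims. This does NOT verify the token's signature.
function decodeJwtPayload(token) {
    const payloadSegment = token.split(".")[1];
    return JSON.parse(Buffer.from(payloadSegment, "base64url").toString("utf8"));
}

// Fabricated token, for illustration only:
const claims = { iss: "https://sts.windows.net/<tenant-id>/", aud: "<audience>" };
const fakeToken =
    "eyJhbGciOiJSUzI1NiJ9." +
    Buffer.from(JSON.stringify(claims)).toString("base64url") +
    ".signature";
console.log(decodeJwtPayload(fakeToken).iss);
```

Online decoders work just as well; the point is simply to read the iss and aud values out of your existing token.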

In the Logic App, under Settings, click Authorization, then click Add Policy.

Red – Standard Claims, Green – Custom claims

You can have multiple policies configured here, but it is good to note that when your Logic App receives an incoming request that includes an authentication token, Azure Logic Apps compares the token's claims against the claims in each authorization policy. Authorization succeeds when the token's claims match at least all the claims in at least one policy.

At a minimum, the claims list must include the Issuer claim, with a value that starts with https://sts.windows.net/ or https://login.microsoftonline.com/ (the Azure AD issuer ID). And for this to work, make sure you remove the SAS portion from the HTTP trigger URI; otherwise the policy gets overridden and the Logic App is authorized using the SAS key and signature.
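The matching rule can be sketched as follows; this is an illustrative model of the comparison, not the actual Logic Apps implementation:

```javascript
// Illustrative model: authorization succeeds when the token's claims
// contain at least all the claims of at least one configured policy.
function isAuthorized(tokenClaims, policies) {
    return policies.some((policy) =>
        Object.entries(policy).every(([claim, value]) => tokenClaims[claim] === value)
    );
}

const policies = [
    { iss: "https://sts.windows.net/<tenant-id>/", aud: "<audience>" }
];
const tokenClaims = {
    iss: "https://sts.windows.net/<tenant-id>/",
    aud: "<audience>",
    sub: "some-user"
};
console.log(isAuthorized(tokenClaims, policies)); // true
```

Extra claims in the token (like sub above) do not hurt; only the claims listed in a policy are checked.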

The HTTP Trigger

Remember we kept this URL aside? Now is the time to pick it up.

IMPORTANT: DO NOT USE THIS RIGHT AWAY, REMOVE THE SAS PART FROM THE ENDPOINT.

https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>
Remove sp=<permissions>&sv=<SAS-version>&sig=<signature> from the copied URI.

Not removing the shared access signature will override any authorization policy that is set in the Logic App's Authorization settings.
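If you script this step, the SAS query parameters can be stripped with Node's URL API (a small sketch; the endpoint below is a made-up example):

```javascript
// Remove the SAS query parameters (sp, sv, sig) from a Logic App
// trigger URL, leaving the rest of the query string intact.
function stripSas(triggerUrl) {
    const url = new URL(triggerUrl);
    for (const param of ["sp", "sv", "sig"]) {
        url.searchParams.delete(param);
    }
    return url.toString();
}

const rawUrl =
    "https://prod-00.westus.logic.azure.com/workflows/abc/triggers/manual/paths/invoke" +
    "?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=signature";
console.log(stripSas(rawUrl));
```

Note that the api-version parameter stays; only the three SAS parts are dropped.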

In case you pass both the SAS and a Bearer token, you might bump into this:

With the SAS removed and an invalid (expired) bearer token provided:

After providing a legit access token:

And the mail is received.

That's it. Your Logic App is now set up to use Azure AD OAuth for authorizing inbound requests.
By all means you can also opt for any other method mentioned here.

Pingback for assistance; your feedback is always welcome… 🙂

Regards,
Aditya Deshpande

Automate Git with Node.js


Files stranded, source code updated daily, inbound files or scheduled drops? Why not automate a job to push everything (or some of it) to Git?

A simple Node.js app or script, hosted or run asynchronously, is all you need.

+ NodeGit :

(GitHub, API Documentation) One of those links will take you to a page that says "NodeGit can be quickly and painlessly installed via NPM", which is very much true.

npm install nodegit

For more comprehensive installation techniques, check out the Install Guides.

+ Set up SSH key authentication:

Refer SSH key Authentication 

Create your SSH keys with the ssh-keygen command from the bash/cmd prompt. This will create a 2048-bit RSA key for use with SSH. You can give a passphrase for your private key when prompted; this provides another layer of security for your private key. This produces the two keys needed for SSH authentication: your private key (id_rsa) and the public key (id_rsa.pub). By default, these keys are generated in your user profile directory unless you change the location:

C:\Users\<User>\

You can choose to move the files into the project working directory or refer to this location in the properties. However, it is suggested to keep all project-related certificates in one place.

For the Azure DevOps/TFS setup, you will need to add the public key to Azure DevOps Services/TFS.

Refer to the documentation about adding a public key to Azure DevOps/TFS.

Go to your Azure DevOps profile and, under SSH public keys, click + New Key, give it any suitable name, and copy the contents of the public key (for example, id_rsa.pub) that you generated into the Public Key Data field.

NOTE: Avoid adding whitespace or new lines into the Key Data field, as they can cause Azure DevOps Services to use an invalid public key. When pasting in the key, a newline often is added at the end. Be sure to remove this newline if it occurs.

Code:

Now that you have everything, let's put the code together.
I will divide the code into 4 parts:
1. Setup
2. Config and Init
3. Clone repo (in case not cloned already)
4. Sync

  1. Setup

    const debug = console; // Import your logger here, or use console just in case.
    const nodegit = require("nodegit");
    const fs = require("fs");
    const path = require("path");
    const fse = require("fs-extra");
    const dir = `./backup/`;
    const repoFolder = `${dir}.git`;
    var privateKey, publicKey, passphrase, twoWaySync, url, credentialsCallback, signature_name, signature_email;

    module.exports = async (options) => {
        // Initialize Git config.
        InitGit(options);
        // Clone the repo if it does not already exist.
        try {
            if (!fs.existsSync(repoFolder)) {
                debug.info(`Cloning repo.`);
                await CloneRepo();
            }
        } catch (error) {
            debug.error("Error during cloning repo.");
            debug.error(error);
        }
        // Prepare files for commit: maybe cleanup, renaming or conditioning.
        PrepareFilesForCommit();
        // Sync once all files and folders are prepared.
        try {
            if (fs.existsSync(repoFolder)) {
                debug.info(`Attempting to push to server.`);
                await InitSync();
            }
        } catch (error) {
            debug.error("Error during pushing to server.");
            debug.error(error);
        }
    };
  2. Config and Init

    var config = {
        "options": {
            "privateKey": "<pathToPrivateKey>/MyKeyFile",
            "publicKey": "<pathToPublicKey>/MyKeyFile.pub",
            "passphrase": "Y0ur.P@5$p#rAse.G0e5.H3rE!",
            "twoWaySync": true,
            "sshUrl": "<get_this_sshUrl_from_git>",
            "signature_name": "Someone's Name",
            "signature_email": "SomeoneWho@isMakingThisCommit.dev"
        }
    }

    function InitGit(options) {
        debug.info("Initializing Git...");
        // Fetch values from config.
        privateKey = options.privateKey;
        publicKey = options.publicKey;
        passphrase = options.passphrase;
        twoWaySync = options.twoWaySync;
        url = options.sshUrl;
        signature_name = options.signature_name;
        signature_email = options.signature_email;
        credentialsCallback = {
            credentials: function (url, userName) {
                return nodegit.Cred.sshKeyNew(userName, publicKey, privateKey, passphrase);
            }
        }
    }
  3. Clone Repo

    async function CloneRepo() {
        var cloneOptions = {
            fetchOpts: { callbacks: credentialsCallback }
        };
        // Return the promise so callers can actually await the clone.
        return nodegit.Clone(url, dir, cloneOptions).then(function (repo) {
            debug.verbose("Cloned " + path.basename(url) + " to " + repo.workdir());
        }).catch(function (err) {
            debug.verbose(err);
        });
    }
  4. Sync

    async function InitSync() {
        var remote, repo, count, fileNames, fileContent, oid;
        nodegit.Repository.open(repoFolder)
            .then(function (repoResult) {
                repo = repoResult;
                count = 0;
                fileNames = [];
                fileContent = {};
                if (twoWaySync) {
                    debug.info(`TwoWaySync is enabled; fetching, fast-forwarding and merging changes to local.`);
                    return PullRepo(repo);
                }
            }).then(async function () {
                // Adding files.
                const files = await repo.getStatusExt();
                files.forEach(function (file) {
                    if ((file.isNew() || file.isModified() || file.isTypechange() || file.isRenamed()) &&
                        file.inWorkingTree()) {
                        var status = file.isNew() ? "New" : file.isModified() ? "Modified" : file.isTypechange() ? "Type Changed" : file.isRenamed() ? "Renamed" : " -Unknown. ";
                        var fileID = file.path();
                        const filePath = `${dir}${fileID}`;
                        if (fs.statSync(filePath).isFile()) {
                            var data = fs.readFileSync(filePath);
                            fileContent[fileID] = data;
                            debug.verbose(`Adding ${fileID} with status ${status}`);
                        }
                    }
                });
                fileNames = Object.keys(fileContent);
                count = fileNames.length;
            })
            .then(async function () {
                return repo.refreshIndex()
                    .then(async function (index) {
                        if (count > 0) {
                            return Promise.all(fileNames.map(function (fileName) {
                                return fse.writeFile(
                                    path.join(repo.workdir(), fileName), fileContent[fileName]);
                            }))
                                .then(function () {
                                    // This will add all files to the index.
                                    return index.addAll(fileNames, nodegit.Index.ADD_OPTION.ADD_CHECK_PATHSPEC);
                                })
                                .then(function () {
                                    // This will write the files to the index.
                                    return index.write();
                                })
                                .then(function () {
                                    return index.writeTree();
                                })
                                // COMMIT
                                .then(function (oidResult) {
                                    oid = oidResult;
                                    return nodegit.Reference.nameToId(repo, "HEAD");
                                })
                                .then(function (head) {
                                    return repo.getCommit(head);
                                })
                                .then(function (parent) {
                                    var author = nodegit.Signature.now(signature_name, signature_email);
                                    var committer = nodegit.Signature.now(signature_name, signature_email);
                                    var message = `some commit message that makes sense...`;
                                    return repo.createCommit("HEAD", author, committer, message, oid, [parent]);
                                })
                                .then(function (commitId) {
                                    debug.verbose(`New commit: ${commitId}`);
                                })
                                // PULL if TwoWaySync.
                                .then(function () {
                                    if (twoWaySync) {
                                        debug.info(`TwoWaySync is enabled; fetching, fast-forwarding and merging changes to local.`);
                                        return PullRepo(repo);
                                    }
                                })
                                // PUSH
                                .then(function () {
                                    return repo.getRemote('origin');
                                })
                                .then(function (remoteResult) {
                                    remote = remoteResult;
                                    // Create the push object for this remote.
                                    return remote.push(
                                        ["refs/heads/master:refs/heads/master"], // Consider taking the branch as a parameter; master is used here.
                                        { callbacks: credentialsCallback }
                                    );
                                })
                                .then(function () {
                                    count = 0;
                                    fileNames = [];
                                    fileContent = {};
                                    debug.verbose('Remote pushed!');
                                })
                                .catch(function (reason) {
                                    debug.verbose(reason);
                                });
                        }
                    });
            }).done(function () {
                debug.verbose(`Successfully pushed to server.`);
            });
    }

    async function PullRepo(repo) {
        return repo.fetchAll({
            callbacks: credentialsCallback
        }).then(function () {
            return repo.mergeBranches("master", "origin/master");
        });
    }

You can compile this and deploy it as an Azure Function, a cron job, or whatever suits your requirement.

P.S. I don't have considerable time to enhance this or publish the code at the moment; I am open to suggestions and improvements, so feel free to drop by. 🙂


Regards,
Aditya Deshpande

How to run a Windows executable inside Azure Functions v2


When working with automation, I came across this weird, rarely tried scenario which required running a Windows executable in the scope of an Azure Function.

Here is an easy way of getting any .exe running inside your Azure Function.

You may also be  interested in Run Console Apps on Azure Functions

But for now, let's say you have an executable of your own making. Here I have a console app that prints "hello world" to the console output, which I have generated using PowerShell, but of course, you can be more creative.

Add-Type -outputtype consoleapplication -outputassembly helloworld.exe 'public class helloworld{public static void Main(){System.Console.WriteLine("hello world");}}'

First things first.

Create an Azure function and get the publish profile for the same, so you can publish it right away from the IDE.

Once this is done, get your executable ready.
Copy this executable to the root of your function along with dependencies (if any).

Set the “WorkingDirectoryInfo” and “ExeLocation”

string WorkingDirectoryInfo = @"D:\home\site\wwwroot\ExecFunc";
string ExeLocation = @"D:\home\site\wwwroot\helloworld.exe";

Code:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using System;
using System.Diagnostics;
using System.Threading.Tasks;

namespace execFuncApp
{
    public static class ExecFunc
    {
        [FunctionName("ExecFunc")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
            ILogger log, ExecutionContext executionContext)
        {
            // You may have better ways to do this; for demonstration purposes I have chosen to keep things simple and declare variables locally.
            string WorkingDirectoryInfo = @"D:\home\site\wwwroot\ExecFunc";
            string ExeLocation = @"D:\home\site\wwwroot\helloworld.exe";
            var msg = ""; // Final message that is passed to the function response.
            var output = ""; // Intercepted output; could be anything - a string in this case.
            try
            {
                //msg = $"WorkingDirectoryInfo : {WorkingDirectoryInfo} \n" +
                //      $"ExeLocation : {ExeLocation} \n" +
                //      $"FunctionDirectory: {executionContext.FunctionDirectory} \n" +
                //      $"FunctionAppDirectory: {executionContext.FunctionAppDirectory} ";
                // Values that need to be set before starting the process.
                ProcessStartInfo info = new ProcessStartInfo
                {
                    WorkingDirectory = WorkingDirectoryInfo,
                    FileName = ExeLocation,
                    Arguments = "",
                    WindowStyle = ProcessWindowStyle.Minimized,
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                Process proc = new Process
                {
                    StartInfo = info
                };
                // Discard any information about the associated process that has been cached inside the process component.
                proc.Refresh();
                // Allows the textual output of the application to be written to the System.Diagnostics.Process.StandardOutput stream.
                proc.StartInfo.RedirectStandardOutput = true;
                // Starts the process with the above configured values.
                proc.Start();
                // Scanning the entire stream, reading the output of the application process and writing it to a local variable.
                while (!proc.StandardOutput.EndOfStream)
                {
                    output = proc.StandardOutput.ReadLine();
                }
                // More things that can be done with applications. 🙂
                // proc.WaitForInputIdle();
                // proc.WaitForExit();
                msg = $"HelloWorld.exe {DateTime.Now} : HAHAHAH!, Should work! Output: {output}";
            }
            catch (Exception e)
            {
                msg = $"HelloWorld.exe {DateTime.Now} : DAMN-IT!, Failed Somewhere! Output: {e.Message}";
            }
            // Logging output; you can be more creative than me.
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
            log.LogInformation($"{msg}");
            return (ActionResult)new OkObjectResult($"{msg}");
        }
    }
}

Before you publish, make sure to set the properties of your .exe and its dependencies:
Build Action: Embedded Resource
Copy to Output Directory: Copy Always

File Properties

Set the Build actions and Copy to output directory

Once Done, publish your Azure function using the Publish Profile.

Go to portal.azure.com, navigate to your function, and open the console.
Make sure your artefacts are copied to the root of the function.

Console Dir

Check your artefacts before you run.

You’re good to go. Run and check Output.

Output

Run and check the output.

That shall be all.
You can find the source code here.


Regards,
Aditya Deshpande

 

Using Azure Application Insights with Node.js


Azure Application Insights is no doubt a great utility for tracking the performance and diagnostics of applications on the go.

Application Insights can do a lot for you, such as recording custom telemetry using the TelemetryClient API for tracking Events, Exceptions, Metrics, Traces, Dependencies, Requests, etc.

Setting up App Insights Client.

Setting up Node.js SDK

The only requirement at this point is the instrumentation key (ikey) from the Azure portal. Application Insights uses the ikey to map data to your Azure resource. Before the SDK can use your ikey, you must specify the ikey in an environment variable or in your code.

iKey

Add the Node.js SDK library to your app's dependencies:

npm install applicationinsights --save

It would be best practice to load your ikey from environment variables; however, for demo purposes I have hardcoded it.
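A sketch of what the environment-variable approach could look like (the variable name APPINSIGHTS_INSTRUMENTATIONKEY is an assumption here; pick whatever name fits your deployment):

```javascript
// Load the instrumentation key from an environment variable and fail
// loudly when it is missing, instead of hardcoding it in source.
function getInstrumentationKey(env) {
    const ikey = env.APPINSIGHTS_INSTRUMENTATIONKEY;
    if (!ikey) {
        throw new Error("APPINSIGHTS_INSTRUMENTATIONKEY is not set");
    }
    return ikey;
}

// Usage: appInsights.setup(getInstrumentationKey(process.env));
```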

Setting up App Insight client

At this point, you are ready to initialize app insight client using the ikey.

Explicitly load the library in your code, at the top of your .js file:

const appInsights = require("applicationinsights");
let client = appInsights.defaultClient;
// This config part should be set in ENVIRONMENT VARIABLES.
let config = {
    Id: "xxxxxxxx-xxxx-xxxx-xxxx-26728f1bb4e9",
    Name: "Server-Q12",
    applicationInsightsKey: "xxxxxxxx-xxxx-xxxx-xxxx-d7a116b10831"
}

Once that is done, set up the client using the App Insights key. Adding custom properties to all events is optional and should only be done to help interpret diagnostics.

appInsights.setup(config.applicationInsightsKey);
appInsights.start();
// Set dimensions so reports can distinguish the origin of the logger.
appInsights.defaultClient.commonProperties = {
    customProperty1: config.Id,
    customProperty2: config.Name
};
client = new appInsights.TelemetryClient(config.applicationInsightsKey);

Consuming App Insight client

Consuming the client is pretty simple and merely a one-liner, as everything hereafter is taken care of by the SDK:

client.trackMetric(
    {
        name: "custom metric:POC app-2609-100",
        value: 7.5,
        properties: {
            testValueA: 1,
            testValueB: `Apple`
        }
    });
Similarly, the rest of the trackers can also be called using the same client.

let client = appInsights.defaultClient;
client.trackEvent({ name: "my custom event", properties: { customProperty: "custom property value" } });
client.trackException({ exception: new Error("handled exceptions can be logged with this method") });
client.trackMetric({ name: "custom metric", value: 3 });
client.trackTrace({ message: "trace message" });
client.trackDependency({ target: "http://dbname", name: "select customers proc", data: "SELECT * FROM Customers", duration: 231, resultCode: 0, success: true, dependencyTypeName: "ZSQL" });
client.trackRequest({ name: "GET /customers", url: "http://myserver/customers", duration: 309, resultCode: 200, success: true });
That shall be all.


Regards,
Aditya Deshpande

Handling Zip files in NodeJS applications.


Often it happens that you might need to deploy, download, or update resources for your application at runtime. For one such scenario, here is a quick approach that will come in handy.

process

Ideally, this would be the flow that we’ll follow.

To perform the ZIP operations we will be consuming adm-zip, which is available as an npm package.

Installing adm-zip:

$ npm install adm-zip

Code:

var file_url = 'https://downloadurl.com/files/my_zip_file.zip';
var AdmZip = require('adm-zip'); // Reference: https://www.npmjs.com/package/adm-zip
var https = require('https');
var fs = require('fs');
https.get(file_url, function (res) {
    var data = [], dataLen = 0;
    res.on('data', function (chunk) {
        data.push(chunk);
        dataLen += chunk.length;
    }).on('end', function () {
        var buf = Buffer.alloc(dataLen);
        for (var i = 0, len = data.length, pos = 0; i < len; i++) {
            data[i].copy(buf, pos);
            pos += data[i].length;
        }
        var zip = new AdmZip(buf);
        var dir = './relative_path/zip_test/';
        if (!fs.existsSync(dir)) {
            fs.mkdirSync(dir);
        }
        zip.extractAllTo(/*target path*/ dir, /*overwrite*/ true);
    });
});

File Download URL: var file_url = 'https://downloadurl.com/files/my_zip_file.zip';

The download URL for the file can be handled dynamically, and using https.get we fetch the file from the host location. Once we have the file, we read the data as a byte stream and write its content to a buffer. Using that buffer with adm-zip, we then unzip the file to the desired location; this could be an absolute or relative path as per requirement.
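As a side note, the manual copy loop in the snippet can also be expressed with Node's built-in Buffer.concat, which joins an array of chunks in one call:

```javascript
// Buffer.concat is equivalent to allocating one buffer and copying each
// chunk into it at the correct offset.
const chunks = [Buffer.from("zip "), Buffer.from("data")];
const buf = Buffer.concat(chunks);
console.log(buf.toString()); // "zip data"
```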

Certainly, this is just one thing that can be done using adm-zip; you may explore more options available, like selectively unzipping, updating an existing file system, overwrites, etc. Here!

That shall be all.


Regards,
Aditya Deshpande

I have a dream


tendulkar-uvāca

I have a dream. A dream of using my machine with seamless background updates without worrying about the restarts. A dream of using any device without worrying about drivers. A dream to connect projectors, speakers, power plugs without carrying additional adapters. A dream to connect my headphones to any phone, any in-flight system without thinking about splitters and connectors. A dream to use a single USB spec cable with all devices. A dream to collaborate with my colleagues without thinking about collaborating software (and its resource utilization). I have a dream.

Collaboration: Outlook, Teams, Telegram, Slack, WhatsApp, Skype, Skype for Business

i-have-a-dream-collaboration

Adapters: US, Europe, Japan, Asia and what not

photo_2018-10-11_07-23-48

Headset/Speakers: 2.5mm/3.5mm and splitter for in-flight systems

photo_2018-10-11_07-23-51

Adapters/Dongles: USB-C/Mini Display to various formats and network connections

photo_2018-10-11_07-23-54

Namaste,
Mayur Tendulkar

View original post

The More interactive ListView.


The ListView is one of the most preferred items for displaying data. Out of the box it is well suited for displaying data or single-type interactions with the list items, but what if we want to make the ListView more interactive, and have different behaviours for different taps on a single list item? For example, a list of contacts with their profile info and contact details, where you can connect with them in the least number of clicks (a good thing to consider from a UX point of view).

Continue reading “The More interactive ListView.”

Possible to-do’s that can fix, “Restore Nuget failed, process busy!”


Stuck while restoring NuGet packages? Well, there can be a multitude of reasons for NuGet package restore failures. One of many is "because it is being used by another process."

And you might end up with something similar in the error list:

ERROR:
error: error while writing anim: obj\Debug\android\bin\classes\android\support\design\R$anim.class (The process cannot access the file because it is being used by another process)

FILENAME:
public static final class anim {MyNameSpace.Mobile.Droid X:\..Path..\MyNameSpace.Mobile\MyNameSpace.Mobile.Droid\obj\Debug\android\src\android\support\design\R.java 

The process cannot access the file ‘R$anim.class’ because it is being used by another process. MyNameSpace.Mobile.Droid

or

NuGet Package restore failed: Microsoft.Bcl.Build.Tasks.dll used by another process

NuGet Package restore failed for project MyProject.Application: The process cannot access the file ‘C:\MySolution\packages\Microsoft.Bcl.Build.1.0.21\build/Microsoft.Bcl.Build.Tasks.dll’ because it is being used by another process..


So let’s get started,

Continue reading “Possible to-do’s that can fix, “Restore Nuget failed, process busy!””

Binding Events to Command


In the context of commanding, behaviors are a useful approach for connecting a control to a command. In addition, they can also be used to associate commands with controls that were not designed to interact with commands.

I will try to summarise an extract and a reusable code for binding Events to Commands.

Behaviors allow us to add functionality to UI controls like labels, etc without having to subclass them. Behaviors are written in code and added to controls in XAML or code.

Continue reading “Binding Events to Command”

Puneri Patya!


I just bumped into some sarcastic posters from the IT world. If you are a developer or have ever been a part of a dev team, and you can manage to understand Marathi, I'm sure you can relate to these and enjoy them. 😛
