
Monday, August 15, 2016

Alexa, Meet LUIS


LUIS (Language Understanding Intelligent Service) is a Microsoft Cognitive Services research project that matches spoken dialog to an intent.  When creating a LUIS app, you'll create an Intent Schema and Sample Utterances similar to those in an Alexa skill.

Once you have a LUIS application built, you need to train it so it can build the internal models it uses to predict which intent best matches a phrase you send it.  It responds with a JSON object that includes your intent names and scores representing how well the phrase matched each intent.
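To make that concrete, here is a sketch of choosing the winning intent from a LUIS-style response.  The response shape (an intents array with intent and score fields) and the sample phrase are hypothetical stand-ins modeled on the responses I saw, not output from a real LUIS app:

```javascript
// A sketch of choosing the best-scoring intent from a LUIS-style JSON
// response.  The field names (intents, intent, score) and the sample
// phrase are assumptions, not a documented contract.
var luisResponse = {
    query: "polly want a cracker",
    intents: [
        { intent: "Catchall", score: 0.27 },
        { intent: "EchoPhrase", score: 0.91 },
        { intent: "None", score: 0.03 }
    ]
};

function bestIntent(response) {
    // Keep whichever candidate has the higher score.
    return response.intents.reduce(function (best, candidate) {
        return candidate.score > best.score ? candidate : best;
    });
}

console.log(bestIntent(luisResponse).intent); // "EchoPhrase"
```

In a real skill you would compare the top score against a threshold before trusting the match.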



I wanted to leverage the power of LUIS to determine which intent has the highest probability of matching a spoken phrase when I hadn't included that phrase as a sample utterance in my Alexa skill.

This required me to create a catchall intent.  The schema for this intent is exactly as I described in a previous blog article about matching all the spoken words...  Polly Want a Cracker? A Simple Alexa Skill that Echoes Your Words

Once I created a skill that had a number of intents, along with my catchall intent, I began testing phrases that were not included in my sample utterances.  I soon realized that AVS already performs LUIS-like language understanding: it was extremely difficult to ever get the Alexa skill to choose my catchall intent.

The one way I could see using LUIS now would be to build the Polly Want a Cracker skill, then use LUIS to define my intents and slots (which are called entities in LUIS).  LUIS has a great active-learning service built into your app: it stores the new phrases it hears and lets you review them and label new entities.  This is not available in Alexa right now, so a skill developer has a difficult time understanding which phrases people are trying in their skill that may not be working.

Polly Want a Cracker? A Simple Alexa Skill that Echoes Your Words


Doesn't it bug you that your skill service doesn't receive the text that Alexa understood?  When you don't launch a custom skill, the card that is included in your Alexa app displays the understood spoken text.

Why doesn't Amazon pass spoken text to a custom skill?  I think this keeps Alexa more secure.  If an Alexa owner added a skill that was easy to accidentally launch, then Alexa would be sending that service spoken dialog at times when the owner did not intend to send data.  Also, in a loud room, it is possible background dialog could be sent to your skill.  If someone launches the skill in the same room where someone else is on the phone reciting their credit card number, that is data someone would not want sent to a random skill developer.

However, if you have a need to receive the understood spoken text in your skill, it is possible.  You can use AMAZON.LITERAL, a built-in slot type that only exists for compatibility with earlier ASK skills.  Here is an example of how that works:

Intent Schema
{
  "intents": [
     {
      "intent": "Repeater",
      "slots": [
        {
          "name": "A",
          "type": "AMAZON.LITERAL"
        },
        {
          "name": "B",
          "type": "AMAZON.LITERAL"
        },

        --- CLIPCLIPCLIP ---
        --- And so on... ---
        --- CLIPCLIPCLIP ---

        {
          "name": "Y",
          "type": "AMAZON.LITERAL"
        },
        {
          "name": "Z",
          "type": "AMAZON.LITERAL"
        }
      ]
    }
  ]
}


Sample Utterances
Repeater {a|A} {a|B} {a|C} {a|D} {a|E} {a|F} {a|G} {a|H} {a|I} {a|J} {a|K} {a|L} {a|M} {a|N} {a|O} {a|P} {a|Q} {a|R} {a|S} {a|T} {a|U} {a|V} {a|W} {a|X} {a|Y} {a|Z}


I only have one intent that I use to match a phrase.  You'll need to add the missing slots that I clipped.  My sample utterance matches the intent with all AMAZON.LITERAL slots.  One thing I noticed while testing: if I didn't include enough slots for the spoken words, AVS wouldn't match my intent.  So I created a bunch of slots to catch a decently long phrase.

Then I use a little JavaScript to loop over all the slots and rebuild the phrase that Alexa heard.

exports.handler = function (event, context) {
    console.log(JSON.stringify(event.request));

    if (event.request.type === "LaunchRequest")
        context.succeed(buildResponse("Say something", {}, false));
    else if (event.request.type === "IntentRequest") {
        // Walk the slots named "A" through "Z" and stitch their values
        // back together into the phrase Alexa heard.
        var output = "";
        for (var i = 0; i <= "Z".charCodeAt(0) - "A".charCodeAt(0); i++) {
            var slot = event.request.intent.slots[String.fromCharCode("A".charCodeAt(0) + i)];
            if (slot && slot.value)
                output += " " + slot.value;
        }
        context.succeed(buildResponse(output, {}, false));
    }
    else
        context.succeed(buildResponse("", {}, true));
};

function buildResponse(output, attributes, shouldEndSession) {
    return {
        version: "1.0",
        sessionAttributes: attributes,
        response: {
            outputSpeech: {
                type: "PlainText",
                text: output
            },
            reprompt: {
                outputSpeech: {
                    type: "PlainText",
                    text: output
                }
            },
            shouldEndSession: shouldEndSession
        }
    };
}

That should do it.  Your skill will now receive the complete spoken phrase and echo it back in its response.

How might this be useful?  Generally it won't be, since you'd need to do your own language understanding to match the spoken phrase in your skill logic.  And if you have other intents in your schema, AVS tries really hard to match the spoken dialog with one of those before it falls back to this one.

Friday, April 1, 2016

Hosting multiple client-side Application Insights telemetry endpoints on the same domain


We have a website where one division of the company controls and builds the main content, and another division handles the online ordering cart system.  We each host our sites on separate servers, but they are served up under the same domain name and we share cookies.  The routing to the correct server is all handled upstream by our network team, so it is seamless to the developers and our users.

The division that handles the online cart has already installed AppInsights and has been collecting client-side JavaScript telemetry data for a while now.  I work on the content side of the website, and we want to collect our own telemetry.

Looking at the code, the tracking logic and config are loaded and saved to a global variable, window.appInsights.  That variable is reloaded every time a page is requested, and trackPageView() is called when the page loads.

If we were to install the Application Insights client code in our part of the website, which is the typical entry point into the site, then the window.appInsights object would be initialized by us and use our instrumentationKey.

To prevent our collection from replacing theirs, we updated the client code to set our own global variable.

OLD:
<script type="text/javascript">
  var appInsights = window.appInsights || function (config) { 
  ... 
  window.appInsights = appInsights; 
  appInsights.trackPageView(); 
</script>

NEW:
<script type="text/javascript">
  var appInsights = window.appInsightsSitecore || function (config) { 
  ... 
  window.appInsightsSitecore = appInsights; 
  appInsights.trackPageView(); 
</script>
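To see why the rename works, here is a minimal sketch.  The createTracker function and the bare window object are mine, simulating what the real loader snippet does: each snippet only creates a tracker if its chosen global is still undefined, so distinct global names mean the two instrumentation keys never collide.

```javascript
// Simplified stand-ins for the two loader snippets.  createTracker and
// the window object are hypothetical; they mimic the real snippet's
// "use the existing global or create a new tracker" pattern.
var window = {};

function createTracker(config) {
    return {
        config: config,
        pageViews: 0,
        trackPageView: function () { this.pageViews++; }
    };
}

// The cart division's snippet, unchanged:
window.appInsights = window.appInsights || createTracker({ instrumentationKey: "cart-key" });

// Our snippet, renamed so it cannot clobber theirs:
window.appInsightsSitecore = window.appInsightsSitecore || createTracker({ instrumentationKey: "content-key" });

window.appInsights.trackPageView();
window.appInsightsSitecore.trackPageView();

console.log(window.appInsights.config.instrumentationKey);         // "cart-key"
console.log(window.appInsightsSitecore.config.instrumentationKey); // "content-key"
```

Each division's page views now flow to its own Application Insights resource.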

Application Insights on Sitecore – Filtering the SQL telemetry


Microsoft Application Insights is a great solution to monitor telemetry data from your Sitecore installs.

The only problem is that if you enable all of the normal telemetry modules, you’ll end up flooding your data points with SQL calls.  There are thousands of SQL calls every minute in an average Sitecore database (especially in the EventQueue table).

We wanted to filter out all of those SQL calls, because we have not seen performance issues with Sitecore and SQL in our setup.


This requires at least v2.0 of the AppInsights SDK, and then you need to create a custom filter.  I followed an example from the AppInsights documentation.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace Sitecore.Website.AppInsights.Filters
{
    public class SQLFilter : ITelemetryProcessor
    {
        private ITelemetryProcessor Next { get; set; }

        // Link processors to each other in a chain.
        public SQLFilter(ITelemetryProcessor next)
        {
            this.Next = next;
        }
        public void Process(ITelemetry item)
        {
            // To filter out an item, just return 
            if (!OKtoSend(item)) { return; }

            this.Next.Process(item);
        }

        // Example: replace with your own criteria.
        private bool OKtoSend(ITelemetry item)
        {
            var dependency = item as DependencyTelemetry;
            if (dependency != null
                && dependency.DependencyKind == "SQL")
            {
                return false;
            }

            return true;
        }
    }
}

Once you have your filter class built, you need to add it to the TelemetryProcessors pipeline of AppInsights in your ApplicationInsights.config file.

<TelemetryProcessors>
  <Add Type="Sitecore.Website.AppInsights.Filters.SQLFilter, Sitecore.Website" />
</TelemetryProcessors>

That’s it!  This prevents any SQL calls from flooding your AppInsights Azure resource.

If collecting some of this SQL data is important to you, you could also look into the Sampling features of the SDK, which allow you to throttle the data that is sent.
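For reference, this is roughly what enabling adaptive sampling looks like in ApplicationInsights.config with the 2.x SDK; the five-items-per-second threshold is just an example value, so check the SDK documentation for the options your version supports:

```xml
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
    <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
  </Add>
</TelemetryProcessors>
```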

Another option is to inspect the CommandName property and filter out only the chattiness to the EventQueue table, while allowing the other commands through.  I chose not to do this for now, because the goal is to fail fast in this processor and avoid extra logic on the data-collection path that could slow things down.  If you decide to do the extra logic to allow some of the SQL data through, make sure to order your conditional tests correctly: if the DependencyTelemetry object is not a SQL kind, short-circuit and skip the other conditionals.
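If you do go that route, the OKtoSend method might look something like this sketch.  The EventQueue substring match on CommandName is an assumption about how those commands appear in your telemetry, so verify it against your own data:

```csharp
// Sketch only: send all non-SQL telemetry, and send SQL telemetry unless
// the command text mentions the chatty EventQueue table.  The SQL check
// runs first so non-SQL items short-circuit past the string comparison.
private bool OKtoSend(ITelemetry item)
{
    var dependency = item as DependencyTelemetry;
    if (dependency == null || dependency.DependencyKind != "SQL")
        return true;

    return dependency.CommandName == null
        || dependency.CommandName.IndexOf("EventQueue", StringComparison.OrdinalIgnoreCase) < 0;
}
```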