Project Stark Framework

Yet another attempt at an "Iron Man" Jarvis-like system

Stark is another attempt at a Jarvis-like system, as seen in the popular Marvel Iron Man movies. The project has existed in some form since 2013, but it has finally reached the point where I want to showcase it a bit.

The main goal of this system is to assist in event-based home automation scenarios through natural human interaction, with maximum flexibility in terms of the systems that can be integrated. Both IoT and local protocols are supported. The idea is that anything with a network connection - or proxied through a network device like a Z-Wave hub - can be controlled by extending the framework.

The difference from other work in this category is that this isn't a reinvent-the-wheel type of project, but a hub for automation control. By this I mean the goal is to integrate with frameworks that already do things well, and expand on the control and event capabilities - think cross-system integration, but with a more complex rule system.


I created this as an attempt to document my progress on a personal home automation system modeled after the Jarvis AI in the Iron Man films. My goal wasn't so much that I wanted to talk with a computer as that I wanted a system to automate tasks and respond automatically to inputs. Of course, this also meant finding a way to blend in speech recognition and synthesis, since a home butler should really communicate at a human level, and not through a bunch of apps.

This page is pretty sparse now but I'm going to start collecting some videos, images, and descriptions of the various parts of the project. Expect a few high level overviews and then I'll deep dive on some of the ideas and methodologies.

Core System

The core system consists of three basic components. The first is the server software, a Java-based application that runs the core Stark system. Using the Hazelcast Java library, the goal is to make the Stark server framework distributed, with modules that can run in multiple environments and communicate with each other. In theory this is already possible, but in practice it has not been attempted yet. Upon loading, the server creates the various modules, which are written by extending the StarkModule Java class. These modules are what actually add any sort of interaction to the system.

The second component is a simple web frontend. While the ultimate goal of the system is to interact via text or voice commands, some of the more complex setup is just easier with a basic web interface: things like user setup, triggers, schedules, etc. Any kind of third-party integration is also set up here, which is especially useful when a situation requires OAuth2 or a similar hook for making calls into a remote framework or system.

The last component is what I call Stark Clients. These are outside systems that connect and interact with the server on behalf of a user. I've written helper libraries in a variety of languages (Java, PHP, Python, Bash). The goal of a client is to connect to Stark and issue commands, or relay events. As an example, the Stark Bash client is a small Bash script that you can alias on your command prompt. At the CLI, typing something like:

stark are you there

sends the command to the server and returns the response. A more elaborate client is the Stark Windows client, which runs in the system tray. This client can play voice events through your speakers, or record voice via a microphone and execute commands via speech recognition (thanks to the Google Speech API). These clients run on a combination of technologies, such as JSON HTTP requests for commands and WebSockets for events.


I refer to the integration points as Modules within the context of the framework. A number of them are complete so far, though obviously some are more complete than others; for instance, the GitHub integration is basically a webhook for receiving commit and issue notifications, while the OctoPrint integration actually allows control of the printer and notifications when events happen. I'll try to go through each of the modules in future log posts.

Where is the Code?

Good question. In short, I have it in a personal Git repo. I'm not opposed to making it available; I just haven't done so yet. Personally...


Two classes used to facilitate JSON to Java object transformations via the org.json.simple Java package.



  • Machine Learning and Groovy Scripts

    robweber - 06/29/2017 at 19:45

    It's been a long time since I posted an update here, but I realized soon after the last log post (Dec 2016) that my current way of mapping text to actions was in drastic need of an upgrade.

    Around Christmas I was seriously investigating systems like Google Home and Amazon Echo. I was really interested in the programming options, which Google Home was just starting to publish. After watching some of their YouTube tutorials, I tried my hand at API.AI. More research into similar systems finally led me to an open source project, Rasa NLU.

    It didn't take long to figure out that Rasa was exactly what I was looking for. I wanted the natural language processing of API.AI, but with more flexibility in how an intent translates to a user action, and I really wanted it to run locally without a round trip to the internet. To say that ripping out the guts of the regex-based text-to-actions system I had was hard would be a severe understatement. It took me three months just to get a basic workflow running through Rasa, and even then only for a handful of intents - I'm still working on training the rest into the system.

    Part of what made this so daunting is that I also realized during this process that the actions were not flexible enough. The modules I'd created served well to perform individual functions, or to integrate with online services. The mistake I made was having them perform "double duty", handling advanced logic or even the conversational aspects of the system. This made things really messy from a programming perspective. If one module wanted to ask another module for information, that was outright impossible: modules wouldn't return results directly, but routed them through the events system (async only).

    Take something as simple as "when does the sun set?" As written in the original framework, the Time.Sunset function would have to figure this out. Seems easy, right? But what if you ask "when does the sun set on Thursday?" Now there is a date involved: the function needs to figure out which day is meant and calculate from that. You also need to account for the form the date takes - "Thursday" could mean the same as "6/29/2017". Again, this all has to live inside that one function, since it can't rely on another interpreter before the information gets passed along (you could build a library shared across all modules, but with different variables per module this would get out of control really fast).

    Enter Groovy scripts. I chose Groovy because it integrates well with the Java backend that already existed. Instead of having Time.Sunset do all sorts of logic, the action is handed off to a Groovy script, which can call other Stark functions as needed. This way the Time.Sunset function can do one simple thing: calculate the sunset time from a Unix timestamp. This is all pretty abstract at this point, but suffice it to say it was a major overhaul of the existing system. Let's look at how this works in practice.

    Input from user: "What time is the sunset on Thursday?"

    Stark takes the text and passes it to the Rasa server which returns the following:

    {
      "entities": [
        {
          "end": 20,
          "entity": "sunrise_or_sunset",
          "extractor": "ner_crf",
          "start": 14,
          "value": "sunset"
        },
        {
          "end": 32,
          "entity": "time",
          "extractor": "ner_duckling",
          "start": 21,
          "text": "on Thursday",
          "value": {
            "grain": "day",
            "others": [
              {"grain": "day", "value": "2017-06-29T00:00:00.000-05:00"},
              {"grain": "day", "value": "2017-07-06T00:00:00.000-05:00"},
              {"grain": "day", "value": "2017-07-13T00:00:00.000-05:00"}
            ],
            "value": "2017-07-06T00:00:00.000-05:00"
          }
        }
      ],
      "intent": {
        "confidence": 0.6721804000375875,
        "name": "check_sun"
      }
    }

    This is based on training already done that identifies the phrase as belonging to the "check_sun" intent. This action returns either the sunrise or the sunset for a given time. The entities returned are the arguments to the action....
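    The mapping from a parse result like the one above to an action call can be sketched as follows. The helper names and dispatch logic are my own illustration, not Stark's actual code; only the JSON shape comes from the Rasa output shown:

```python
# Sketch of turning a Rasa parse result into an action call.
# The structure matches the example response above; the function
# names and the confidence threshold are hypothetical.
def pick_entity(parsed, entity_name):
    """Return the value of the first entity with the given name, if any."""
    for entity in parsed["entities"]:
        if entity["entity"] == entity_name:
            return entity["value"]
    return None

def resolve(parsed, threshold=0.5):
    """Map an intent to an action name plus arguments, if confident enough."""
    intent = parsed["intent"]
    if intent["confidence"] < threshold:
        return None  # too unsure -- fall back to asking the user to rephrase
    if intent["name"] == "check_sun":
        time_value = pick_entity(parsed, "time")
        # duckling returns a dict; the resolved date lives under "value"
        when = time_value["value"] if isinstance(time_value, dict) else time_value
        return ("check_sun", {
            "which": pick_entity(parsed, "sunrise_or_sunset"),
            "date": when,
        })
    return None  # unknown intent
```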


  • Build System Fine Tuning

    robweber - 12/07/2016 at 18:47

    It's been a while since I've had time to post anything here. I've been spending a lot of time reconfiguring my build system for this project so that I may eventually be able to publish something usable.

    My original build system was rather piecemeal, definitely the kind of thing you'd expect from a homegrown project that took on various layers over time. It utilized several Eclipse Java projects to break out the functionality of the Stark Framework, Module Libraries, and various helper libraries. To pull everything together there was a Git repo full of jar files and several Ant build files. A massive Jenkins project built the whole thing into a single tar archive that included everything, along with the configuration files. As you can imagine this system very quickly became problematic as the complexity of the project increased. Something as simple as adding a dependent library for a new Stark Module trickled down into numerous build files and various Eclipse projects when testing. To further complicate things I decided at one point that the modules themselves should be dynamically loadable. For anyone familiar with plugins in the Spigot (Bukkit fork) project of Minecraft, that is basically where I got the idea. Modules can be bundled in their own Jar files that can be put into a "modules" directory and loaded at run time.

    This system, while ugly, worked for a long time. Then came the problem of attempting to use Stark outside my own home. I had a friend that wanted to "test drive" the system and I needed to be able to not only build, but distribute it. To further complicate things I wanted to include some modules but not others. My original massive Jenkins job was built on the idea of just bundling everything.

    About this time I knew I needed to make this system better and use a real build system. I turned to Maven, since I was familiar with it from smaller projects I'd done previously. Re-configuring all these projects to use Maven and end up with a customizable result took lots of time and broke a lot of things in the testing process. The result now is a series of projects that can be built in Jenkins using the Maven lifecycle and uploaded to a Jar repository. Maven assemblies can build tar files for distribution easily. I also wrote a few small Bash scripts to help with updates and module distribution. Since the idea behind the modules is that they should be independent of the main Stark Framework there is a downloader bash script that can pull in modules from an HTTP server based on module name and version number. Now the modules can be independently installed and updated separate from the main program. This really saved time in testing and distributing things since the modules are what most often change. A quick snippet of how this would work on a Linux CLI is:

    #stop the currently running service
    sudo service stark stop
    #download, or update, a module - syntax is module_name module_version
    #this removes old versions of the module and downloads the new one
    ./ stark-core-modules 0.3.0
    #start things back up
    sudo service stark start

    I'm hoping in the next couple months to release some files and instructions via this site.

  • Android and Contacts Integration

    robweber - 08/16/2016 at 18:57

    One of the most convenient ways to interact with anything is to just do it right on your smart phone. Interacting with Stark via my phone was actually one of the first reasons for creating any sort of client library. Most of the functionality you'd want with an automated assistant like this is already built right into the phone; which is why a lot of companies put the assistant right in the phone and offload the processing to the cloud.

    The Stark Framework was always meant to be more about personal automation than some sort of glorified phone-based notification system. As I've written in other logs, the goal of the Stark Framework has always been to aggregate information from other systems to provide event-based automation and control. Other systems and libraries do things like SMS, lighting control, and media management far better than anything I could write up myself. The idea was to leverage those systems and get their data into a place where you can feed it into other systems - thus linking them together. With this in mind, the Stark Android app was always meant to be a client that interfaces with the system, rather than a component of the framework itself.

    That being said there are some interesting things you can do with data from the phone, given that it's sort of the hub for all communications these days. Below is a short video showcasing the Android app and what it can do to interact with the system.

    What is in the Video?

    This video was made in my development environment running an Android Virtual Device. It's a bit slower, but easier to test and screen capture than my actual phone. The only component missing is the speech-to-text feature (which is arguably the coolest). You'll see the "recognizer not installed" message, which is where you'd launch the built-in Google speech recognizer. Google does all the processing here, and the result is returned to the app to send as regular text.

    As you'd expect the app can send commands and receive responses from the Stark server application. I tried to emulate a chat window type system, but I'm not very good at UI programming so it isn't super flashy. As with the other methods I've shown, this is user based so you need a login. You can also choose to send events from the phone to the Stark server. These events are:

    • Incoming Call events
    • SMS Received events
    • Battery Low Events
    • Wifi Connected Events (based on a list of SSIDs)

    These events are sent to the Stark server and can be used as the basis for Monitors and Triggers. A really basic example: when I'm home, I have a monitor that forwards all my SMS and phone call events to my Kodi media center instances. Even if my phone is in another room, I'll see "Text Message received from XYX" as a notification on my TV. I do the same with the low battery event. I often leave my phone around the house during the day, and knowing it's about to die is a useful notification. A more complex example could be a Wifi-based Trigger. You could have a task set to run when your phone connects to your home Wifi SSID. Since you can tap into any other integration the Stark Framework has, the Trigger could turn on lights, switches, or music, turn on a computer, or send a text to someone else just saying "hey I'm home".

    One other function of the Stark Android app is its integration with your phone's contacts. In the early days of the framework I wrote a lot of code that tried to integrate with various contact management systems. It occurred to me, once the Android app was working, that since your phone is already aggregating all these contacts, why not use it as the collection point instead? The app can send contacts from your phone to the Stark server, where they are stored on a per-user basis.

    The benefit of this is that Stark suddenly has access not only to the Users configured in the system (which extend the Contact class anyway) but also to other contacts you might care about. The ContactManager class within the framework is available...


  • Quick Note on Users and Security

    robweber - 07/20/2016 at 15:50

    The very first versions of the framework made no distinction between users and had no security around actions. It was designed as a personal project to extend some accessible frameworks and provide basic automation. As things grew, the need for individual users, and a security context, became more apparent. Family members wanted to use parts of the functionality; with the integration of the Icinga monitoring system, co-workers did too. Pretty soon I needed a way to segment which users were allowed to use which functions.

    There are two main security areas: the first is at the User level, the second at the Group level. The StarkUser object is how a user accesses the system. Different modules can extend the user object by adding additional metadata keys that can be tied to a user. By default, things like the username, password, phone, and email are given. In the screenshot below you can see the addition of a field for a GitLab user token from the GitLab module, the Minecraft username from the Minecraft module, and a Pushover token from the Pushover module. Any OAuth tokens for other apps (like SmartThings) are also stored per StarkUser. There is one very simple flag that serves as an override to the entire system - the "Is Admin" boolean value. If this is checked, the user is given Administrator rights to the system and any other security group setting is ignored.

    Once a user is set up they can authenticate to the framework with their username and password. Each user can set up their own monitors, triggers, and jobs. When using a client the authentication system returns an "application key" that is tied to the user. After their initial authentication this key is used for all interactions with the system. For simple clients, like the Bash client script, the application key is just a value inserted into the top of the script.
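    The client side of that application-key flow can be sketched like this. The JSON field names are assumptions for illustration; only the flow itself (password once, key on every later request) mirrors the description above:

```python
import json

# Sketch of the client-side half of the application-key flow.
# "application_key", "key", and "command" are assumed field names,
# not the actual Stark wire format.
class StarkSession:
    def __init__(self):
        self.app_key = None

    def handle_auth_response(self, raw):
        """Store the application key returned after username/password auth."""
        self.app_key = json.loads(raw)["application_key"]

    def sign(self, command):
        """Attach the stored key to every subsequent request body."""
        if self.app_key is None:
            raise RuntimeError("authenticate first")
        return json.dumps({"key": self.app_key, "command": command})
```

For a simple client like the Bash script, `app_key` would just be the value pasted into the top of the script instead of something fetched at runtime.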

    Unless they are an administrator, a basic user really can't do a whole lot. Any additional functionality is added via the Security Groups system. By default every user is placed in the "Everyone" group with a very basic set of permissions. Additional groups can be made, and they are explicitly given permission to perform various functions within the framework. These functions are pulled from each module's self-reported list of methods. Users can belong to more than one security group, allowing for overlap of permission sets.

    If a user is allowed to create new Jobs on the system, they are creating jobs that are only visible to their user account. These jobs can be shared with other security groups on the Job screen. For example, I created a set of jobs adding some integration with the Icinga Monitoring system that we use where I work. I've shared these tools with some co-workers so we can all easily schedule downtime or acknowledge issues via our IM client. The co-workers can't do anything else on the system, but I've shared these jobs with them.

    The flow of operations during any method call checks a few things. The first is whether the user authentication (application key) is valid. If it is, the Job is found and a check is done to see if the Job is accessible to the user. If it is, the job is created based on the input. Since each job may contain multiple Methods (actions), each action is checked as it executes to see if the user is allowed to run that method. There are some built-in security functions, so the check across all security groups is quick - something like:

    StarkMethod foundMethod = m_modules.findMethod(job.getNext());
    if(foundMethod != null)
    {
        if(job.getOrigin().getUser() != null && m_security.hasPermission(job.getOrigin().getUser(), foundMethod))
        {
            //run the command here
        }
        else
        {
            //abort job, return message to user
        }
    }
    else
    {
        //method no longer exists (module disabled?)
        //abort and return a no-method error to the user
    }

    These two security objects are very simple to use, but provide some needed segmentation to the framework. Individual users...


  • Custom Scripts - Internet Speed Test

    robweber - 07/13/2016 at 15:15

    So far all the functionality I've shown has been based on writing functionality into a StarkModule class that allows actions to be run. This is the use case for the framework about 90% of the time, but what about those cases where a Java-based library isn't available, or really even necessary? Sometimes you just need a quick call out to a local script, or you want to monitor another program entirely via its standard output.

    Enter the Scripts Module. This module has one defined action, Script.Run().

    As you can see in the screenshot there are a few arguments that the Run() method will accept. The two required are the command to run, and the event to trigger when the script completes. This uses the Java ProcessBuilder class to kick off another process and then return the output, generating the given event. The optional commands can further customize how this process runs.

    • background - if this command should run in the background instead of waiting for the result
    • start_message - a custom message to display when backgrounding the command. If blank, Stark returns a generic "starting script" message
    • starting_dir - if the starting directory of your script matters set it here

    Setting up this action in a job allows you to call out to custom written functions, or just external programs, to run tasks. When they complete the resulting message will be returned to the Stark Framework as an event.

    Internet Speed Test

    I've been having some issues with my ISP and my internet speed. The speed I'm paying for is often not the speed I get - by a lot. I wanted a quick way to test my internet speed without going to a speed test website every time - especially since I'm often not at home and want to run the test remotely. I found an awesome Python script called speedtest-cli. It does exactly what it sounds like: runs the speed test via the CLI in a Python wrapper and returns the result. Perfect.

    Once I had this working I wanted to turn it into a Stark task using the Script.Run method. This took a little work. First I had to build a python wrapper for the script.

    import subprocess
    import json

    #get the speed test data - the script path was blank in the original post;
    #'' (the script named above) is assumed here
    cmd = ['python', '', '--simple', '--share']
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, err = process.communicate()
    #results are returned as a series of newlines
    output_array = out.decode().splitlines()
    result = {'message': 'Internet speed test has completed, ' + output_array[1] + ' and ' + output_array[2],
              'data': {'media_picture_url': output_array[3][15:]}}
    #return the result to Stark as JSON on stdout
    print(json.dumps(result))

    Stark expects external scripts to return a JSON object, so I needed to wrap the output that way. The returned result looks something like this (note the media_picture_url in the data; this can be used by Clients that can display pictures):

    {"message": "Internet speed test has completed, Download: xxx Mbit/s and Upload: xxx Mbit/s", "data": {"media_picture_url": "http://picture_url_here"} }

    So once this was working, I set up the Stark Job using the Script.Run method and tested it using an instant messenger client. One of the many Stark Clients I've written is an integration with XMPP servers via the Smack library. In order to talk with the Stark Framework over this connection, your IM username must be set as a field in the Stark User setup (more on users and security in a later post).

    Looking at the IM window, notice the phrase "let me know when that is done". This triggers a job I call the "Inferred One Time Monitor". The idea behind it is that Stark sets up a one-time monitor for the event we expect, SpeedTest.Complete, and forwards the result to the "work" client group. This is because my IM client runs off my work computer - I'm running this remotely. Stark is able to infer all this from the previously run command and pull the needed event from that result. Once the event is triggered, the monitor disappears. You could set up a permanent monitor for that event to a specific client - like a...


  • OctoPrint Lib Source

    robweber - 07/11/2016 at 15:34

    For anyone interested, I've pulled the OctoPrint library code from the OctoPrint Stark Module and made it a stand-alone repository.

    I've included a basic README file to help get started, but other than one dependency to encode/decode the JSON values it's a pure Java implementation that should get you going very easily. I imagine with a little effort some automated tasks could be scripted using just this library and a working OctoPrint server.

    For any functions that aren't implemented, either just request them, or code them up and submit a pull request (please!). The OctoPrint API is easy to deal with; I just implemented the stuff I needed at the moment.

  • OctoPrint and Triggers

    robweber - 07/09/2016 at 15:03

      If you're familiar with 3D printing, you've probably at least heard of OctoPrint. It is an awesome web app designed to let your 3D printer go "headless". You can store 3D print files, take time lapse video, and control aspects of your printer with OctoPrint rather than connect via a USB to your computer. For the purposes of the Stark Framework, OctoPrint is great because it comes with a REST API.

      I have a LulzBot Mini 3D printer connected to OctoPrint. To help automate some tasks, I've developed a small Java library to work with the OctoPrint REST API and then created a Stark Module to issue commands. To get some push events going, I went the extra mile of writing a small OctoPrint plugin that pushes events like "Print Complete" to Stark for immediate notification. Using this integration I was able to create some of the most complex rule sets Stark has handled so far.

      What Is In the Video?

      In this video I'm trying to showcase a few things: the OctoPrint integration, some advanced Job setup, and the Trigger system. The Trigger system is something I haven't really touched on before. Most of the things I've shown so far are very simple give-a-command, perform-an-action scenarios. The Trigger aspect of the Stark Framework allows a Trigger to be set up for a specific type of event. When that event happens, it triggers a Job to perform other actions. A simple example: when an event like "PlaybackStopped" is sent from Kodi, it could trigger a Job to turn lights on in your living room. Triggers can have additional conditional properties related to time, or the nature of the event.

      In order, the actions performed in the video are:

      1. Issue a command to start the 3D printer. This triggers SmartThings to turn on the outlet the printer is connected to, and tells OctoPrint to connect to the printer.
      2. Issue a change filament command. This is a custom convenience command that moves the print head into an easy-to-access position and heats the hotend to the temp needed to change out the filament type.
      3. Issue a start print job command. Pretty basic: start the print job.
      4. Activate the shutdown trigger. This activates the trigger for the "PrintComplete" event, which makes Stark issue the commands to disconnect from the printer and turn off the SmartThings switch when the print is complete.
      5. Ask for a status update. Ask Stark for the progress of the print job in two different formats: first via CLI, and second via SMS. Since SMS can display picture messages, Stark will forward a webcam snapshot from OctoPrint to the device.

      The picture function can be utilized by any module by setting a picture into the MEDIA_URL variable in the response. It is up to the Client to decide whether to do anything with it when it is available. Some clients, like the Bash script interface, do nothing. Others, like the IM or SMS integrations, can display pictures and will grab the image.

  • The StarkBrain File

    robweber - 06/24/2016 at 15:30

    Continuing with the idea of more of a "deep dive" into the structure, this is an overview of the backend storage of the framework. Dubbed the StarkBrain, this is how the system attempts to store information in an abstract way so it can be easily recalled and manipulated by the different modules.

    In the early days of the framework the idea of a "brain" for the system became evident very quickly. There are things like settings to consider, conversation history, data around jobs, etc, etc. The easiest way to solve this in the beginning was to write some flat files, but as the system got more complex a better way was needed.

    Currently the Stark Framework utilizes two data stores. The first is for the History object, which stores all event and command history. This is a not-very-exciting SQLite database. It has a defined structure, is easily portable, and quick to query. Usually only information about the previous event, or events within the last day are needed anyway.

    The more complex, and more fun, data store is the Java object known as the "StarkBrain". It uses Redis as a backend, with JSON as the data storage mechanism. The abstraction here is done via the excellent Jedis and json-simple Java packages.

    This structure allows modules to store their own unique lists, maps, and data objects without having to know anything about the underlying structure. Each simply has to implement the Java interfaces to dump the JSON information and read it back in again. In the case of strings and primitives, those can be stored directly. Here is an example from the iCal Calendar module for the "AddCalendar" method.

    //get all the calendars for this user
    Map<String,UserCalendar> allCalendars = brain.getMap("calendars", UserCalendar.class);
    UserCalendar calendar = allCalendars.get(user.getUsername());
    //add a new calendar based on the iCal URL
    calendar.addCalendar(args.getArgString("name"), args.getArgString("url"));
    //initiate an update
    allCalendars.put(calendar.getUser(), calendar);
    //save everything (the exact save method name was lost from this excerpt)
    brain.saveMap("calendars", allCalendars);

    Notice the calls to the brain's getMap() and save functions. Every module is issued its own namespace within the Redis DB by using its unique Java class name. This namespace is accessed via the "BrainFile" object, which can store a simple string, a Java List, a Java Map, or a unique object type that implements the JSONAware interface. In this example the UserCalendar class implements that interface, so the brain object is asked to load the JSON object for the key "calendars" and create a map, converting each JSON object to a UserCalendar object. The save method similarly takes the object and calls UserCalendar.toJSON() to write these objects back to the Redis data structure as JSON strings.
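    The BrainFile idea translates to other languages easily. Here is a hedged Python sketch, with a plain dict standing in for Redis, and get_map/save_map as my names for the accessors (the exact Java save method name doesn't appear in this excerpt):

```python
import json

# Python sketch of the BrainFile concept: each module gets a namespace
# keyed by its class name, and values round-trip through JSON. A plain
# dict stands in for Redis here; method names are illustrative.
class BrainFile:
    def __init__(self, store, namespace):
        self._store = store          # shared key/value store (Redis stand-in)
        self._namespace = namespace  # e.g. the module's class name

    def _key(self, name):
        return self._namespace + ":" + name

    def get_map(self, name, from_json):
        """Load a map, converting each stored JSON value with from_json."""
        raw = self._store.get(self._key(name), "{}")
        return {k: from_json(v) for k, v in json.loads(raw).items()}

    def save_map(self, name, mapping, to_json):
        """Serialize each value with to_json and store the map as one string."""
        raw = json.dumps({k: to_json(v) for k, v in mapping.items()})
        self._store[self._key(name)] = raw
```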

    In trying to emulate the brain theme a little more, an additional function was added: the ability to forget. Using the Redis EXPIRE feature, a module can purposely set a value to be forgotten by the brain system. This is useful for values that only maintain their relevance for a particular time period. I've often used it in situations where you only want to know about an event once a day, just saving a key value for the event with an expiration at midnight. If the key still exists, don't resend the event.

    Another interesting feature is that modules can load each other's BrainFile information. Since the namespaces are created based on the java class name, other modules can load these namespaces. An application of this can be seen in the Network module. The Wake On Lan feature can take a MAC address, or optionally the name of a specific Kodi computer name. This is a shortcut method to knowing the MAC since the Kodi module already records the MAC address of each Kodi machine that registers with it.

    //load the kodi brain file
    BrainFile kodiBrain = StarkBrain.getInstance(KODI_CLASS_NAME);
    Map<String,XbmcHost> hosts = kodiBrain.getMap("hosts", XbmcHost.class);
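    For context on what happens with that MAC once it's resolved: a Wake-On-Lan "magic packet" is just six 0xFF bytes followed by the target MAC repeated sixteen times, sent as a UDP broadcast. A sketch of the packet construction (the class and method names are illustrative, not the actual Network module API):

```java
// Illustrative Wake-On-Lan packet builder (not the real Network module
// API). A magic packet is six 0xFF bytes followed by the six-byte MAC
// address repeated sixteen times, for 102 bytes total.
public class WakeOnLan {

    public static byte[] magicPacket(String mac) {
        // accept both aa:bb:cc:dd:ee:ff and aa-bb-cc-dd-ee-ff forms
        String[] parts = mac.split("[:-]");
        byte[] macBytes = new byte[6];
        for (int i = 0; i < 6; i++) {
            macBytes[i] = (byte) Integer.parseInt(parts[i], 16);
        }
        byte[] packet = new byte[6 + 16 * 6];
        // header: six 0xFF bytes
        for (int i = 0; i < 6; i++) {
            packet[i] = (byte) 0xFF;
        }
        // body: MAC repeated sixteen times
        for (int i = 0; i < 16; i++) {
            System.arraycopy(macBytes, 0, packet, 6 + i * 6, 6);
        }
        return packet;
    }
}
```

    The packet would then typically go out over a DatagramSocket to the broadcast address on UDP port 9.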

  • Mapping Text to Actions

    robweber • 06/09/2016 at 20:57

    Rather than a video, I took a few screenshots to show the start-to-finish flow of a command going from text input through to an action. In previous logs I've shown the high-level processing: how you actually interact with Stark via voice or text. Here I'm going to elaborate on more of the setup.

    When you turn Stark on for the first time he knows how to do basically nothing. The system is meant to be a framework, so other than a few default commands like "are you there" you really won't get a lot done. Enter the web interface.

    This is the commands page of the web interface. On the left hand side are all the various Methods you can run. These are loaded from any of the enabled Modules via a config file bundled with the module. This is basically a list of all the actions you can perform. Clicking on a method displays any arguments that the method accepts (optional and required) as well as a quick way to test it.

    All tests are done via a JSON query to the Stark server, which returns a JSON response. Responses have the following format:

    {
        status: "Success/Failure/Question/In Progress",
        message: "Message returned as text for user",
        time: 1465842197,
        arguments: {
            arg1: "text",
            arg2: "text"
        },
        data: {
            response: "Message again",
            wave_file_name: "name of generated voice file",
            long_response: "Longer response text if available",
            response_arg_1: "",
            response_arg_2: ""
        },
        method: "Module.Action",
        command_id: 00000000,
        originator: {
            id: 00000000,
            name: "module.full.package",
            callback: true/false,
            instance: "instance_name",
            user: "username"
        }
    }
    The main things to notice here are that you always get the status of the response: either Success, Failure, Question, or In Progress. Question responses mean the system is waiting for more input; In Progress responses mean the action succeeded but kicked off a threaded job which may still be running. There are also JSON hashes that contain all the arguments sent to the server, as well as a response object with any response information returned (data). The response can contain more JSON elements that can be used programmatically. An example would be the weather module, where you'll get the weather information as an object with all the data points (temp, percent precip, etc) as well as a written message. The response object also contains the file name of a text-to-speech wave file that is generated and saved before the response is sent. This allows clients to poll the server for the response in a speech format.
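    Client code mostly branches on that status field. A hypothetical client-side sketch (the enum, parsing, and reactions are mine; the real Stark clients may handle this differently):

```java
// Hypothetical client-side handling of the four response statuses
// described above. Parsing and reaction strings are illustrative.
public class ResponseStatusDemo {

    enum Status { SUCCESS, FAILURE, QUESTION, IN_PROGRESS }

    // map the raw JSON status string to an enum value
    static Status parse(String raw) {
        switch (raw) {
            case "Success":     return Status.SUCCESS;
            case "Failure":     return Status.FAILURE;
            case "Question":    return Status.QUESTION;
            case "In Progress": return Status.IN_PROGRESS;
            default: throw new IllegalArgumentException("unknown status: " + raw);
        }
    }

    // what a client would do next for each status
    static String react(Status s) {
        switch (s) {
            case QUESTION:    return "prompt the user for more input";
            case IN_PROGRESS: return "poll the server for the final result";
            case FAILURE:     return "report the error message";
            default:          return "speak/display the response";
        }
    }
}
```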

    Once you confirm that a given method works, the next step is attaching it to an action - or Job. A Job takes the method and wraps it in a phrase that a human would actually use to trigger it. Off-the-shelf systems - like Alexa - allow the programmer to specify exactly what these phrases are. This makes them very specific, which allows for good speech recognition. It also makes them very inflexible.

    Stark allows you to define your own expressions using regex syntax for matching. The simplest expressions are ones that trigger an action with no arguments. This is the case with the default "Are you awake" command. This is a very simple Job that triggers based on the phrases "are you awake" or "are you there". Notice how you can use the power of the regular expressions to define variances in what you say/type. It's this variance that allows for multiple phrases to trigger the same actions. It more closely resembles how a human would actually interact. Besides, if the phrase isn't quite right, you can go back in and tweak it.
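    In regex form, that default Job could be as simple as the following; this is my guess at the expression (the screenshot in the original log shows the real one):

```java
import java.util.regex.Pattern;

// A guess at the default "are you awake" Job expression: one pattern
// covering two trigger phrases via alternation.
public class AwakeJob {

    static final Pattern TRIGGER =
            Pattern.compile("are you (awake|there)", Pattern.CASE_INSENSITIVE);

    // true if the typed/spoken phrase should trigger the Job
    static boolean matches(String phrase) {
        return TRIGGER.matcher(phrase.trim()).matches();
    }
}
```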

    For a more complicated example I'm going to use the SmartThings method of SmartThings.ToggleSwitch. This action will turn on or off a given "switch" type device within the SmartThings hub. This is any device, defined by name, with a switch characteristic associated with it. Here is an example regular expression you could use to trigger the method:

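    Since the screenshot of the expression isn't reproduced here, a hypothetical expression of that general shape (not the actual one from the original log) could use named capturing groups so each group feeds a method argument:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical expression for SmartThings.ToggleSwitch: named groups
// capture the on/off state and the device name as method arguments.
public class ToggleSwitchJob {

    static final Pattern TRIGGER =
            Pattern.compile("turn (?<state>on|off)(?: the)? (?<device>.+)");

    // returns "<device>=<state>" if the phrase matches, otherwise null
    static String parse(String phrase) {
        Matcher m = TRIGGER.matcher(phrase.toLowerCase().trim());
        if (!m.matches()) return null;
        return m.group("device") + "=" + m.group("state");
    }
}
```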

    Examining this you'll see various expression groups....


  • Integrations Demo

    robweber • 06/07/2016 at 20:01

    The last video showcased pretty much one type of integration, with the Kodi Media Center. Although it also showed the speech synthesis and voice-to-text features, I wanted to give more of an overview of how the integrations work and how you can set up actions and events.

    What are you seeing?

    This video shows quite a few of the different integrations, specifically:

    All of these integrations exist as Modules that are then tied to phrases within Jobs that trigger them. Pretty simple, ask a question, get an answer. What is more impressive, at least to me, is the context specific interactions that you can also see happen.

    The most complex example is the file transcoding one. The framework, through the module, can look up the file, gather information about how to transcode it, and then also update itself with a monitor that reports completion via SMS. These are fairly complex question/answer interactions that require context for what is happening from step to step.

    As opposed to the speech-to-text examples in the last video, I opted to use the Stark Bash Client instead. Mostly this is so the replies from the system can be understood. A post just on the clients is probably forthcoming as well.

    You can also catch sort of a "sneak peek" at the web interface. In my next video I want to go through it more and show how the Modules are configured and how that translates into more complicated event triggers.

    Thanks to the people that liked this project so far! This is my first attempt at explaining it beyond friends/family so anything you want to see or know please ask.
