Introduction

Blue Prism, through its Technology Alliance Program (TAP), offers integrations with several intelligent services from Google, IBM, and Microsoft. While these serve many use cases well, sometimes you may want to use a custom ML model instead.
After trying a few approaches, I found one that worked well for me. This article lays out the steps required to implement a custom model and integrate it into Blue Prism. By the end, you will have a fair idea of how to deploy a machine learning model using the Flask framework in Python and use it in Blue Prism.

Options to Implement Machine Learning Models

Rewriting the whole model in a language that Blue Prism supports may seem like a good idea, but replicating those ML methods takes considerable effort, and languages such as C# or VB lack mature ML libraries. Web APIs, on the other hand, make it easy for cross-language applications to work together: a developer who wants to create ML-powered automation only needs the URL endpoint where the API is served.
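For example, once a model is exposed behind an HTTP endpoint, any client that can make an HTTP request can consume it. Here is a minimal, hypothetical sketch in Python (the URL and payload are placeholders; the actual API is built step by step later in this article):

import json
import requests

# Hypothetical endpoint where a trained model is served
url = "http://192.168.1.10:8000/predict"
payload = [{"0": 153, "1": 50, "2": 37}]  # one record of input features

resp = requests.post(url, data=json.dumps(payload),
                     headers={'Content-Type': 'application/json'})
print(resp.status_code)  # 200 if the service answered
print(resp.json())       # the model's prediction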

Python Environment Setup & Flask Basics

Linux is widely used for training and developing ML models. The Anaconda distribution helps create isolated Python environments that keep dependencies separated, and the environment settings can be shared.

  1. Download the Miniconda installer for Python from the Miniconda download page, or fetch it directly:

  2. $ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh

  3. $ bash Miniconda3-latest-Linux-x86_64.sh

  4. Follow the installer prompts to complete the installation

  5. $ source ~/.bashrc

  6. After running $ conda, you should see the list of available commands and help text.

  7. To create a new environment, run:

    $ conda create --name <env-name> python=3.6
    

  8. After creating the environment, activate it using:

    $ source activate <env-name>
    

  9. Install the Python packages required for your machine learning methods (e.g. pandas, TensorFlow, etc.). These two are essential for creating the API:

    1. Flask: $ pip install Flask

    2. gunicorn: $ pip install gunicorn



    Note: These packages must be installed inside the virtual environment (i.e. after activating it).

    We’ll try a simple Flask application to understand how it works and serve it using gunicorn:

  10. Open any text editor and create a helloworld.py file

  11. Write the code below:

  12. from flask import Flask

    app = Flask(__name__)

    @app.route('/users/<string:username>')
    def helloworld(username=None):
        return "Hello {}!".format(username)
    
  13. Save the file and return to the terminal.

  14. To serve the API (start it running), execute $ gunicorn --bind 0.0.0.0:8000 helloworld:app in your terminal. It will start the server on localhost.

  • In your browser, try: http://localhost:8000/users/any-name

With this simple Flask application we created a web endpoint that is accessible over the network. In the same way, Flask lets us wrap a machine learning model and serve it as a web API efficiently.
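You can also hit the endpoint programmatically rather than from a browser. A quick sketch using the requests library (assuming it is installed in the environment via pip):

$ Python> import requests
$ Python> resp = requests.get("http://localhost:8000/users/any-name")
$ Python> resp.text

'Hello any-name!'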

Creating a Machine Learning Model

Here’s a simple machine learning model for demonstration: it predicts gender from a person’s height, weight, and shoe size.


from sklearn import tree
import pandas as pd
import dill as pickle
# Ensure that all the dependency packages are installed in your virtual environment.

clf = tree.DecisionTreeClassifier()
# instantiate a Decision Tree classifier

# Training data: [height, weight, shoe_size]
X = [[177, 70, 40], [160, 60, 38], [154, 54, 37], [159, 55, 37], [171, 75, 42], [181, 80, 44], [177, 70, 43], [166, 65, 40], [190, 90, 47], [175, 64, 39], [181, 85, 43]]

# Labels for the training data
Y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female', 'female', 'male', 'male']

# Read test data from a local file, or use literal values instead:
# test_df = [[153, 50, 37]]
test_df = pd.read_csv('/home/webonise/test.csv', header=None)

clf = clf.fit(X, Y)
# train the model

prediction = clf.predict(test_df)
# use clf.predict(test_data) to predict labels for the test values
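The training set above is tiny, so fitting is instant; on real data you would typically hold out a test split and check accuracy before serving the model. A minimal sketch using scikit-learn utilities (this step is an addition, not part of the original demo; X and Y are the lists defined above):

from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hold out ~30% of the toy data to score the classifier on unseen rows
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
clf = tree.DecisionTreeClassifier().fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))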


Saving the Machine Learning Model: Serialization & Deserialization

In computer science, in the context of data storage, serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer, or transmitted across a network connection link) and reconstructed later in the same or another computer environment.

In Python, pickling is a standard way to store objects and retrieve them in their original state. To give a simple example:


$ Python> list_to_pickle = [1, 'here', 123, 'walker']

$ Python> import pickle

# Pickle the list
$ Python> list_pickle = pickle.dumps(list_to_pickle)

$ Python> list_pickle

b'\x80\x03]q\x00(K\x01X\x04\x00\x00\x00hereq\x01K{X\x06\x00\x00\x00walkerq\x02e.'

# Load the pickled list back
$ Python> loaded_pickle = pickle.loads(list_pickle)

$ Python> loaded_pickle

[1, 'here', 123, 'walker']


The pickled object can also be saved to a file so that it can be retrieved and used when required. It is advisable to keep all the model-training code in a separate training.py file.


Install dill with $ pip install dill, then:

import dill as pickle

# 'model' here is the trained classifier from the training code
filename = 'model_v1.pk'
with open('../flask_api/models/'+filename, 'wb') as file:
    pickle.dump(model, file)

The model is now saved at the location above. The complete training.py script is shown below; once the model is pickled, the next step is to create a Flask wrapper.

""" File Name: training.py """

from sklearn import tree
import dill as pickle


def train():
    """Train a Decision Tree classifier on the sample data."""
    model = tree.DecisionTreeClassifier()
    # Training data: [height, weight, shoe_size]
    X = [[177, 70, 40], [160, 60, 38], [154, 54, 37], [159, 55, 37], [171, 75, 42], [181, 80, 44], [177, 70, 43], [166, 65, 40], [190, 90, 47], [175, 64, 39], [181, 85, 43]]
    Y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female', 'female', 'male', 'male']
    model = model.fit(X, Y)
    return model


if __name__ == '__main__':
    model = train()
    filename = 'example_model.pk'
    # Save the trained model to disk
    with open('/home/webonise/flask-app/models/'+filename, 'wb') as file:
        pickle.dump(model, file)



Before that, make sure the pickled object file works fine: let's load it back and do a prediction:

$ Python> with open('/home/webonise/flask-app/models/'+filename, 'rb') as f:
    loaded_model = pickle.load(f)

$ Python> loaded_model.predict(test_df)

array(['female'])

Creating an API using Flask

While constructing the wrapper function, apicall(), there are three essential parts:

  1. Getting the request data (the records for which predictions are to be made) in JSON format

  2. Loading the pickled model and predicting the values

  3. Converting the predictions to JSON and responding with status code 200

HTTP messages are made of a header and a body. As a de facto standard, the majority of body content is sent in JSON format. We'll POST the incoming data as a batch to the /predict endpoint to get predictions.
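To see why the records orientation is convenient here: on the server side, pd.read_json with orient='records' turns such a payload straight back into a DataFrame with one row per record. A small sketch:

import pandas as pd

payload = '[{"0":153,"1":50,"2":37}]'  # one batch record: height, weight, shoe size
test = pd.read_json(payload, orient='records')
print(test)
#      0   1   2
# 0  153  50  37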

(Note: You can send plain text, XML, CSV, or images directly, but for the sake of interchangeability it is advisable to use JSON.)

""" File Name: server.py """


import json
import numpy as np
import pandas as pd
import dill as pickle
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/predict', methods=['POST'])
def apicall():
    """A Pandas DataFrame is sent as the payload of the API call."""
    try:
        test_json = json.dumps(request.get_json())
        print(test_json)

        test = pd.read_json(test_json, orient='records')
        print(test)
    except Exception as e:
        raise e

    # Name of the pickled model file saved by training.py
    clf = 'example_model.pk'

    if test.empty:
        return bad_request()
    else:
        # Load the saved model
        print("Loading the model...")
        with open('/home/webonise/flask-app/models/'+clf, 'rb') as f:
            loaded_model = pickle.load(f)

        print("The model has been loaded...doing predictions now...")
        prediction = loaded_model.predict(test)

        # Wrap the predictions in a DataFrame and serialize them
        pre_df = pd.DataFrame(np.array(prediction))
        responses = jsonify(pre_df.to_json(orient="records"))
        responses.status_code = 200

        return responses


@app.errorhandler(400)
def bad_request(error=None):
    message = {
        'status': 400,
        'message': 'Bad Request: ' + request.url + ' --> Please check your data payload...',
    }
    resp = jsonify(message)
    resp.status_code = 400

    return resp

Once done, run: $ gunicorn --bind 0.0.0.0:8000 server:app


Let's generate some prediction data and query the API running locally at http://0.0.0.0:8000/predict using the following Python code (execute each line separately):


#importing the required modules
$ Python> import json
$ Python> import requests
$ Python> import pandas as pd

#setting the headers to send and accept json responses
$ Python> header = {'Content-Type': 'application/json', 'Accept': 'application/json'}

#reading the test batch
$ Python> df = pd.read_csv('/home/webonise/test.csv', encoding="utf-8-sig", header=None)

#converting the Pandas dataframe to json
$ Python> data = df.to_json(orient='records')

$ Python> data

'[{"0":153,"1":50,"2":37}]'

#POST /predict
$ Python> resp = requests.post("http://0.0.0.0:8000/predict", data=json.dumps(data), headers=header)

$ Python> resp.status_code

200

(If this doesn't return the expected status code, send the request without dumping the data, since df.to_json already returns a JSON string and json.dumps encodes it a second time:

resp = requests.post("http://0.0.0.0:8000/predict", data=data, headers=header)

If it still doesn't work, don't worry and keep going: Blue Prism sends the request body in a slightly different way.)

#The final response contains the predicted gender:
$ Python> resp.json()

'[{"0":"female"}]'
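Because the server jsonifies the string produced by pre_df.to_json(orient="records"), the response body is itself a JSON string; to recover the bare label in Python, parse it once more:

$ Python> json.loads(resp.json())[0]['0']

'female'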


Calling the API in Blue Prism

Once the server is started (remember to start the server inside the virtual environment, i.e. after $ source activate <env-name>):

$ gunicorn --bind 0.0.0.0:8000 server:app
  1. Find the server's IP address and note it down for later use:

  2. $ ifconfig

  1. Create an object in Blue Prism

  2. Add a collection with fields named 0, 1, and 2

  3. Add the action VBO: Utility JSON: Collection to JSON (to convert the collection to a JSON object so it can be sent via an HTTP request)

  4. It will convert the collection to JSON, e.g. '[{"0":153,"1":50,"2":37}]' (0 as height, 1 as weight, 2 as shoe size)

  5. Add another action for sending an HTTP request to the server and configure it as follows:

  1. Use the IP address of the server in the URL with port 8000, appended with /predict

  2. Provide the input data as a JSON object in the body field

  3. Store the output of the HTTP request to capture the server's response, which contains the actual prediction value

  4. Convert the response from JSON to a collection in order to read the classified gender (instead, I've used a decision stage)

  5. In the end, the action chains these steps together; a Python equivalent of the HTTP request is sketched below
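For reference, the HTTP Request action configured above is roughly equivalent to this Python sketch (the server IP is a placeholder; the raw JSON body mirrors what the Collection to JSON action produces):

import requests

header = {'Content-Type': 'application/json', 'Accept': 'application/json'}
body = '[{"0":153,"1":50,"2":37}]'  # output of the Collection to JSON action

resp = requests.post("http://<server-ip>:8000/predict", data=body, headers=header)
print(resp.json())  # e.g. '[{"0":"female"}]'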

Conclusion

For a small-scale machine learning model that doesn't require massive processing power (multiple GPUs), building a local API server is a strong alternative. This guide can assist you from prototyping an intelligent automation process all the way to making it a fully functional, production-ready business process. A few cloud services are also available for deploying customized machine learning models; however, a local API server is cheaper and faster (in terms of network latency) than the intelligent cloud services integrated by Blue Prism's other TAP providers.
