Automated monitoring of your machine learning models with Amazon SageMaker Model Monitor and sending predictions to human review workflows using Amazon A2I

When machine learning (ML) is deployed in production, monitoring the model is important for maintaining the quality of predictions. Although the statistical properties of the training data are known in advance, real-life data can gradually deviate over time and impact the prediction results of your model, a phenomenon known as data drift. Detecting these conditions in production can be challenging and time-consuming, and requires a system that captures incoming real-time data, performs statistical analyses, defines rules to detect drift, and sends alerts for rule violations. Furthermore, the process must be repeated for every new iteration of the model.

Amazon SageMaker Model Monitor enables you to continuously monitor ML models in production. You can set alerts to detect deviations in the model quality and take corrective actions, such as retraining models, auditing upstream systems, or fixing data quality issues. You can use insights from Model Monitor to proactively determine model prediction variance due to data drift and then use Amazon Augmented AI (Amazon A2I), a fully managed feature in Amazon SageMaker, to send ML inferences to human workflows for review. You can use Amazon A2I for multiple purposes, such as:

  • Reviewing results below a threshold
  • Human oversight and audit use cases
  • Augmenting AI and ML results as required

In this post, we show how to set up an ML workflow on Amazon SageMaker to train an XGBoost algorithm for breast cancer predictions. We deploy the model on a real-time inference endpoint, launch a model monitoring schedule, evaluate monitoring results, and trigger a human review loop for below-threshold predictions. We then show how the human loop workers review and update the predictions.

We walk you through the following steps using this accompanying Jupyter notebook:

  1. Preprocess your input dataset.
  2. Train an XGBoost model and deploy to a real-time endpoint.
  3. Generate baselines and start Model Monitor.
  4. Review the model monitor reports and derive insights.
  5. Set up a human review loop for low-confidence detection using Amazon A2I.

Prerequisites

Before getting started, you need to create your human workforce and set up your Amazon SageMaker Studio notebook.

Creating your human workforce

For this post, you create a private work team and add only one user (you) to it. For instructions, see Create an Amazon Cognito Workforce Using the Labeling Workforces Page.

Enter your email in the email addresses box for workers. To invite your colleagues to participate in reviewing tasks, include their email addresses in this box.

After you create your private team, you receive an email from no-reply@verificationemail.com that contains your workforce username, password, and a link that you can use to log in to the worker portal. Enter the username and password you received in the email to log in. You must then create a new, non-default password. This is your private worker’s interface.

When you create an Amazon A2I human review task using your private team (explained in the Starting a human loop section), your task should appear in the Jobs section. See the following screenshot.

After you create your private workforce, you can view it on the Labeling workforces page, on the Private tab.

Setting up your Amazon SageMaker Studio notebook

To set up your notebook, complete the following steps:

  1. Onboard to Amazon SageMaker Studio with the quick start procedure.
  2. When you create an AWS Identity and Access Management (IAM) role for the notebook instance, be sure to specify access to Amazon Simple Storage Service (Amazon S3). You can choose Any S3 Bucket or specify the S3 bucket you want to enable access to. You can use the AWS-managed policies AmazonSageMakerFullAccess and AmazonAugmentedAIFullAccess to grant general access to these two services.
  3. When the user is created and active, choose Open Studio.
  4. On the Studio landing page, from the File drop-down menu, choose New.
  5. Choose Terminal.
  6. In the terminal, enter the following code:

git clone https://github.com/aws-samples/amazon-a2i-sample-jupyter-notebooks

  7. Open the notebook by choosing Amazon-A2I-with-Amazon-SageMaker-Model-Monitor.ipynb in the amazon-a2i-sample-jupyter-notebooks folder.

Preprocessing your input dataset

You can follow the steps in this post using the accompanying Jupyter notebook. Make sure you provide an S3 bucket and a prefix of your choice. We then import the Python data science libraries and the Amazon SageMaker Python SDK that we need to run through our use case.

Loading the dataset

For this post, we use a dataset for breast cancer predictions from the UCI Machine Learning Repository. Please refer to the accompanying Jupyter notebook for the code to load and split this dataset. Based on the input features, we first train a model to detect a benign (label=0) or malignant (label=1) condition.

The following screenshot shows some of the rows in the training dataset.

Training and deploying an Amazon SageMaker XGBoost model

XGBoost (eXtreme Gradient Boosting) is a popular and efficient open-source implementation of the gradient boosted trees algorithm. For our use case, we use the binary:logistic objective. The model applies logistic regression for binary classification (in this example, whether a condition is benign or malignant). The output is a probability that represents the log likelihood of the Bernoulli distribution.

With Amazon SageMaker, you can use XGBoost as a built-in algorithm or framework. For this use case, we use the built-in algorithm. To specify the Amazon Elastic Container Registry (Amazon ECR) container location for Amazon SageMaker implementation of XGBoost, enter the following code:

from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost', '1.0-1')

Creating the XGBoost estimator

We use the XGBoost container to construct an estimator using the Amazon SageMaker Estimator API and initiate a training job (the full walkthrough is available in the accompanying Jupyter notebook):

sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(
    container,
    role,
    train_instance_count=1,
    train_instance_type='ml.m5.2xlarge',
    output_path='s3://{}/{}/output'.format(bucket, prefix),
    sagemaker_session=sess
)

Specifying hyperparameters and starting training

We can now specify the hyperparameters for our training. You set hyperparameters to facilitate the estimation of model parameters from data. See the following code:

xgb.set_hyperparameters(
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.8,
    silent=0,
    objective='binary:logistic',
    num_round=100
)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})

For more information, see XGBoost Parameters.

Deploying the XGBoost model

We deploy a model that’s hosted behind a real-time inference endpoint. As a prerequisite, we set up a data_capture_config that is passed to the deploy call, which enables Amazon SageMaker to collect the endpoint’s inference requests and responses for use in Model Monitor. For more information, see the accompanying notebook.

The deploy function returns a Predictor object that you can use for inference:

xgb_predictor = xgb.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.2xlarge',
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config
)

Invoking the deployed model using the endpoint

You can now send data to this endpoint to get inferences in real time. The request and response payload, along with some additional metadata, is saved in the Amazon S3 location that you specified in DataCaptureConfig. You can follow the steps in the walkthrough notebook.

The following JSON code is an example of an inference request and response captured:
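The capture file itself isn’t reproduced here, so the following is an illustrative sketch of the JSON Lines record that data capture writes; field values are made up and abbreviated:

{
  "captureData": {
    "endpointInput": {
      "observedContentType": "text/csv",
      "mode": "INPUT",
      "data": "12.47,18.6,81.09,481.9,...",
      "encoding": "CSV"
    },
    "endpointOutput": {
      "observedContentType": "text/csv; charset=utf-8",
      "mode": "OUTPUT",
      "data": "0.02954",
      "encoding": "CSV"
    }
  },
  "eventMetadata": {
    "eventId": "...",
    "inferenceTime": "2020-08-07T00:00:00Z"
  },
  "eventVersion": "0"
}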

Starting Amazon SageMaker Model Monitor

Amazon SageMaker Model Monitor continuously monitors the quality of ML models in production. To start using Model Monitor, we create a baseline, inspect the baseline job results, and create a monitoring schedule.

Creating a baseline

The baseline calculations of statistics and constraints are needed as a standard against which data drift and other data quality issues can be detected. The training dataset is usually a good baseline dataset. The training dataset schema and the inference dataset schema should match (the number and order of the features). From the training dataset, you can ask Amazon SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data. To create the baseline, you can follow the detailed steps in the walkthrough notebook. See the following code:

# Start the baseline job
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

my_default_monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type='ml.m5.4xlarge',
    volume_size_in_gb=100,
    max_runtime_in_seconds=3600,
)

my_default_monitor.suggest_baseline(
    baseline_dataset=baseline_data_uri+'/train.csv',
    dataset_format=DatasetFormat.csv(header=False), # train.csv does not have a header row
    output_s3_uri=baseline_results_uri,
    wait=True
)

Inspecting baseline job results

When the baseline job is complete, we can inspect the results. Two files are generated:

  • statistics.json – This file is expected to have columnar statistics for each feature in the dataset that is analyzed. For the schema of this file, see Schema for Statistics.
  • constraints.json – This file is expected to have the constraints on the features observed. For the schema of this file, see Schema for Constraints.

Model Monitor computes per column/feature statistics. In the following screenshot, c0 and c1 in the name column refer to columns in the training dataset without the header row.

The constraints file is used to express the constraints that a dataset must satisfy. See the following screenshot.

Next we review the monitoring configuration in the constraints.json file:

  • datatype_check_threshold – During the baseline step, the generated constraints suggest the inferred data type for each column. You can tune the monitoring_config.datatype_check_threshold parameter to adjust the threshold for when it’s flagged as a violation.
  • domain_content_threshold – If there are more unknown values for a String field in the current dataset than in the baseline dataset, you can use this threshold to dictate if it needs to be flagged as a violation.
  • comparison_threshold – This value is used to calculate model drift.

For more information about constraints, see Schema for Constraints.
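As a sketch of where these knobs live, the monitoring_config block inside constraints.json looks roughly like the following (values shown are illustrative defaults, not the exact output of this walkthrough):

{
  "monitoring_config": {
    "evaluate_constraints": "Enabled",
    "emit_metrics": "Enabled",
    "datatype_check_threshold": 1.0,
    "domain_content_threshold": 1.0,
    "distribution_constraints": {
      "comparison_threshold": 0.1,
      "comparison_method": "Robust"
    }
  }
}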

Creating a monitoring schedule

With a monitoring schedule, Amazon SageMaker can start processing jobs at a specified frequency to analyze the data collected during a given period. Amazon SageMaker compares the dataset for the current analysis with the baseline statistics and constraints provided and generates a violations report. To create an hourly monitoring schedule, enter the following code:

from sagemaker.model_monitor import CronExpressionGenerator
from time import gmtime, strftime

mon_schedule_name = 'xgb-breast-cancer-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
my_default_monitor.create_monitoring_schedule(
    monitor_schedule_name=mon_schedule_name,
    endpoint_input=xgb_predictor.endpoint,
    output_s3_uri=s3_report_path,
    statistics=my_default_monitor.baseline_statistics(),
    constraints=my_default_monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)

We then invoke the endpoint continuously to generate traffic for the model monitor to pick up. Because we set up an hourly schedule, we need to wait at least an hour for traffic to be detected.
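A minimal sketch of such a traffic generator follows, assuming the predictor is configured for CSV input and that test_data holds label-first rows as in the notebook:

from time import sleep
from sagemaker.predictor import csv_serializer

xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer

# Send each test row (label column dropped) to the endpoint so that
# data capture records traffic for the monitor to analyze
for _, row in test_data.iterrows():
    payload = ','.join(map(str, row.values[1:]))
    xgb_predictor.predict(payload)
    sleep(1)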

Reviewing model monitoring

The violations file is generated as the output of a MonitoringExecution, which lists the results of evaluating the constraints (specified in the constraints.json file) against the current dataset that was analyzed. For more information about violation checks, see Schema for Violations. For our use case, the model monitor detects a data type mismatch violation in one of the requests sent to the endpoint. See the following screenshot.
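The violations report itself is a short JSON document; a representative entry for this kind of mismatch looks roughly like the following (feature name and description are illustrative):

{
  "violations": [
    {
      "feature_name": "c1",
      "constraint_check_type": "data_type_check",
      "description": "Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: ..."
    }
  ]
}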

For more details, see the walkthrough notebook.

Evaluating the results

To determine the next steps for our experiment, we should consider the following two perspectives:

  • Model Monitor violations: We only saw the datatype_check violation from the Model Monitor; we didn’t see a model drift violation. In our use case, Model Monitor uses the robust comparison method based on the two-sample K-S test to quantify the distance between the empirical distribution of our test dataset and the cumulative distribution of the baseline dataset. This distance didn’t exceed the value set for the comparison_threshold. The prediction results are aligned with the results in the training dataset.
  • Probability distribution of prediction results: We used a test dataset of 114 requests. Of these, the model predicts 60% of the requests as malignant (over 90% probability in the prediction output), 30% as benign (under 10% probability), and the remaining 10% as indeterminate. The following chart summarizes these findings.

As a next step, you need to send the prediction results that are distributed with output probabilities of over 10% and less than 90% (because the model can’t predict with sufficient confidence) to a domain expert who can look at the model results and identify if the tumor is benign or malignant. You use Amazon A2I to set up a human review workflow and define conditions for activating the review loop.

Starting the human review workflow

To configure your human review workflow, you complete the following high-level steps:

  1. Create the human task UI.
  2. Create the workflow definition.
  3. Set the trigger conditions to activate the human loop.
  4. Start your human loop.
  5. Check that the human loop tasks are complete.

Creating the human task UI

The following example code shows how to create a human task UI resource by supplying a UI template in Liquid HTML. This template is rendered for the human workers whenever a human loop is required. You can follow the complete steps using the accompanying Jupyter notebook. After the template is defined, set up the UI task function and run it.

def create_task_ui():
    '''
    Creates a Human Task UI resource.

    Returns:
    struct: HumanTaskUiArn
    '''
    response = sagemaker_client.create_human_task_ui(
        HumanTaskUiName=taskUIName,
        UiTemplate={'Content': template})
    return response

# Create task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)

Creating the workflow definition

We create the flow definition to specify the following:

  • The workforce that your tasks are sent to.
  • The instructions that your workforce receives. This is specified using a worker task template.
  • Where your output data is stored.

See the following code:

create_workflow_definition_response = sagemaker_client.create_flow_definition(
    FlowDefinitionName= flowDefinitionName,
    RoleArn= role,
    HumanLoopConfig= {
        "WorkteamArn": WORKTEAM_ARN,
        "HumanTaskUiArn": humanTaskUiArn,
        "TaskCount": 1,
        "TaskDescription": "Review the model predictions and determine if you agree or disagree. Assign a label of 1 to indicate a malignant result or 0 to indicate a benign result based on your review of the inference request",
        "TaskTitle": "Using Model Monitor and A2I Demo"
    },
    OutputConfig={
        "S3OutputPath" : OUTPUT_PATH
    }
)
flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # save this ARN for future use

Setting trigger conditions for human loop activation

We need to send the prediction results that are distributed with output probabilities of over 10% and under 90% (because the model can’t predict with sufficient confidence in this range). We use this as our activation condition, as shown in the following code:

# assign our original test dataset
model_data_categorical = test_data[list(test_data.columns)[1:]]

LOWER_THRESHOLD = 0.1
UPPER_THRESHOLD = 0.9
small_payload_df = model_data_categorical.head(len(predictions))
small_payload_df['prediction_prob'] = predictions
small_payload_df_res = small_payload_df.loc[
    (small_payload_df['prediction_prob'] > LOWER_THRESHOLD) &
    (small_payload_df['prediction_prob'] < UPPER_THRESHOLD)
]
print(small_payload_df_res.shape)
small_payload_df_res.head(10)

Starting a human loop

A human loop starts your human review workflow and sends data review tasks to human workers. See the following code:

# Activate human loops
import json
import uuid

humanLoopName = str(uuid.uuid4())
start_loop_response = a2i.start_human_loop(
    HumanLoopName=humanLoopName,
    FlowDefinitionArn=flowDefinitionArn,
    HumanLoopInput={
        # ip_content is built from the selected low-confidence rows
        # (see the accompanying notebook)
        "InputContent": json.dumps(ip_content)
    }
)

The workers in this use case are domain experts who can validate the request features and determine whether the result is malignant or benign. The task requires reviewing the model predictions, agreeing or disagreeing, and updating the prediction as 1 for malignant or 0 for benign. The following screenshot shows a sample of tasks received.

The following screenshot shows updated predictions.

For more information about task UI design for tabular datasets, see Using Amazon SageMaker with Amazon Augmented AI for human review of Tabular data and ML predictions.

Checking the status of task completion and human loop

To check the status of the task and the human loop, enter the following code:

completed_human_loops = []
resp = a2i.describe_human_loop(HumanLoopName=humanLoopName)
print(f'HumanLoop Name: {humanLoopName}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')

if resp["HumanLoopStatus"] == "Completed":
    completed_human_loops.append(resp)

When the human loop tasks are complete, we inspect the results of the review and the corrections made to prediction results.
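A minimal sketch of pulling those results back from Amazon S3 follows, assuming the loop shown earlier has completed; the humanAnswers field holds each worker’s corrected label:

import re
import json
import boto3

s3 = boto3.client('s3')
for resp in completed_human_loops:
    # describe_human_loop returns the S3 URI of the output document
    output_uri = resp['HumanLoopOutput']['OutputS3Uri']
    bucket_name, key = re.match(r's3://([^/]+)/(.+)', output_uri).groups()
    obj = s3.get_object(Bucket=bucket_name, Key=key)
    result = json.loads(obj['Body'].read())
    print(json.dumps(result['humanAnswers'], indent=2))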

You can use the human-labeled output to augment the training dataset for retraining. This keeps the distribution variance within the threshold and prevents data drift, thereby improving model accuracy. For more information about using Amazon A2I outputs for model retraining, see Object detection and model retraining with Amazon SageMaker and Amazon Augmented AI.

Cleaning up

To avoid incurring unnecessary charges, delete the resources used in this walkthrough when not in use, including the following:

  • The model monitoring schedule
  • The real-time inference endpoint
  • The Amazon A2I human review workflow (flow definition) and the private work team
  • The objects stored in your S3 bucket during the walkthrough

Conclusion

This post demonstrated how you can use Amazon SageMaker Model Monitor and Amazon A2I to set up a monitoring schedule for your Amazon SageMaker model endpoints; specify baselines that include constraint thresholds; observe inference traffic; derive insights such as model drift, completeness, and data type violations; and send the low-confidence predictions to a human workflow with labelers to review and update the results. For video presentations, sample Jupyter notebooks, and more information about use cases like document processing, content moderation, sentiment analysis, object detection, text translation, and more, see Amazon A2I Resources.

References

[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.


About the Authors

Prem Ranga is an Enterprise Solutions Architect at AWS based out of Houston, Texas. He is part of the Machine Learning Technical Field Community and loves working with customers on their ML and AI journey. Prem is passionate about robotics, is an Autonomous Vehicles researcher, and also built the Alexa-controlled Beer Pours in Houston and other locations.

Jasper Huang is a Technical Writer Intern at AWS and a student at the University of Pennsylvania pursuing a BS and MS in computer science. His interests include cloud computing, machine learning, and how these technologies can be leveraged to solve interesting and complex problems. Outside of work, you can find Jasper playing tennis, hiking, or reading about emerging trends.

Talia Chopra is a Technical Writer in AWS specializing in machine learning and artificial intelligence. She works with multiple teams in AWS to create technical documentation and tutorials for customers using Amazon SageMaker, MxNet, and AutoGluon. In her free time, she enjoys meditating, studying machine learning, and taking walks in nature.

Source: https://aws.amazon.com/blogs/machine-learning/automated-monitoring-of-your-machine-learning-models-with-amazon-sagemaker-model-monitor-and-sending-predictions-to-human-review-workflows-using-amazon-a2i/

5 Work From Home Office Essentials

Working remotely from home was already growing in popularity, but the pandemic has made it a necessity for many professionals.

“Some companies are eager to reopen their doors and return to the office, but a large number of employers and employees are making the transitional work environment a permanent change.”

They can’t guarantee employees’ health and safety in a socially crowded space; plus, companies save a ton of money they would otherwise have spent on commercial lease or mortgage payments.

That’s not to say that working from home doesn’t come with its own costs, however. It can lead to a huge hit in productivity without the right equipment in place. To maximize your performance and efficiency in a remote setting, be sure to purchase these five office essentials.

1. Powerful PC

This one probably feels like an obvious pointer, but let’s knock it off our list. You won’t be able to get by with a makeshift workstation; in today’s digital domain, your computer is at the core of everything you do.

Never-ending loading wheels, delayed downloads, and slow rendering add seconds to every task you do, so if your company didn’t provide you with a workhorse computer tower, you might look into investing in one yourself, then deduct the cost on your tax return.

Depending on your line of work, it might make more sense to go for a laptop versus a desktop computer. Unless your tasks demand sophisticated software and large storage space, you can probably get by with a portable PC. That way, when coffee shops begin to reopen and allow patrons to sit inside, you can work on the go without feeling tethered to your desk.

2. Ergonomic Office Chair

If you’re looking at a long-term remote situation, it’s worth spending the big bucks on an ergonomic office chair. You should feel comfortably locked into your seat for eight hours a day—at least if you want to concentrate on your workflow, rather than the cramp in your back.

Shop around for an office chair that’s sophisticated in design and specifically built to hold the human body. Some stand-out features you should look out for include:

  • Targeted support around the lumbar spine
  • Adjustable height so you can position the seat for your arms to rest naturally on the keyboard
  • Swivel base to effortlessly turn your body, preventing neck strain
  • Cushioned seat to comfort your tailbone
  • Ventilated fabric that promotes airflow so you don’t feel overheated when sitting in the chair for several hours

You might have to pay a couple hundred dollars for best-of-the-line features, but this is another item that might qualify as an eligible tax deduction; just be sure to keep all your receipts organized with a document scanner in case the IRS raises its eyebrows and issues an audit.

3. Wireless Keyboard

If you want to type faster and feel better while you’re at it, then a wireless keyboard is clutch. It lets you bring the keys closer, decreasing how far your arms extend and the accompanying shoulder strain.

“It also helps reduce the strain on your eyes by moving the bright screen farther away from your direct line of sight.” 

And, last but not least, the keys are placed in an ergonomic position for a more natural finger splay, with ample wrist cushioning that helps prevent overuse injuries such as carpal tunnel syndrome.

4. Noise-cancelling Headphones

To truly get in the zone, you should block out distractions with headphones that cancel the noise in your environment, especially if your workstation is set up in a common area. Other tips to stay focused include installing a website blocker and leaving your cellphone on the other side of the room.

5. House Plant or Flowers

Research suggests that people are more productive when working near fresh flowers or lush greenery. The good news is that you don’t need a green thumb or natural lighting to achieve this effect; even artificial foliage can brighten your mood and improve your performance.

Working from home sometimes can feel like you’re locked inside all day, so bringing the outside world inside your space can help ward off burnout.

Take these tips with you into 2021 and set yourself up for success in your new home office setting.

Source: https://www.aiiottalk.com/business/work-from-home-office-essentials/

zomato digitizes menus using Amazon Textract and Amazon SageMaker

This post is co-written by Chiranjeev Ghai, ML Engineer at zomato. zomato is a global food-tech company based in India.

Are you the kind of person who has very specific cravings? Maybe when the mood hits, you don’t want just any kind of Indian food—you want Chicken Chettinad with a side of paratha, and nothing else will hit the spot! To help picky eaters satisfy their cravings, we at zomato have recently added enhanced search engine capabilities to our restaurant aggregation and food delivery platform. These capabilities enable us to recommend restaurants to zomato users based on searches for specific dishes.

We power this functionality with machine learning (ML), using it to extract and structure text data from menu images. To develop this menu digitization technology, we partnered with Amazon ML Solutions Lab to explore the capabilities of the AWS ML Stack. This post summarizes how we used Amazon Textract and Amazon SageMaker to develop a customized menu digitization solution.

Extracting raw text from menus with Amazon Textract

The first component of this solution was to accurately extract all the text in the menu image. This process is known as optical character recognition (OCR). For our use case, we experimented with both in-house and commercial OCR solutions.

We first created an in-house OCR solution by stacking a pre-trained text detection model and a pre-trained text recognition model. The challenge with these models was that they were trained on a standard text dataset that didn’t match the eclectic fonts found in restaurant menus. To improve system performance, we fine-tuned these models by generating a dataset of 1.5 million synthetic text images that were more representative of text in menus.

After evaluating our in-house solution and several commercial OCR solutions, we found that Amazon Textract offers the best text recognition precision and recall. Restaurants often get creative when designing their menus, so OCR robustness was crucial for this use case. Amazon Textract particularly differentiated itself when processing menus with unique fonts, background images, and low image resolutions. Using it is as simple as making an API call:

#Python 3.6
import boto3

textract_client = boto3.client(
    'textract',
    region_name = '' #insert the AWS region you're working in
)

textract_response = textract_client.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': '', #insert the name of the S3 bucket containing your image
            'Name': '' #insert the S3 key of your image
        }
    }
)

print(textract_response)

The following code is the Amazon Textract output for a sample image:

{'DocumentMetadata': {'Pages': 1},
 'Blocks': [{'BlockType': 'PAGE',
   'Geometry': {'BoundingBox': {'Width': 1.0, 'Height': 1.0, 'Left': 0.0, 'Top': 0.0},
   ...
  {'BlockType': 'WORD',
   'Text': 'Dim',
   'Geometry': {'BoundingBox': {'Width': 0.10242128372192383,
     'Height': 0.048968635499477386,
     'Left': 0.24052166938781738,
     'Top': 0.02556285448372364},
 ...

The raw outputs are visualized by overlaying them on top of the image. The following image visualizes the preceding raw output. The black boxes are the text-detection bounding boxes provided by Amazon Textract. Extracted text is displayed on the right. Note the unconventional fonts, colors, and images on this menu.

The following image visualizes Amazon Textract outputs for a menu with a different design. Black boxes are the text-detection bounding boxes provided by Amazon Textract. Extracted text is displayed on the right. Again, this menu has unconventional fonts, colors, and images.

Using Amazon SageMaker to build a menu structure detector

The next component of this solution was to group the detections from Amazon Textract by menu section. This enabled our search engine to distinguish between entrees, desserts, beverages, and so on. We framed this as a computer vision problem—object detection, to be precise—and used Amazon SageMaker Ground Truth to collect training data. Ground Truth accelerated this process by providing a fully managed annotation tool that we customized to ask human annotators to draw bounding boxes around every menu section in the image. We used an annotation workforce from AWS Marketplace because this was a niche labeling task, and public labelers from Amazon Mechanical Turk didn’t perform well. With Ground Truth, it took just a few days and approximately $1,400 to label 4,086 images with triplicate redundancy.

With labeled data in hand, we faced a paradox of choice when selecting model-building approaches because object detection is such a thoroughly studied problem. Our choices included:

  • Removing low-confidence labels from the labeled dataset – Because even human annotators can make mistakes, Ground Truth calculates confidence scores for labels by having multiple annotators (for this use case, three) label the same image. Setting a higher confidence threshold for labels can decrease the noise in the training data at the expense of having less training data.
  • Data augmentation – Techniques for image data augmentation include horizontal flipping, cropping, shearing, and rotation. Data augmentation can make models more robust by increasing the amount of training data. However, excessive data augmentation may result in poor model convergence.
  • Feature engineering – From our experience in applying computer vision to processing menus, we had a variety of techniques in mind to emphasize or de-emphasize various aspects of the input images. For example, see the following images.

The following is the original image of a menu.

The following image shows the redacted image (overlay white boxes on a black background where text detections were found).

The following is a text cropped image. On a black background, the image has overlay crops from the original image where text detections were found.

The following is a single channel and text cropped image. The image is encoded as a single RGB channel (for this image, green). You can apply this with other transformations, in this case text cropping.

 

We also had the following additional model-building methods to choose from:

  • Model architectures like YOLO, SSD, and RCNN, with VGG or ResNet backbones – Each architecture has different trade-offs of model accuracy, inference time, model size, and more. For this use case, model accuracy was the most important metric because menu images were batch processed.
  • Using a model pre-trained on a general object detection task or starting from scratch – Transfer learning can be helpful when training complex models on small datasets. However, the task of detecting menu sections is very different from a general object detection task (for example, PASCAL VOC), so the pre-training may not be relevant.
  • Optimizer parameters – These include learning rate, momentum, regularization coefficients, and early stopping configuration.

With so many hyperparameters to consider, we turned to the automatic tuning feature of Amazon SageMaker to coordinate a massive tuning job across all these variables. The following code is an example of tuning a single model architecture and input data configuration:

import sagemaker
import boto3
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, CategoricalParameter, ContinuousParameter
import itertools
from time import sleep

#set to the region you're working in
REGION_NAME = ''
#set a S3 path for SageMaker to store the outputs of the training jobs
S3_OUTPUT_PATH = ''
#set a S3 location for your training dataset,
#assumed to be an augmented manifest file
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-manifest.html
TRAIN_DATA_LOCATION = ''
#set a S3 location for your validation data,
#assumed to be an augmented manifest file
VAL_DATA_LOCATION = ''
#specify which fields in the augmented manifest file are relevant for training
DATA_ATTRIBUTE_NAMES = [,]
#specify image shape
IMAGE_SHAPE = 
#specify label width
LABEL_WIDTH = 
#specify number of samples in the training dataset
NUM_TRAINING_SAMPLES = 

sgm_role = sagemaker.get_execution_role()
boto_session = boto3.session.Session(
    region_name = REGION_NAME
)
sgm_session = sagemaker.Session(
    boto_session = boto_session
)
training_image = get_image_uri(
    region_name = REGION_NAME,
    repo_name = 'object-detection',
    repo_version = 'latest'
)

#set training job configuration
object_detection_estimator = Estimator(
    image_name = training_image,
    role = sgm_role,
    train_instance_count = 1,
    train_instance_type = 'ml.p3.2xlarge',
    train_volume_size = 50,
    train_max_run = 360000,
    input_mode = 'Pipe',
    output_path = S3_OUTPUT_PATH,
    sagemaker_session = sgm_session
)

#set input data configuration
train_data = sagemaker.session.s3_input(
    s3_data = TRAIN_DATA_LOCATION,
    distribution = 'FullyReplicated',
    record_wrapping = 'RecordIO',
    s3_data_type = 'AugmentedManifestFile',
    attribute_names = DATA_ATTRIBUTE_NAMES
)
val_data = sagemaker.session.s3_input(
    s3_data = VAL_DATA_LOCATION,
    distribution = 'FullyReplicated',
    record_wrapping = 'RecordIO',
    s3_data_type = 'AugmentedManifestFile',
    attribute_names = DATA_ATTRIBUTE_NAMES
)
data_channels = {
    'train': train_data,
    'validation' : val_data
}

#set static hyperparameters
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-api-config.html
static_hyperparameters = {
    'num_classes' : 1,
    'epochs' : 100,
    'lr_scheduler_step' : '15,30',
    'lr_scheduler_factor' : 0.1,
    'overlap_threshold' : 0.5,
    'nms_threshold' : 0.45,
    'image_shape' : IMAGE_SHAPE,
    'label_width' : LABEL_WIDTH,
    'num_training_samples' : NUM_TRAINING_SAMPLES,
    'early_stopping' : True,
    'early_stopping_min_epochs' : 5,
    'early_stopping_patience' : 1,
    'early_stopping_tolerance' : 0.05,
}

#set ranges for tunable hyperparameters
hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(
        min_value = 1e-5,
        max_value = 1e-2,
        scaling_type = 'Auto'
    ),
    'mini_batch_size': IntegerParameter(
        min_value = 8,
        max_value = 64,
        scaling_type = 'Auto'
    )
}

#Not all hyperparameters are feasible to tune directly
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-tuning.html
#For these we run model tuning jobs in parallel using a for loop
#We take this approach for tuning over different model architectures
#and different feature engineering configurations
use_pretrained_options = [0, 1]
base_network_options = ['resnet-50', 'vgg-16']

for use_pretrained, base_network in itertools.product(use_pretrained_options, base_network_options):
    static_hyperparameter_configuration = {
        **static_hyperparameters,
        'use_pretrained_model' : use_pretrained,
        'base_network' : base_network
    }
    object_detection_estimator.set_hyperparameters(
        **static_hyperparameter_configuration
    )
    tuner = HyperparameterTuner(
        estimator = object_detection_estimator,
        objective_metric_name = 'validation:mAP',
        strategy = 'Bayesian',
        hyperparameter_ranges = hyperparameter_ranges,
        max_jobs = 24,
        max_parallel_jobs = 2,
        early_stopping_type = 'Auto',
    )
    tuner.fit(
        inputs = data_channels
    )
    print(f'Started tuning job: {tuner.latest_tuning_job.name}')
    #wait a bit before starting next job so auto generated names don't conflict
    sleep(60)

This code uses version 1.72.0 of the Amazon SageMaker Python SDK, which is the default version installed in Amazon SageMaker notebook instances. Version 2.X introduces breaking changes. For more information, see Use Version 2.x of the SageMaker Python SDK.

We used powerful GPU hardware (p3.2xlarge instances), and it took us just 1 week and approximately $1,500 to explore 455 unique parameter configurations. Of these configurations, Amazon SageMaker found that a fine-tuned Faster R-CNN model with text cropping performed the best, with a mean average precision score of 0.93. This aligned with results from our prior work in this space, which found that two-stage detectors generally outperform single-stage detectors in processing menus.

The following is an example of how the object detection model processed a menu. In this image, the purple boxes are the predicted bounding boxes from the menu section detection model. Black boxes are the text detection bounding boxes provided by Amazon Textract.

Using Amazon SageMaker to build rule- and ML-based text classifiers

The final component in the solution was a layer of text classification. To enable our enhanced search functionality, we had to know if each detection within a menu section was the menu section title, name of a dish, price of a dish, or something else (such as a description of a dish or the name of the restaurant). To this end, we developed a hybrid rule- and ML-based text classification system.

The first step of the classification was to use a rule to determine if a detection was a price or not. This rule simply calculated the proportion of numeric characters in the detection. If the proportion was greater than 40%, the detection was classified as a price. Although simple, this classifier worked well in practice. We used Amazon SageMaker notebook instances as a convenient interactive environment to develop this and other rules.
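A minimal sketch of that rule follows (the function name and threshold parameterization are ours, not zomato’s production code):

def is_price(text, threshold=0.4):
    '''Classify a detection as a price if more than `threshold`
    of its characters are numeric.'''
    if not text:
        return False
    numeric_chars = sum(c.isdigit() for c in text)
    return numeric_chars / len(text) > threshold

assert is_price('140')                 # all digits -> price
assert not is_price('Shrimp Masala')   # no digits -> not a price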

After the prices were filtered out, the remaining detections were classified as dish or not dish. From our experience in processing menus, we intuitively knew that in many cases, the location of prices was sufficient to do this classification. For these menus, dishes and prices are listed side by side, so simply classifying detections located to the left of prices as dishes worked well.
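The following is a sketch of that location rule, assuming each detection carries its Textract bounding box (the helper names and the row tolerance are illustrative):

def classify_by_price_location(detections, prices, y_tol=0.02):
    '''Label a detection as a dish if a price sits on roughly the
    same row, to its right.'''
    dishes = []
    for det in detections:
        box = det['Geometry']['BoundingBox']
        for price in prices:
            pbox = price['Geometry']['BoundingBox']
            same_row = abs(pbox['Top'] - box['Top']) < y_tol
            to_the_right = pbox['Left'] > box['Left'] + box['Width']
            if same_row and to_the_right:
                dishes.append(det)
                break
    return dishes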

The following example shows how the rules-based text classification system processed a menu. Green boxes are detections classified as dishes (by the price location rule). Red boxes are detections classified as not dishes (by the price location rule). Blue boxes are detections classified as prices. Final dish detections are on the right.

Some menus might include lengthy dish descriptions or may not list prices next to individual dishes. These menus violate the assumptions of the price location rules, so we turned to model-based text classification. We used Amazon SageMaker training jobs to experiment with many modeling approaches in parallel, including an XGBoost model trained on hashed word count vectors. In the end, we found that a fine-tuned BERT model from GluonNLP achieved the best performance with an AUROC score of 0.86.
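The hashed word-count baseline can be sketched in a few lines; train_texts and train_labels are hypothetical stand-ins for our labeled detections, and the production system used SageMaker training jobs rather than a local fit:

from sklearn.feature_extraction.text import HashingVectorizer
import xgboost as xgb

# Hash each detection's text into a fixed-size count vector
vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
X_train = vectorizer.transform(train_texts)

# Binary classifier: dish vs. not dish
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_train, train_labels)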

The following image is an example of how the model-based text classification system processed a menu. Green boxes are detections classified as dishes (by the BERT model). Red boxes are detections classified as not dishes (by the BERT model). Blue boxes are detections classified as prices. The final dish detections are on the right.

Of the remaining detections (those not classified as prices or dishes), a final round of classification identified menu section titles. We created features that captured the font size of the detection, the location of the detection on the menu, and the length of the words within the detection. We used these features as inputs to a logistic regression model that predicted if a detection is a menu section title or not.
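A sketch of that final classifier follows; the feature helpers are hypothetical, but they mirror the features named above (font size, location, and word length):

import numpy as np
from sklearn.linear_model import LogisticRegression

def title_features(detection):
    # Textract bounding boxes are normalized to [0, 1]
    box = detection['Geometry']['BoundingBox']
    font_size_proxy = box['Height']   # taller text ~ larger font
    vertical_position = box['Top']    # location on the menu
    words = detection['Text'].split()
    mean_word_length = np.mean([len(w) for w in words])
    return [font_size_proxy, vertical_position, mean_word_length]

# remaining_detections and is_title are hypothetical stand-ins
X = np.array([title_features(d) for d in remaining_detections])
clf = LogisticRegression().fit(X, is_title)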

Key features of Amazon SageMaker

In the end, we found that doing OCR was as simple as making an API call to Amazon Textract. However, our use case required additional customization. We selected Amazon SageMaker as an ML platform to develop this customization because it offered several key features:

  • Amazon SageMaker Notebooks made it easy to spin up Jupyter notebook environments for prototyping and testing rules and models.
  • Ground Truth helped us build and deploy a custom image annotation tool with no front-end experience required.
  • Amazon SageMaker automatic tuning enabled us to run massive hyperparameter tuning jobs on powerful hardware, and included an intuitive interface for tracking the results of hundreds of experiments. You can implement tuning jobs with early stopping conditions, which makes experimentation cost-effective.

Amazon SageMaker offers additional integration benefits from including all the preceding features in a single platform:

  • Amazon SageMaker Notebooks come pre-installed with all the dependencies needed to build models that can be optimized with automatic tuning.
  • Ground Truth offers easy access to labelers from Mechanical Turk or AWS Marketplace.
  • Automatic tuning can directly ingest the manifest files created by Amazon SageMaker Ground Truth.

Putting it all together

Our menu digitization system can extract text from images of menus, group it by menu section, extract the title of the section, extract the dishes within each section, and pair each dish with its price. The following is a visualization of the end-to-end solution.

The workflow contains the following steps:

  1. The input is an image of a menu.
  2. Amazon Textract performs OCR on the input image.
  3. An ML-based computer vision model predicts bounding boxes for menu sections in the menu image.
  4. A rules-based classifier classifies Amazon Textract detections as price or not price.
  5. A rules-based classifier (5a) attempts to use the location of price detections to classify the not price detections as dish or not dish. If this rule doesn’t successfully classify most of the detections on the page, an ML-based classifier is used instead (5b).
  6. The ML-based classifier uses hand-crafted features to classify not dish detections as menu section title or not menu section title.
  7.  The menu text is structured by combining the menu section detections and the text classification results.

The following image visualizes a sample output of the system. Green boxes are detections classified as dishes. Blue boxes are detections classified as prices. Yellow boxes are detections classified as menu section titles. Purple boxes are predicted menu section bounding boxes.

The following code is the structured output:

[ { "title":{ "text":"Shrimp Dishes" }, "dishes":[ { "text":"Shrimp Masala", "price":{ "text":"140" } }, { "text":"Shrimp Biryani", "price":{ "text":"170" } }, { "text":"Shrimp Pulav", "price":{ "text":"160" } } ] }, ...
]

Conclusion

We built a system that uses ML to digitize menus without any human input required. This system will improve user experience by powering new features such as advanced dish search and review highlight verification. Our content team will also use it to accelerate creating menus for online ordering.

To explore these capabilities of Amazon Textract and Amazon SageMaker in more depth, see Automatically extract text and structured data from documents with Amazon Textract and Amazon SageMaker Automatic Model Tuning: Using Machine Learning for Machine Learning.

The Amazon ML Solutions Lab helped us accelerate our use of ML by pairing our team with ML experts. The ML Solutions Lab brings to every customer engagement learnings from more than 20 years of Amazon’s ML innovations in areas such as fulfillment and logistics, personalization and recommendations, computer vision and translation, fraud prevention, forecasting, and supply chain optimization. To learn more about the AWS ML Solutions Lab, contact your account manager or visit Amazon Machine Learning Solutions Lab.


About the Authors

Chiranjeev Ghai is a Machine Learning Engineer. In his current role, he has been aiding automation at zomato by leveraging a wide variety of ML optimisations ranging from Image Classification, Product Recommendation, and Text Detection. When not building models, he likes to spend his time playing video games at home.

Ryan Cheng is a Deep Learning Architect in the Amazon ML Solutions Lab. He has worked on a wide range of ML use cases from sports analytics to optical character recognition. In his spare time, Ryan enjoys cooking.

Andrew Ang is a Deep Learning Architect at the Amazon ML Solutions Lab, where he helps AWS customers identify and build AI/ML solutions to address their business problems.

Vinayak Arannil is a Data Scientist at the Amazon Machine Learning Solutions Lab. He has worked on various domains of data science like computer vision, natural language processing, recommendation systems, etc.

Source: https://aws.amazon.com/blogs/machine-learning/zomato-digitizes-menus-using-amazon-textract-and-amazon-sagemaker/

How to Improve Your Supply Chain With Deep Reinforcement Learning

What has set Amazon apart from the competition in online retail? Their supply chain. In fact, this has long been one of the greatest strengths of one of their chief competitors, Walmart.

Supply chains are highly complex systems consisting of hundreds if not thousands of manufacturers and logistics carriers around the world who combine resources to create the products we use and consume every day. To track all of the inputs to a single, simple product would be staggering. Yet supply chain organizations inside vertically integrated corporations are tasked with managing inputs from raw materials, to manufacturing, warehousing, and distribution to customers. The companies that do this best cut down on waste from excess storage, to unneeded transportation costs, and lost time to get products and materials to later stages in the system. Optimizing these systems is a key component in businesses as dissimilar as Apple and Saudi Aramco.

A lot of time and effort has been put into building effective supply chain optimization models, but due to their size and complexity, they can be difficult to build and manage. With advances in machine learning, particularly reinforcement learning, we can train a machine learning model to make these decisions for us, and in many cases, do so better than traditional approaches!

TL;DR

We train a deep reinforcement learning model using Ray and or-gym to optimize a multi-echelon inventory management model and benchmark it against a derivative free optimization model using Powell’s Method.

Multi-Echelon Supply Chain

In our example, we’re going to work with a multi-echelon supply chain model with lead times. This means that we have different stages of our supply chain that we need to make decisions for, and each decision that we make at different levels are going to affect decisions downstream. In our case, we have M stages going back to the producer of our raw materials all the way to our customers. Each stage along the way has a different lead time, or time it takes for the output of one stage to arrive and become the input for the next stage in the chain. This may be 5 days, 10 days, whatever. The longer these lead times become, the earlier you need to anticipate customer orders and demand to ensure you don’t stock out or lose sales!

Inventory Management with OR-Gym

The OR-Gym library has a few multi-echelon supply chain models ready to go to simulate this structure. For this, we’ll use the InvManagement-v1 environment, which has the structure shown above, but results in lost sales if you don’t have sufficient inventory to meet customer demand.

If you haven’t already, go ahead and install the package with:

pip install or-gym

Once installed, we can set up our environment with:

import or_gym

env = or_gym.make('InvManagement-v1')

This is a four-echelon supply chain by default. The actions determine how much material to order from the echelon above at each time step. The order quantities are limited by the capacity of the supplier and their current inventory. So, if you order 150 widgets from a supplier that has a shipment capacity of 100 widgets and only 90 widgets on hand, you’re only going to get 90 sent.

Each echelon has its own costs structure, pricing, and lead times. The last echelon (Stage 3 in this case) provides raw materials, and we don’t have any inventory constraints on this stage, assuming that the mine, oil well, forest — or whatever produces your raw material inputs — is large enough that this isn’t a constraint we need to concern ourselves with.

Default parameter values for the InvManagement-v1 environment.

As with all or-gym environments, if these settings don’t suit you, simply pass an environment configuration dictionary to the make function to customize your supply chain accordingly (an example is given here).

Training with Ray

To train an agent on this environment, we’re going to leverage the Ray library to speed up training, so go ahead and import your packages.

import or_gym
from or_gym.utils import create_env
import ray
from ray.rllib import agents
from ray import tune

To get started, we’re going to need a brief registration function to ensure that Ray knows about the environment we want to run. We can register that with the register_env function shown below.

def register_env(env_name, env_config={}):
    env = create_env(env_name)
    tune.register_env(env_name,
        lambda env_name: env(env_name, env_config=env_config))

From here, we can set up our RL configuration and everything we need to train the model.

# Environment and RL Configuration Settings
env_name = 'InvManagement-v1'
env_config = {} # Change environment parameters here
rl_config = dict(
    env=env_name,
    num_workers=2,
    env_config=env_config,
    model=dict(
        vf_share_layers=False,
        fcnet_activation='elu',
        fcnet_hiddens=[256, 256]
    ),
    lr=1e-5
)

# Register environment
register_env(env_name, env_config)

The rl_config dictionary is where you can set all of the relevant hyperparameters or set your system to run on a GPU. Here, we’re just going to use 2 workers for parallelization and train a two-layer network with an ELU activation function. Additionally, if you’re going to use tune for hyperparameter tuning, you can use tools like tune.grid_search() to systematically update learning rates, change the network, or whatever you like, as sketched below.
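For example, a hypothetical sweep over learning rates with tune.run, instead of the manual training loop below, might look like this:

# Sketch: let Ray Tune drive training while sweeping the learning rate
analysis = tune.run(
    'PPO',
    config=dict(rl_config, lr=tune.grid_search([1e-5, 1e-4, 1e-3])),
    stop={'training_iteration': 500},
)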

Once you’re happy with that, go ahead and choose your algorithm and get to training! Below, I just use the PPO algorithm because I find it trains well on most environments.

# Initialize Ray and Build Agent
ray.init(ignore_reinit_error=True)
agent = agents.ppo.PPOTrainer(env=env_name, config=rl_config)

results = []
for i in range(500):
    res = agent.train()
    results.append(res)
    if (i+1) % 5 == 0:
        print('\rIter: {}\tReward: {:.2f}'.format(
            i+1, res['episode_reward_mean']), end='')
ray.shutdown()

The code above will initialize ray, then build the agent according to the configuration you specified previously. If you’re happy with that, then let it run for a bit and see how it does!

One thing to note with this environment: if the learning rate is too high, the policy function will begin to diverge such that the loss becomes astronomically large. At that point, you’ll wind up getting an error, typically stemming from Ray’s default pre-processor with state showing bizarre values because the actions being given by the network are all nan. This is easy to fix by bringing the learning rate down a bit and trying again.

Let’s take a look at the performance.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec

# Unpack values from each iteration
rewards = np.hstack([i['hist_stats']['episode_reward']
    for i in results])
pol_loss = [
    i['info']['learner']['default_policy']['policy_loss']
    for i in results]
vf_loss = [
    i['info']['learner']['default_policy']['vf_loss']
    for i in results]

p = 100
mean_rewards = np.array([np.mean(rewards[i-p:i+1])
    if i >= p else np.mean(rewards[:i+1])
    for i, _ in enumerate(rewards)])
std_rewards = np.array([np.std(rewards[i-p:i+1])
    if i >= p else np.std(rewards[:i+1])
    for i, _ in enumerate(rewards)])

fig = plt.figure(constrained_layout=True, figsize=(20, 10))
gs = fig.add_gridspec(2, 4)
ax0 = fig.add_subplot(gs[:, :-2])
ax0.fill_between(np.arange(len(mean_rewards)),
    mean_rewards - std_rewards,
    mean_rewards + std_rewards,
    label='Standard Deviation', alpha=0.3)
ax0.plot(mean_rewards, label='Mean Rewards')
ax0.set_ylabel('Rewards')
ax0.set_xlabel('Episode')
ax0.set_title('Training Rewards')
ax0.legend()

ax1 = fig.add_subplot(gs[0, 2:])
ax1.plot(pol_loss)
ax1.set_ylabel('Loss')
ax1.set_xlabel('Iteration')
ax1.set_title('Policy Loss')

ax2 = fig.add_subplot(gs[1, 2:])
ax2.plot(vf_loss)
ax2.set_ylabel('Loss')
ax2.set_xlabel('Iteration')
ax2.set_title('Value Function Loss')

plt.show()

It looks like our agent learned a decent policy!

One of the difficulties of deep reinforcement learning for these classic, operations research problems is the lack of optimality guarantees. In other words, we can look at that training curve above and see that it is learning a better and better policy — and it seems to be converging on a policy — but we don’t know how good that policy is. Could we do better? Should we invest more time (and money) into hyperparameter tuning? To answer this, we need to turn to some different methods and develop a benchmark.

Derivative Free Optimization

A good way to benchmark an RL model is with derivative free optimization (DFO). Like RL, DFO treats the system as a black-box model providing inputs and getting some feedback in return to try again as it seeks the optimal value.

Unlike RL, DFO has no concept of a state. This means that we will try to find a fixed re-order policy to bring inventory up to a certain level to balance holding costs and profit from sales. For example, if the policy at stage 0 is to re-order up to 10 widgets and we currently have 4 widgets, then the policy states we’re going to re-order 6. In the RL case, the agent would take into account the current pipeline and all of the other information that we provide in the state. So RL is more adaptive and ought to outperform a straightforward DFO implementation. If it doesn’t, then we know we need to go back to the drawing board.

While it may sound simplistic, this fixed re-order policy isn’t unusual in industrial applications, partly because real supply chains consist of many more variables and interrelated decisions than we’re modeling here. So a fixed policy is tractable and something that supply chain professionals can easily work with.

Implementing DFO

There are a lot of different algorithms and solvers out there for DFO. For our purposes, we're going to leverage SciPy's optimize library to implement Powell's Method. We won't get into the details here, but this is a way to quickly find minima of functions, and it can be used for discrete optimization problems like the one we have here.

from scipy.optimize import minimize
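Before applying it to the environment, here's a quick, self-contained illustration of Powell's Method on a toy quadratic (the function and values are purely illustrative):

# Toy example only: minimize (x - 3)^2 + (y + 1)^2 with
# Powell's Method. The minimum is at (3, -1).
toy = minimize(fun=lambda v: (v[0] - 3)**2 + (v[1] + 1)**2,
               x0=[0, 0], method='Powell')
print(toy.x)  # approximately [ 3. -1.]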

Because we’re going to be working with a fixed re-order policy, we need a quick function to translate inventory levels into actions to evaluate.

def base_stock_policy(policy, env):
    '''
    Implements a re-order up-to policy. This means that for
    each node in the network, if the inventory at that node
    falls below the level denoted by the policy, we will
    re-order inventory to bring it to the policy level.
    For example, policy at a node is 10, current inventory
    is 5: the action is to order 5 units.
    '''
    assert len(policy) == len(env.init_inv), (
        'Policy should match number of nodes in network' +
        '({}, {}).'.format(len(policy), len(env.init_inv)))

    # Get echelon inventory levels
    if env.period == 0:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period])
    else:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period] - env.B[env.period-1, :-1])

    # Get unconstrained actions
    unc_actions = policy - inv_ech
    unc_actions = np.where(unc_actions > 0, unc_actions, 0)

    # Ensure that actions can be fulfilled by checking
    # constraints
    inv_const = np.hstack([env.I[env.period, 1:], np.Inf])
    actions = np.minimum(env.c,
        np.minimum(unc_actions, inv_const))
    return actions

The base_stock_policy function takes the policy levels we supply and calculates the difference between the level and the inventory as described above. One thing to note: when we calculate the inventory level, we include all of the inventory in transit to the stage as well (given in env.T). For example, if the current inventory on hand for stage 0 is 100, and there is a lead time of 5 days between stage 0 and stage 1, then we take all of the orders from the past 5 days into account as well. So, if stage 0 ordered 10 units each day, then the inventory at this echelon would be 150 (see the sketch below). This makes policy levels greater than capacity meaningful, because we're looking at more than just the inventory in our warehouse today; we're counting everything in transit too.
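To make that arithmetic concrete (values are the illustrative ones from the paragraph above, not pulled from the environment):

on_hand = 100           # current inventory at stage 0
in_transit = [10] * 5   # 10 units ordered each day over a 5-day lead time
echelon_inventory = on_hand + sum(in_transit)
print(echelon_inventory)  # 150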

Our DFO method needs to make function evaluation calls to see how the selected variables perform. In our case, we have an environment to evaluate, so we need a function that will run an episode of our environment and return the appropriate results.

def dfo_func(policy, env, *args):
    '''
    Runs an episode based on current base-stock model
    settings. This allows us to use our environment for the
    DFO optimizer.
    '''
    env.reset()  # Ensure env is fresh
    rewards = []
    done = False
    while not done:
        action = base_stock_policy(policy, env)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
        if done:
            break

    rewards = np.array(rewards)
    prob = env.demand_dist.pmf(env.D, **env.dist_param)

    # Return negative of expected profit
    return -1 / env.num_periods * np.sum(prob * rewards)

Rather than returning the sum of the rewards, we return the negative expectation of our rewards. The reason for the negative is that the SciPy function we're using seeks to minimize, whereas our environment is designed to maximize the reward, so we invert the sign to make sure everything points in the right direction. We calculate the expected rewards by weighting them by the probability of our demand under the distribution. We could take more samples to estimate the distribution and calculate our expectation that way (and for many real-world applications, this would be required), but here we have access to the true distribution, so we can use it to reduce our computational burden.

Finally, we’re ready to optimize.

The following function will build an environment based on your configuration settings, take our dfo_func to evaluate, and apply Powell's Method to the problem. It will return our policy and ensure that our answer contains only non-negative integers (we can't order half a widget or a negative number of widgets).

def optimize_inventory_policy(env_name, fun, init_policy=None,
        env_config={}, method='Powell'):
    env = or_gym.make(env_name, env_config=env_config)
    if init_policy is None:
        init_policy = np.ones(env.num_stages - 1)

    # Optimize policy
    out = minimize(fun=fun, x0=init_policy, args=env,
        method=method)
    policy = out.x.copy()

    # Policy must be a non-negative integer
    policy = np.round(np.maximum(policy, 0), 0).astype(int)
    return policy, out

Now it’s time to put it all together.

policy, out = optimize_inventory_policy('InvManagement-v1', dfo_func)
print("Re-order levels: {}".format(policy))
print("DFO Info:\n{}".format(out))Re-order levels: [540 216 81]
DFO Info: direc: array([[ 0. , 0. , 1. ], [ 0. , 1. , 0. ], [206.39353826, 81.74560612, 28.78995703]]) fun: -0.9450780368543933 message: 'Optimization terminated successfully.' nfev: 212 nit: 5 status: 0 success: True x: array([539.7995151 , 216.38046861, 80.66902905])

Our DFO model found a base-stock policy with re-order levels at 540 for stage 0, 216 for stage 1, and 81 for stage 2. It did this with only 212 function evaluations; that is, it simulated 212 episodes to find its optimum.

We can then feed this policy into our environment, say 1,000 times, to generate some statistics and compare it to our RL solution.

env = or_gym.make(env_name, env_config=env_config)
eps = 1000
rewards = []
for i in range(eps):
    env.reset()
    reward = 0
    while True:
        action = base_stock_policy(policy, env)
        s, r, done, _ = env.step(action)
        reward += r
        if done:
            rewards.append(reward)
            break
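From here, a minimal sketch for summarizing those runs (assuming the rewards list populated above):

# Summarize the 1,000 evaluation episodes of the DFO policy.
rewards = np.array(rewards)
print('Mean reward: {:.1f}'.format(rewards.mean()))
print('Std dev:     {:.1f}'.format(rewards.std()))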

Comparing Performance

Before we get into the reward comparisons, note that these are not perfect, 1:1 comparisons. As mentioned before, DFO yields a fixed policy, whereas RL has a more flexible, dynamic policy that changes based on state information. Our DFO approach was also handed the demand probabilities to calculate its expectation with; RL had to infer them from additional sampling. So while RL learned from ~65k episodes and DFO only had to make 212 function calls, they aren't exactly comparable. Considering that enumerating every meaningful fixed policy even once would require ~200 million episodes, RL doesn't look so sample inefficient given its task (see the sketch below).
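As a back-of-the-envelope sketch of where a figure like ~200 million comes from (the ~600 candidate levels per stage is an assumption chosen only to show the combinatorics, not a value taken from the environment):

# Assumed: each of the three re-order levels ranges over
# roughly 600 candidate values. Evaluating every combination
# once would then take:
print(600 ** 3)  # 216,000,000 episodes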

So, how do these stack up?

[Figure: RL training rewards compared against the fixed DFO policy baseline. Image by author.]

What we can see above is that RL does indeed outperform our DFO policy, by 11% on average (460 vs. 414). The RL model overtook the DFO policy after ~15k episodes and improved steadily after that. There is somewhat higher variance with the RL policy, however, with a few terrible episodes thrown into the mix. All things considered, we got stronger results overall from the RL approach, as expected.

In this case, neither method was very difficult to implement or particularly computationally intensive. I forgot to change my rl_config settings to run on my GPU, and it still only took about 25 minutes to train on my laptop, while the DFO model took ~2 seconds to run. More complex models may not be so friendly in either case.

Another thing to note: both methods can be very sensitive to initial conditions, and neither is guaranteed to find the optimal policy in every case. If you have a problem you'd like to apply RL to, maybe use a simple DFO solver first and try a few initial conditions to get a feel for the problem, then spin up the full RL model. You may find that the DFO policy is sufficient for your task.

Hopefully this gave a good overview of how to use these methods and the or-gym library. Leave feedback or questions if you have any!

This article was originally published on DataHubbs and re-published to TOPBOTS with permission from the author.
