Optimizing the cost of training AWS DeepRacer reinforcement learning models

AWS DeepRacer is a cloud-based 3D racing simulator, an autonomous 1/18th scale race car driven by reinforcement learning, and a global racing league. Reinforcement learning (RL), an advanced machine learning (ML) technique, enables models to learn complex behaviors without labeled training data and make short-term decisions while optimizing for longer-term goals. But as we humans can attest, learning something well takes time—and time is money. You can build and train a simple “all-wheels-on-track” model in the AWS DeepRacer console in just a couple of hours. However, if you’re building complex models involving multiple parameters, a reward function using trigonometry, or generally diving deep into RL, there are steps you can take to optimize the cost of training.

As a Senior Solutions Architect and an AWS DeepRacer PitCrew member, I ultimately rack up a lot of training time. Recently, I shared tips for keeping it frugal with Blaine Sundrud, host of DeepRacer TV News. This post discusses that advice in more detail. To see the interview, check out the August 2020 Qualifiers edition of DRTV.

Also, look out for the cost-optimization article coming soon to the AWS DeepRacer Developer Guide for step-by-step procedures on these topics.

The AWS DeepRacer console provides many tools to help you get the most out of training and evaluating your RL models. After you build a model based on a reward function, which is the incentive plan you create for the agent (your AWS DeepRacer vehicle), you need to train it. This means you enable the agent to explore various actions in its environment, which, for your vehicle, is its track. There it attempts to take actions that result in rewards. Over time it learns the behaviors that lead to a maximum reward. That training takes machine time, and machine time costs money. My goal is to share how avoiding overtraining, validating your model, analyzing logs, using transfer learning, and creating a budget can help keep the focus on fun, not cost.

Overview

In this post, we walk you through some strategies for training better performing and more cost-effective AWS DeepRacer models:

  • Avoid overtraining
  • Validate your model
  • Analyze logs to improve efficiency
  • Try transfer learning
  • Create a budget
  • Clean up when done

Avoid overtraining

When training an RL model, more isn’t always better. Training longer than necessary can lead to overfitting, which means a model doesn’t adapt, or generalize well, from the environment it’s trained in to a novel environment, real or online. For AWS DeepRacer, a model that is overfit may perform well on a virtual track, but conditions like gravity, shadows on the track, the friction of the wheels on the track, wear in the gears, degradation of the battery, and even smudges on the camera lens can lead to the car running slowly or veering off a replica of that track in the real world. When training and racing exclusively in the AWS DeepRacer console, a model overfitted to an oval track will not do as well on a track with s-curves. In practical terms, you can think of an email spam filter that has been overtrained on messages about window replacements, credit card programs, and rich relatives in foreign lands. It might do an excellent job detecting spam related to those topics, but a terrible job finding spam related to scam insurance plans, gutters, home food delivery, and more original get-rich-quick schemes. To learn more about overfitting, watch AWS DeepRacer League – Overfitting.

We now know overtraining that leads to overfitting isn’t a good thing, but one of the first lessons an ML practitioner learns is that undertraining isn’t good either. So how much training is enough? The key is to stop training at the point when performance begins to degrade. With AWS DeepRacer, the Training Reward graph shows the cumulative reward received per training episode. You can expect this graph to be volatile initially, but over time the graph should trend upwards and to the right, and, as your model starts converging, the average should flatten out. As you watch the reward graph, also keep an eye on the agent’s driving behavior during training. You should stop training when the percentage of the track the car completes is no longer improving. In the following image, you can see a sample reward graph with the “best model” indicated. When the model’s track completion progress per episode continuously reaches 100% and the reward levels out, more training will lead to overfitting, a poorly generalized model, and wasted training time.

When to stop training
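Beyond eyeballing the graph, you can apply the same plateau test to exported metrics. The following is a minimal sketch, assuming you have downloaded per-episode training metrics into a CSV with episode, reward, and progress columns; the file name, column names, and thresholds are illustrative assumptions, not an AWS DeepRacer export format:

import pandas as pd

# Assumed CSV with one row per training episode: 'episode', 'reward', 'progress'.
metrics = pd.read_csv("training-metrics.csv")

# Smooth the noisy per-episode values with a rolling average.
window = 20
metrics["reward_avg"] = metrics["reward"].rolling(window).mean()
metrics["progress_avg"] = metrics["progress"].rolling(window).mean()

# If smoothed progress is near 100% and no longer improving, further training
# mostly risks overfitting.
recent = metrics["progress_avg"].iloc[-window:]
print("Recent average reward:", metrics["reward_avg"].iloc[-1])
if recent.max() - recent.min() < 1.0 and recent.mean() > 95:
    print("Progress has plateaued near 100% -- consider stopping training.")
else:
    print("Model is still improving -- keep training.")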

Validate your model

A reward function describes the immediate feedback, as a reward or penalty score, your model receives when your AWS DeepRacer vehicle moves from one position on the track to a new one. The function’s purpose is to encourage the vehicle to make moves along the track that reach a destination quickly, without incident or accident. A desirable move earns a higher score for the action, or target state, and an illegal or wasteful move earns a lower score. It may seem simple, but it’s easy to overlook errors in your code or find that your reward function unintentionally incentivizes undesirable moves. Validating your reward function both in theory and practice helps you avoid wasting time and money training a model that doesn’t do what you want it to do.
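As a concrete illustration, the following is a minimal reward function of the kind you might start with in the console. It uses the documented distance_from_center and track_width input parameters to score moves that hug the centerline higher than moves that drift toward the edges; the band thresholds are arbitrary illustrative choices, not a recommended racing strategy:

def reward_function(params):
    """Reward staying close to the centerline; penalize drifting to the edges."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the centerline.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # desirable move: tight on the centerline
    elif distance_from_center <= marker_2:
        reward = 0.5   # acceptable, but drifting
    elif distance_from_center <= marker_3:
        reward = 0.1   # wasteful move: near the track edge
    else:
        reward = 1e-3  # likely off track

    return float(reward)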

The validate function is similar to a Python lint tool. Choosing Validate checks the syntax of the reward function, and if successful, results in a “passed validation” message.

After checking the code, validate the performance of your reward function early and often. When first experimenting with a new reward function, train for a short period of time, such as 15 minutes, and observe the results to determine whether or not the reward function is performing as expected. Look at the reward results and percentage of track completion on the reward graph to see that they’re increasing (see the following example graph). If it looks like a well performing model, you can clone that model and train for additional time or start over with the same reward function. If the reward doesn’t improve, you can investigate and make adjustments without wasting training time and putting a dent in your pocketbook.

Analyze logs to improve efficiency

Focusing on the training graph alone does not give you a complete picture. Fortunately, AWS DeepRacer produces logs of actions taken during training. Log analysis involves a detailed look at the outputs produced by the AWS DeepRacer training job. Log analysis might involve an aggregation of the model’s performance at various locations on the track or at different speeds. Analysis often includes various kinds of visualization, such as plotting the agent’s behavior on the track, the reward values at various times or locations, or even plotting the racing line around the track to make sure you’re not oversteering and that your agent is taking the most efficient path. You can also include Python print() statements in your reward function to output interim results to the logs for each iteration of the reward function.
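For example, a print() call inside the reward function writes the intermediate values you care about to the training logs, where you can filter and aggregate them later. The parameter names below are documented AWS DeepRacer inputs; the reward logic and the MY_TRACE log prefix are just illustrative conventions:

def reward_function(params):
    speed = params["speed"]
    progress = params["progress"]
    steering_angle = params["steering_angle"]
    all_wheels_on_track = params["all_wheels_on_track"]

    # Simple illustrative reward: favor staying on track and carrying speed.
    reward = speed if all_wheels_on_track else 1e-3

    # Emit a tagged line per step; it appears in the training logs, so log
    # analysis can aggregate reward by speed, steering angle, and progress.
    print(f"MY_TRACE progress={progress:.1f} speed={speed:.2f} "
          f"steering={steering_angle:.1f} reward={reward:.3f}")

    return float(reward)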

Without studying the logs, you’re likely only making guesses about where to improve. It’s better to rely on data to make these adjustments. You usually get a better model sooner by studying the logs and tweaking the reward function. When you get a decent model, try conducting log analysis before investing in further training time.

The following graph is an example of plotting the racing line around a track.

For more information about log analysis, see Using Jupyter Notebook for analysing DeepRacer’s logs.

Try transfer learning

In ML, as in life, there is no point in reinventing the wheel. Transfer learning involves relying on knowledge gained while solving one problem and applying it to a different, but related, problem. The shape of the AWS DeepRacer Convolutional Neural Network (CNN) is determined by the number of inputs (such as the cameras or LIDAR) and the outputs (such as the action space). A new model has weights set to random values, and a certain amount of training is required to converge to get a working model.

Instead of starting with random weights, you can copy an existing trained model. In the AWS DeepRacer environment, this is called cloning. Cloning works by making a deep copy of the neural network—the AWS DeepRacer CNN—including all the nodes and their weights. This can save training time and money.

The learning rate is one of the hyperparameters that controls the RL training. During each update, a portion of the new weight for each node results from the gradient-descent (or ascent) contribution, and the rest comes from the existing node weight. The learning rate controls how much a gradient-descent (or ascent) update contributes to the network weights. If you are interested in learning more about gradient descent, check out this post on optimizing deep learning.
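In rough terms, each update blends the existing weight with a step in the direction the gradient suggests, and the learning rate sets the size of that step. The toy snippet below is only an illustration of that idea, not the DeepRacer training code; it shows why a rate that is too large can overshoot and keep the reward from converging:

# Toy illustration of how the learning rate scales a gradient-descent update.
def update_weight(weight, gradient, learning_rate):
    # The gradient contribution grows with the learning rate;
    # the rest of the new weight is carried over from the old one.
    return weight - learning_rate * gradient

weight = 0.8
gradient = 2.5

print(update_weight(weight, gradient, 0.001))  # small, stable step: 0.7975
print(update_weight(weight, gradient, 0.5))    # large step that overshoots: -0.45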

You can use a higher learning rate to include more gradient-descent contributions for faster training, but the expected reward may not converge if the learning rate is too large. Try setting the learning rate reasonably high for the initial training. When it’s complete, clone and train the network for additional time with a reduced learning rate. This can save a significant amount of training time by allowing you to train quickly at first and then explore more slowly when you’re nearing an optimal solution.

Developers often ask why they can’t modify the action space during or after cloning. It’s because cloning a model results in a duplicate of the original network, and both the inputs and the action space are fixed. If you increase the action space, the new output nodes have no trained weights and no meaningful connections to the other layers, so the network’s behavior is unpredictable; it could require a lot more training or produce a model that can’t converge at all. CNNs with node weights equal to zero are unpredictable, and those nodes might even be deactivated (recall that 0 times anything is 0). Likewise, pruning one or more nodes from the output layer drives unknown outcomes. Both situations require additional training to ensure the model works as expected, and there is no guarantee it will ever converge. Radically changing the reward function may likewise result in a cloned model that doesn’t converge quickly, or at all, which is a waste of time and money.

To try transfer learning by following the steps in the AWS DeepRacer Developer Guide, see Clone a Trained Model to Start a New Training Pass.

Create a budget

So far, we’ve looked at things you can do within the RL training process to save money. Aside from those I’ve discussed in the AWS DeepRacer console, there is another tool in the AWS Management Console that can help you keep your spend where you want it: AWS Budgets. You can set monthly, quarterly, and annual budgets for cost, usage, reservations, and savings plans.

On the Cost Management page, choose Budgets and create a budget for AWS DeepRacer.

To set a budget, sign in to the console and navigate to AWS Budgets. Then select a period, effective dates, and a budget amount. Next, configure an alert so that you receive an email notification when usage exceeds a stated percentage of that budget.
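If you prefer to script it, you can create the same kind of budget with the AWS SDK. The following boto3 sketch creates a monthly cost budget with an email alert at 80% of the limit; the account ID, budget amount, and email address are placeholders you would replace with your own values:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # placeholder: your AWS account ID
    Budget={
        "BudgetName": "DeepRacerMonthly",
        "BudgetLimit": {"Amount": "50", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
            ],
        }
    ],
)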

You can also configure an Amazon Simple Notification Service (Amazon SNS) topic to have chatbot alerts sent to Amazon Chime or Slack.

Clean up when done

When you’re done training, evaluating, and racing, it’s good practice to shut down unneeded resources and perform cleanup actions. Storage costs are minimal, but delete any models or log files that aren’t needed. If you used Amazon SageMaker or AWS RoboMaker, save and stop your notebooks and, if they are no longer needed, delete them. Make sure you end any running training jobs in both services.
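The following boto3 sketch, assuming you have permissions for these APIs, stops in-service SageMaker notebook instances and in-progress training jobs. Treat it as a starting point for your own cleanup checklist rather than a complete teardown script; a similar pattern applies to AWS RoboMaker simulation jobs:

import boto3

sm = boto3.client("sagemaker")

# Stop notebook instances that are still running.
for nb in sm.list_notebook_instances(StatusEquals="InService")["NotebookInstances"]:
    print(f"Stopping notebook instance {nb['NotebookInstanceName']}")
    sm.stop_notebook_instance(NotebookInstanceName=nb["NotebookInstanceName"])

# Stop training jobs that are still in progress.
for job in sm.list_training_jobs(StatusEquals="InProgress")["TrainingJobSummaries"]:
    print(f"Stopping training job {job['TrainingJobName']}")
    sm.stop_training_job(TrainingJobName=job["TrainingJobName"])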

Conclusion

In this post, we covered several tips for optimizing spend for AWS DeepRacer, which you can apply to many other ML projects. Try any or all of these tips to minimize your expenses while having fun learning ML. Get started in the AWS DeepRacer console today!


About the Authors

 Tim O’Brien brings over 30 years of experience in information technology, security, and accounting to his customers. Tim has worked as a Senior Solutions Architect at AWS since 2018 and is focused on Machine Learning and Artificial Intelligence.
Previously, as a CTO and VP of Engineering, he led product design and technical delivery for three startups. Tim has served numerous businesses in the Pacific Northwest conducting security related activities, including data center reviews, lottery security reviews, and disaster planning.

A wordsmith, futurist, and relatively fresh recruit to the position of technical writer – AI/ML at AWS, Heather Johnston-Robinson is excited to leverage her background as a maker and educator to help people of all ages and backgrounds find and foster their spark of ingenuity with AWS DeepRacer. She recently migrated from adventures in the maker world with Foxbot Industries, Makerologist, MyOpen3D, and LEGO robotics to take on her current role at AWS.

Source: https://aws.amazon.com/blogs/machine-learning/optimizing-the-cost-of-training-aws-deepracer-reinforcement-learning-models/

5 Work From Home Office Essentials


Working remotely from home has been increasing in popularity, but it has now become a necessity for many professionals due to the pandemic.

“Some companies are eager to reopen their doors and return to the office, but a large number of employers and employees are making the transitional work environment a permanent change.”

Employers can’t guarantee workers’ health and safety in a socially crowded space, and companies can save the money they would have spent on their commercial lease or mortgage payments.

That’s not to say that working from home doesn’t come with its own costs, however. It can lead to a huge hit in productivity without the right equipment in place. To maximize your performance and efficiency in a remote setting, be sure to purchase these five office essentials.

1. Powerful PC

This one probably feels like an obvious pointer, but let’s knock it off our list. You won’t be able to get by with a makeshift workstation; in today’s digital domain, your computer is at the core of everything you do.

Never-ending loading wheels, delayed downloads, and slow rendering add seconds to every task you do, so if your company didn’t provide you with a workhorse computer tower, you might look into investing in one yourself, then deduct the cost on your tax return.

Depending on your line of work, it might make more sense to go for a laptop vs a desktop computer. Unless your tasks demand super sophisticated software and large storage space, you can probably get by with a portable PC. That way, when coffee shops begin to reopen and allow patrons to sit inside, you can work on-the-go without feeling tethered to your desk.

2. Ergonomic Office Chair

If you’re looking at a long-term remote situation, it’s worth spending the big bucks on an ergonomic office chair. You should feel comfortably locked into your seat for eight hours a day—at least if you want to concentrate on your workflow, rather than the cramp in your back.

Shop around for an office chair that’s sophisticated in design and specifically built to hold the human body. Some stand-out features you should look out for include:

  • Targeted support around the lumbar spine
  • Adjustable height so you can position the seat for your arms to rest naturally on the keyboard
  • Swivel base to effortlessly turn your body, preventing neck strain
  • Cushioned seat to comfort your tailbone
  • Ventilated fabric that promotes airflow so you don’t feel overheated when sitting in the chair for several hours

You might have to pay a couple of hundred dollars for top-of-the-line features, but this is another item that might qualify as an eligible tax deduction. Just be sure to keep all your receipts organized with a document scanner in case the IRS raises its eyebrows and issues an audit.

3. Wireless Keyboard

If you want to type faster and feel better while you’re at it, then a wireless keyboard is clutch. They enable you to bring the keys closer, decreasing the extension length of your arms and accompanying shoulder strain.

“It also helps reduce the strain on your eyes by moving the bright screen farther away from your direct line of sight.” 

And, last but not least, the keys are placed in an ergonomic position for a more natural finger splay, with ample wrist cushioning that helps prevent overuse injuries such as carpal tunnel syndrome.

4. Noise-cancelling Headphones

To truly get in the zone, you should block out distractions with headphones that cancel the noise in your environment, especially if your workstation is set up in a common area. Other tips to stay focused include installing a website blocker and leaving your cellphone on the other side of the room.

5. Houseplant or Flowers

People are scientifically proven to be more productive when working near fresh flowers or lush greenery. The good news is that you don’t need to have a green thumb or natural lighting to achieve this effect—even artificial foliage can brighten your mood and improve your performance.

Working from home sometimes can feel like you’re locked inside all day, so bringing the outside world inside your space can help ward off burnout.

Take these tips with you into 2021 and set yourself up for success in your new home office setting.

Source: https://www.aiiottalk.com/business/work-from-home-office-essentials/


zomato digitizes menus using Amazon Textract and Amazon SageMaker


This post is co-written by Chiranjeev Ghai, ML Engineer at zomato. zomato is a global food-tech company based in India.

Are you the kind of person who has very specific cravings? Maybe when the mood hits, you don’t want just any kind of Indian food—you want Chicken Chettinad with a side of paratha, and nothing else will hit the spot! To help picky eaters satisfy their cravings, we at zomato have recently added enhanced search engine capabilities to our restaurant aggregation and food delivery platform. These capabilities enable us to recommend restaurants to zomato users based on searches for specific dishes.

We power this functionality with machine learning (ML), using it to extract and structure text data from menu images. To develop this menu digitization technology, we partnered with Amazon ML Solutions Lab to explore the capabilities of the AWS ML Stack. This post summarizes how we used Amazon Textract and Amazon SageMaker to develop a customized menu digitization solution.

Extracting raw text from menus with Amazon Textract

The first component of this solution was to accurately extract all the text in the menu image. This process is known as optical character recognition (OCR). For our use case, we experimented with both in-house and commercial OCR solutions.

We first created an in-house OCR solution by stacking a pre-trained text detection model and a pre-trained text recognition model. The challenge with these models was that they were trained on a standard text dataset that didn’t match the eclectic fonts found in restaurant menus. To improve system performance, we fine-tuned these models by generating a dataset of 1.5 million synthetic text images that were more representative of text in menus.

After evaluating our in-house solution and several commercial OCR solutions, we found that Amazon Textract offers the best text recognition precision and recall. Restaurants often get creative when designing their menus, so OCR robustness was crucial for this use case. Amazon Textract particularly differentiated itself when processing menus with unique fonts, background images, and low image resolutions. Using it is as simple as making an API call:

#Python 3.6
import boto3
textract_client = boto3.client(
    'textract',
    region_name = ''  #insert the AWS region you're working in
)
textract_response = textract_client.detect_document_text(
    Document = {
        'S3Object': {
            'Bucket': '',  #insert the name of the S3 bucket containing your image
            'Name': ''     #insert the S3 key of your image
        }
    }
)
print(textract_response)

The following code is the Amazon Textract output for a sample image:

{'DocumentMetadata': {'Pages': 1}, 'Blocks': [{'BlockType': 'PAGE', 'Geometry': {'BoundingBox': {'Width': 1.0, 'Height': 1.0, 'Left': 0.0, 'Top': 0.0}, ... {'BlockType': 'WORD', 'Text': 'Dim', 'Geometry': {'BoundingBox': {'Width': 0.10242128372192383, 'Height': 0.048968635499477386, 'Left': 0.24052166938781738, 'Top': 0.02556285448372364},
... 

The raw outputs are visualized by overlaying them on top of the image. The following image visualizes the preceding raw output. The black boxes are the text-detection bounding boxes provided by Amazon Textract. Extracted text is displayed on the right. Note the unconventional fonts, colors, and images on this menu.

The following image visualizes Amazon Textract outputs for a menu with a different design. Black boxes are the text-detection bounding boxes provided by Amazon Textract. Extracted text is displayed on the right. Again, this menu has unconventional fonts, colors, and images.

Using Amazon SageMaker to build a menu structure detector

The next component of this solution was to group the detections from Amazon Textract by menu section. This enabled our search engine to distinguish between entrees, desserts, beverages, and so on. We framed this as a computer vision problem—object detection, to be precise—and used Amazon SageMaker Ground Truth to collect training data. Ground Truth accelerated this process by providing a fully managed annotation tool that we customized to ask human annotators to draw bounding boxes around every menu section in the image. We used an annotation workforce from AWS Marketplace because this was a niche labeling task, and public labelers from Amazon Mechanical Turk didn’t perform well. With Ground Truth, it took just a few days and approximately $1,400 to label 4,086 images with triplicate redundancy.

With labeled data in hand, we faced a paradox of choice when selecting model-building approaches because object detection is such a thoroughly studied problem. Our choices included:

  • Removing low-confidence labels from the labeled dataset – Because even human annotators can make mistakes, Ground Truth calculates confidence scores for labels by having multiple annotators (for this use case, three) label the same image. Setting a higher confidence threshold for labels can decrease the noise in the training data at the expense of having less training data.
  • Data augmentation – Techniques for image data augmentation include horizontal flipping, cropping, shearing, and rotation. Data augmentation can make models more robust by increasing the amount of training data. However, excessive data augmentation may result in poor model convergence.
  • Feature engineering – From our experience in applying computer vision to processing menus, we had a variety of techniques in mind to emphasize or de-emphasize various aspects of the input images. For example, see the following images.

The following is the original image of a menu.

The following image shows the redacted image (overlay white boxes on a black background where text detections were found).

The following is a text cropped image. On a black background, the image has overlay crops from the original image where text detections were found.

The following is a single channel and text cropped image. The image is encoded as a single RGB channel (for this image, green). You can apply this with other transformations, in this case text cropping.

 

We also had the following additional model-building methods to choose from:

  • Model architectures like YOLO, SSD, and RCNN, with VGG or ResNet backbones – Each architecture has different trade-offs of model accuracy, inference time, model size, and more. For this use case, model accuracy was the most important metric because menu images were batch processed.
  • Using a model pre-trained on a general object detection task or starting from scratch – Transfer learning can be helpful when training complex models on small datasets. However, the task of detecting menu sections is very different from a general object detection task (for example, PASCAL VOC), so the pre-training may not be relevant.
  • Optimizer parameters – These include learning rate, momentum, regularization coefficients, and early stopping configuration.

With so many hyperparameters to consider, we turned to the automatic tuning feature of Amazon SageMaker to coordinate a massive tuning job across all these variables. The following code is an example of tuning a single model architecture and input data configuration:

import sagemaker
import boto3
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, CategoricalParameter, ContinuousParameter
import itertools
from time import sleep

#set to the region you're working in
REGION_NAME = ''
#set a S3 path for SageMaker to store the outputs of the training jobs
S3_OUTPUT_PATH = ''
#set a S3 location for your training dataset,
#assumed to be an augmented manifest file
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-manifest.html
TRAIN_DATA_LOCATION = ''
#set a S3 location for your validation data,
#assumed to be an augmented manifest file
VAL_DATA_LOCATION = ''
#specify which fields in the augmented manifest file are relevant for training
DATA_ATTRIBUTE_NAMES = [,]
#specify image shape
IMAGE_SHAPE = 
#specify label width
LABEL_WIDTH = 
#specify number of samples in the training dataset
NUM_TRAINING_SAMPLES = 

sgm_role = sagemaker.get_execution_role()
boto_session = boto3.session.Session(region_name = REGION_NAME)
sgm_session = sagemaker.Session(boto_session = boto_session)
training_image = get_image_uri(
    region_name = REGION_NAME,
    repo_name = 'object-detection',
    repo_version = 'latest'
)

#set training job configuration
object_detection_estimator = Estimator(
    image_name = training_image,
    role = sgm_role,
    train_instance_count = 1,
    train_instance_type = 'ml.p3.2xlarge',
    train_volume_size = 50,
    train_max_run = 360000,
    input_mode = 'Pipe',
    output_path = S3_OUTPUT_PATH,
    sagemaker_session = sgm_session
)

#set input data configuration
train_data = sagemaker.session.s3_input(
    s3_data = TRAIN_DATA_LOCATION,
    distribution = 'FullyReplicated',
    record_wrapping = 'RecordIO',
    s3_data_type = 'AugmentedManifestFile',
    attribute_names = DATA_ATTRIBUTE_NAMES
)
val_data = sagemaker.session.s3_input(
    s3_data = VAL_DATA_LOCATION,
    distribution = 'FullyReplicated',
    record_wrapping = 'RecordIO',
    s3_data_type = 'AugmentedManifestFile',
    attribute_names = DATA_ATTRIBUTE_NAMES
)
data_channels = {
    'train': train_data,
    'validation': val_data
}

#set static hyperparameters
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-api-config.html
static_hyperparameters = {
    'num_classes': 1,
    'epochs': 100,
    'lr_scheduler_step': '15,30',
    'lr_scheduler_factor': 0.1,
    'overlap_threshold': 0.5,
    'nms_threshold': 0.45,
    'image_shape': IMAGE_SHAPE,
    'label_width': LABEL_WIDTH,
    'num_training_samples': NUM_TRAINING_SAMPLES,
    'early_stopping': True,
    'early_stopping_min_epochs': 5,
    'early_stopping_patience': 1,
    'early_stopping_tolerance': 0.05,
}

#set ranges for tunable hyperparameters
hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(min_value = 1e-5, max_value = 1e-2, scaling_type = 'Auto'),
    'mini_batch_size': IntegerParameter(min_value = 8, max_value = 64, scaling_type = 'Auto')
}

#Not all hyperparameters are feasible to tune directly
#see: https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-tuning.html
#For these we run model tuning jobs in parallel using a for loop
#We take this approach for tuning over different model architectures
#and different feature engineering configurations
use_pretrained_options = [0, 1]
base_network_options = ['resnet-50', 'vgg-16']

for use_pretrained, base_network in itertools.product(use_pretrained_options, base_network_options):
    static_hyperparameter_configuration = {
        **static_hyperparameters,
        'use_pretrained_model': use_pretrained,
        'base_network': base_network
    }
    object_detection_estimator.set_hyperparameters(**static_hyperparameter_configuration)
    tuner = HyperparameterTuner(
        estimator = object_detection_estimator,
        objective_metric_name = 'validation:mAP',
        strategy = 'Bayesian',
        hyperparameter_ranges = hyperparameter_ranges,
        max_jobs = 24,
        max_parallel_jobs = 2,
        early_stopping_type = 'Auto',
    )
    tuner.fit(inputs = data_channels)
    print(f'Started tuning job: {tuner.latest_tuning_job.name}')
    #wait a bit before starting next job so auto generated names don't conflict
    sleep(60)

This code uses version 1.72.0 of the Amazon SageMaker Python SDK, which is the default version installed in Amazon SageMaker notebook instances. Version 2.X introduces breaking changes. For more information, see Use Version 2.x of the SageMaker Python SDK.

We used powerful GPU hardware (p3.2xlarge instances), and it took us just 1 week and approximately $1,500 to explore 455 unique parameter configurations. Of these configurations, Amazon SageMaker found that a fine-tuned Faster R-CNN model with text cropping performed the best, with a mean average precision score of 0.93. This aligned with results from our prior work in this space, which found that two-stage detectors generally outperform single-stage detectors in processing menus.

The following is an example of how the object detection model processed a menu. In this image, the purple boxes are the predicted bounding boxes from the menu section detection model. Black boxes are the text detection bounding boxes provided by Amazon Textract.

Using Amazon SageMaker to build rule- and ML-based text classifiers

The final component in the solution was a layer of text classification. To enable our enhanced search functionality, we had to know if each detection within a menu section was the menu section title, name of a dish, price of a dish, or something else (such as a description of a dish or the name of the restaurant). To this end, we developed a hybrid rule- and ML-based text classification system.

The first step of the classification was to use a rule to determine if a detection was a price or not. This rule simply calculated the proportion of numeric characters in the detection. If the proportion was greater than 40%, the detection was classified as a price. Although simple, this classifier worked well in practice. We used Amazon SageMaker notebook instances as a convenient interactive environment to develop this and other rules.
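A sketch of that rule, assuming each Textract detection is available as a plain text string, could look like the following; the 40% threshold comes from the rule described above:

def is_price(text, numeric_threshold=0.4):
    """Classify a detection as a price when the share of numeric characters is high."""
    if not text:
        return False
    numeric_chars = sum(ch.isdigit() for ch in text)
    return numeric_chars / len(text) > numeric_threshold

print(is_price("140"))            # True
print(is_price("Shrimp Masala"))  # False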

After the prices were filtered out, the remaining detections were classified as dish or not dish. From our experience in processing menus, we intuitively knew that in many cases, the location of prices was sufficient to do this classification. For these menus, dishes and prices are listed side by side, so simply classifying detections located to the left of prices as dishes worked well.
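A simplified version of that location rule might look like the sketch below. It assumes each detection carries the bounding box geometry returned by Amazon Textract (Left, Top, Width, Height as ratios of the image size), and the same-row test is a deliberate simplification rather than the production rule:

def classify_by_price_location(detections, prices):
    """Label a detection as a dish if a price sits roughly on the same row, to its right.

    detections, prices: lists of dicts with 'Text' and a Textract-style
    'BoundingBox' ({'Left', 'Top', 'Width', 'Height'} as image-size ratios).
    """
    dishes = []
    for det in detections:
        box = det["BoundingBox"]
        det_center_y = box["Top"] + box["Height"] / 2
        for price in prices:
            pbox = price["BoundingBox"]
            same_row = abs((pbox["Top"] + pbox["Height"] / 2) - det_center_y) < box["Height"]
            to_the_right = pbox["Left"] > box["Left"]
            if same_row and to_the_right:
                dishes.append(det)
                break
    return dishes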

The following example shows how the rules-based text classification system processed a menu. Green boxes are detections classified as dishes (by the price location rule). Red boxes are detections classified as not dishes (by the price location rule). Blue boxes are detections classified as prices. Final dish detections are on the right.

Some menus might include lengthy dish descriptions or may not list prices next to individual dishes. These menus violate the assumptions of the price location rules, so we turned to model-based text classification. We used Amazon SageMaker training jobs to experiment with many modeling approaches in parallel, including an XGBoost model trained on hashed word count vectors. In the end, we found that a fine-tuned BERT model from GluonNLP achieved the best performance with an AUROC score of 0.86.
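For reference, a minimal version of the XGBoost baseline on hashed word-count vectors could look like the sketch below; the fine-tuned GluonNLP BERT model that ultimately won out involves considerably more code. The toy texts and labels are stand-ins for the labeled Textract detections:

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Toy stand-ins; in practice these come from labeled Textract detections.
texts = ["Shrimp Masala", "Served with basmati rice and a side of raita",
         "Paneer Tikka", "All prices include applicable taxes"] * 25
labels = [1, 0, 1, 0] * 25  # 1 = dish, 0 = not dish

X_train, X_val, y_train, y_val = train_test_split(texts, labels, test_size=0.2, random_state=0)

# Hash word counts into a fixed-size sparse feature vector.
vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
X_train_vec = vectorizer.transform(X_train)
X_val_vec = vectorizer.transform(X_val)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train_vec, y_train)

val_scores = model.predict_proba(X_val_vec)[:, 1]
print("Validation AUROC:", roc_auc_score(y_val, val_scores))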

The following image is an example of how the model-based text classification system processed a menu. Green boxes are detections classified as dishes (by the BERT model). Red boxes are detections classified as not dishes (by the BERT model). Blue boxes are detections classified as prices. The final dish detections are on the right.

Of the remaining detections (those not classified as prices or dishes), a final round of classification identified menu section titles. We created features that captured the font size of the detection, the location of the detection on the menu, and the length of the words within the detection. We used these features as inputs to a logistic regression model that predicted if a detection is a menu section title or not.
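A sketch of that final classifier, assuming the three features have already been computed for each remaining detection, could look like the following; the feature values are toy stand-ins rather than real menu data:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [relative font size, vertical position on the menu (0 = top, 1 = bottom),
#            mean word length in the detection]. Toy stand-in values.
X = np.array([
    [1.8, 0.05, 6.0],  # large text near the top: likely a section title
    [1.0, 0.30, 5.5],
    [1.7, 0.55, 7.0],  # large text mid-page: another section title
    [0.9, 0.60, 4.8],
    [1.0, 0.85, 5.2],
    [0.8, 0.90, 4.0],
])
y = np.array([1, 0, 1, 0, 0, 0])  # 1 = menu section title

clf = LogisticRegression().fit(X, y)

new_detection = np.array([[1.9, 0.10, 6.5]])
print("P(section title):", clf.predict_proba(new_detection)[0, 1])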

Key features of Amazon SageMaker

In the end, we found that doing OCR was as simple as making an API call to Amazon Textract. However, our use case required additional customization. We selected Amazon SageMaker as an ML platform to develop this customization because it offered several key features:

  • Amazon SageMaker Notebooks made it easy to spin up Jupyter notebook environments for prototyping and testing rules and models.
  • Ground Truth helped us build and deploy a custom image annotation tool with no front-end experience required.
  • Amazon SageMaker automatic tuning enabled us to run massive hyperparameter tuning jobs on powerful hardware, and included an intuitive interface for tracking the results of hundreds of experiments. You can implement tuning jobs with early stopping conditions, which makes experimentation cost-effective.

Amazon SageMaker offers additional integration benefits from including all the preceding features in a single platform:

  • Amazon SageMaker Notebooks come pre-installed with all the dependencies needed to build models that can be optimized with automatic tuning.
  • Ground Truth offers easy access to labelers from Mechanical Turk or AWS Marketplace.
  • Automatic tuning can directly ingest the manifest files created by Amazon SageMaker Ground Truth.

Putting it all together

Our menu digitization system can extract text from images of menus, group it by menu section, extract the title of the section, extract the dishes within each section, and pair each dish with its price. The following is a visualization of the end-to-end solution.

The workflow contains the following steps:

  1. The input is an image of a menu.
  2. Amazon Textract performs OCR on the input image.
  3. An ML-based computer vision model predicts bounding boxes for menu sections in the menu image.
  4. A rules-based classifier classifies Amazon Textract detections as price or not price.
  5. A rules-based classifier (5a) attempts to use the location of price detections to classify the not price detections as dish or not dish. If this rule doesn’t successfully classify most of the detections on the page, an ML-based classifier is used instead (5b).
  6. The ML-based classifier uses hand-crafted features to classify not dish detections as menu section title or not menu section title.
  7.  The menu text is structured by combining the menu section detections and the text classification results.

The following image visualizes a sample output of the system. Green boxes are detections classified as dishes. Blue boxes are detections classified as prices. Yellow boxes are detections classified as menu section titles. Purple boxes are predicted menu section bounding boxes.

The following code is the structured output:

[ { "title":{ "text":"Shrimp Dishes" }, "dishes":[ { "text":"Shrimp Masala", "price":{ "text":"140" } }, { "text":"Shrimp Biryani", "price":{ "text":"170" } }, { "text":"Shrimp Pulav", "price":{ "text":"160" } } ] }, ...
]

Conclusion

We built a system that uses ML to digitize menus without any human input required. This system will improve user experience by powering new features such as advanced dish search and review highlight verification. Our content team will also use it to accelerate creating menus for online ordering.

To explore these capabilities of Amazon Textract and Amazon SageMaker in more depth, see Automatically extract text and structured data from documents with Amazon Textract and Amazon SageMaker Automatic Model Tuning: Using Machine Learning for Machine Learning.

The Amazon ML Solutions Lab helped us accelerate our use of ML by pairing our team with ML experts. The ML Solutions Lab brings to every customer engagement learnings from more than 20 years of Amazon’s ML innovations in areas such as fulfillment and logistics, personalization and recommendations, computer vision and translation, fraud prevention, forecasting, and supply chain optimization. To learn more about the AWS ML Solutions Lab, contact your account manager or visit Amazon Machine Learning Solutions Lab.


About the Authors

Chiranjeev Ghai is a Machine Learning Engineer. In his current role, he has been aiding automation at zomato by leveraging a wide variety of ML optimisations ranging from Image Classification, Product Recommendation, and Text Detection. When not building models, he likes to spend his time playing video games at home.

Ryan Cheng is a Deep Learning Architect in the Amazon ML Solutions Lab. He has worked on a wide range of ML use cases from sports analytics to optical character recognition. In his spare time, Ryan enjoys cooking.

Andrew Ang is a Deep Learning Architect at the Amazon ML Solutions Lab, where he helps AWS customers identify and build AI/ML solutions to address their business problems.

Vinayak Arannil is a Data Scientist at the Amazon Machine Learning Solutions Lab. He has worked on various domains of data science like computer vision, natural language processing, recommendation systems, etc.

Source: https://aws.amazon.com/blogs/machine-learning/zomato-digitizes-menus-using-amazon-textract-and-amazon-sagemaker/
