Streamlining data labeling for YOLO object detection in Amazon SageMaker Ground Truth

Object detection is a common task in computer vision (CV), and the YOLOv3 model is state-of-the-art in terms of accuracy and speed. In transfer learning, you obtain a model trained on a large but generic dataset and retrain the model on your custom dataset. One of the most time-consuming parts in transfer learning is collecting and labeling image data to generate a custom training dataset. This post explores how to do this in Amazon SageMaker Ground Truth.

Ground Truth offers a comprehensive platform for annotating the most common data labeling jobs in CV: image classification, object detection, semantic segmentation, and instance segmentation. You can perform labeling using Amazon Mechanical Turk or create your own private team to label collaboratively. You can also use one of the third-party data labeling service providers listed on the AWS Marketplace. Ground Truth offers an intuitive interface that is easy to work with. You can communicate with labelers about specific needs for your particular task using examples and notes through the interface.

Labeling data is already hard work. Creating training data for a CV modeling task requires data collection and storage, setting up labeling jobs, and post-processing the labeled data. Moreover, not all object detection models expect the data in the same format. For example, the Faster RCNN model expects the data in the popular Pascal VOC format, which the YOLO models can’t work with. These associated steps are part of any machine learning pipeline for CV. You sometimes need to run the pipeline multiple times to improve the model incrementally. This post shows how to perform these steps efficiently by using Python scripts and get to model training as quickly as possible. This post uses the YOLO format for its use case, but the steps are mostly independent of the data format.

The image labeling step of a training data generation task is inherently manual. This post shows how to create a reusable framework to create training data for model building efficiently. Specifically, you can do the following:

  • Create the required directory structure in Amazon S3 before starting a Ground Truth job
  • Create a private team of annotators and start a Ground Truth job
  • Collect the annotations when labeling is complete and save them in a pandas dataframe
  • Post-process the dataset for model training

You can download the code presented in this post from this GitHub repo. This post demonstrates how to run the code from the AWS CLI on a local machine that can access an AWS account. For more information about setting up the AWS CLI, see What Is the AWS Command Line Interface? Make sure that you configure it to access the S3 buckets in this post. Alternatively, you can run the code in AWS Cloud9 or on an Amazon EC2 instance. You can also run the code blocks in an Amazon SageMaker notebook.

If you’re using an Amazon SageMaker notebook, you can still access the Linux shell of the underlying EC2 instance and follow along by opening a new terminal from the Jupyter main page and running the scripts from the /home/ec2-user/SageMaker folder.

Setting up your S3 bucket

The first thing you need to do is to upload the training images to an S3 bucket. Name the bucket ground-truth-data-labeling. You want each labeling task to have its own self-contained folder under this bucket. If you start labeling a small set of images that you keep in the first folder, but find that the model performed poorly after the first round because the data was insufficient, you can upload more images to a different folder under the same bucket and start another labeling task.

For the first labeling task, create the folder bounding_box and the following three subfolders under it:

  • images – You upload all the images in the Ground Truth labeling job to this subfolder.
  • ground_truth_annots – This subfolder starts empty; the Ground Truth job populates it automatically, and you retrieve the final annotations from here.
  • yolo_annot_files – This subfolder also starts empty, but eventually holds the annotation files ready for model training. The script populates it automatically.

If your images are in .jpg format and available in the current working directory, you can upload them with the following code:

aws s3 sync . s3://ground-truth-data-labeling/bounding_box/images/ --exclude "*" --include "*.jpg" 

For this use case, you use five images. There are two types of objects in the images—pencil and pen. You need to draw bounding boxes around each object in the images. The following images are examples of what you need to label. All images are available in the GitHub repo.

Creating the manifest file

A Ground Truth job requires a manifest file in JSON format that contains the Amazon S3 paths of all the images to label. You need to create this file before you can start the first Ground Truth job. The format of this file is simple:

{"source-ref": < S3 path to image1 >}
{"source-ref": < S3 path to image2 >}
...

However, creating the manifest file by hand would be tedious for a large number of images. Therefore, you can automate the process by running a script. You first need to create a file holding the parameters required for the scripts. Create a file input.json in your local file system with the following content:

{ "s3_bucket":"ground-truth-data-labeling", "job_id":"bounding_box", "ground_truth_job_name":"yolo-bbox", "yolo_output_dir":"yolo_annot_files"
}

Save the following code block in a file called prep_gt_job.py:

import boto3
import json


def create_manifest(job_path):
    """
    Creates the manifest file for the Ground Truth job

    Input:
    job_path: Full path of the folder in S3 for GT job

    Returns:
    manifest_file: The manifest file required for GT job
    """
    s3_rec = boto3.resource("s3")
    s3_bucket = job_path.split("/")[0]
    prefix = job_path.replace(s3_bucket, "")[1:]
    image_folder = f"{prefix}/images"
    print(f"using images from ... {image_folder} \n")

    bucket = s3_rec.Bucket(s3_bucket)
    objs = list(bucket.objects.filter(Prefix=image_folder))
    img_files = objs[1:]  # first item is the folder name
    n_imgs = len(img_files)
    print(f"there are {n_imgs} images \n")

    TOKEN = "source-ref"
    manifest_file = "/tmp/manifest.json"
    with open(manifest_file, "w") as fout:
        for img_file in img_files:
            fname = f"s3://{s3_bucket}/{img_file.key}"
            fout.write(f'{{"{TOKEN}": "{fname}"}}\n')

    return manifest_file


def upload_manifest(job_path, manifest_file):
    """
    Uploads the manifest file into S3

    Input:
    job_path: Full path of the folder in S3 for GT job
    manifest_file: Path to the local copy of the manifest file
    """
    s3_rec = boto3.resource("s3")
    s3_bucket = job_path.split("/")[0]
    source = manifest_file.split("/")[-1]
    prefix = job_path.replace(s3_bucket, "")[1:]
    destination = f"{prefix}/{source}"
    print(f"uploading manifest file to {destination} \n")
    s3_rec.meta.client.upload_file(manifest_file, s3_bucket, destination)


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Collects image names from S3 and creates the manifest file for GT
    3. Uploads the manifest file to S3
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]

    gt_job_path = f"{s3_bucket}/{job_id}"
    man_file = create_manifest(gt_job_path)
    upload_manifest(gt_job_path, man_file)


if __name__ == "__main__":
    main()

Run the following script:

python prep_gt_job.py

This script reads the S3 bucket and job names from the input file, creates a list of images available in the images folder, creates the manifest.json file, and uploads the manifest file to the S3 bucket at s3://ground-truth-data-labeling/bounding_box/.

This method illustrates programmatic control of the process, but you can also create the file from the Ground Truth API. For instructions, see Create a Manifest File.

At this point, the folder structure in the S3 bucket should look like the following:

ground-truth-data-labeling
|-- bounding_box
    |-- ground_truth_annots
    |-- images
    |-- yolo_annot_files
    |-- manifest.json

Creating the Ground Truth job

You’re now ready to create your Ground Truth job. You need to specify the job details and task type, and create your team of labelers and labeling task details. Then you can sign in to begin the labeling job.

Specifying the job details

To specify the job details, complete the following steps:

  1. On the Amazon SageMaker console, under Ground Truth, choose Labeling jobs.
  2. On the Labeling jobs page, choose Create labeling job.
  3. In the Job overview section, for Job name, enter yolo-bbox. This must match the ground_truth_job_name you defined in the input.json file earlier.
  4. Under Input Data Setup, select Manual Data Setup.
  5. For Input dataset location, enter s3://ground-truth-data-labeling/bounding_box/manifest.json.
  6. For Output dataset location, enter s3://ground-truth-data-labeling/bounding_box/ground_truth_annots.
  7. In the Create an IAM role section, select Create a new role from the drop-down menu, then select Specific S3 buckets.
  8. Enter ground-truth-data-labeling.
  9. Choose Create.

Specifying the task type

To specify the task type, complete the following steps:

  1. In the Task selection section, from the Task Category drop-down menu, choose Image.
  2. Select Bounding box.
  3. Leave Enable enhanced image access selected (the default). It enables Cross-Origin Resource Sharing (CORS), which some workers may need to complete the annotation task.
  4. Choose Next.

Creating a team of labelers

To create your team of labelers, complete the following steps:

  1. In the Workers section, select Private.
  2. Follow the instructions to create a new team.

Each member of the team receives a notification email titled “You’re invited to work on a labeling project” that contains initial sign-in credentials. For this use case, create a team with just yourself as a member.

Specifying labeling task details

In the Bounding box labeling tool section, you should see the images you uploaded to Amazon S3. If they don’t appear, check that the paths you entered in the previous steps are correct. To specify your task details, complete the following steps:

  1. In the text box, enter a brief description of the task.

This is critical if the data labeling team has more than one member and you want to make sure everyone follows the same rules when drawing the boxes. Any inconsistency in bounding box creation may end up confusing your object detection model. For example, if you’re labeling beverage cans and want a tight bounding box only around the visible logo instead of the entire can, you should specify that to get consistent labeling from all the workers. For this use case, you can enter “Please draw a tight bounding box around the entire object.”

  2. Optionally, you can upload examples of a good and a bad bounding box.

You can make sure your team is consistent in their labels by providing good and bad examples.

  3. Under Labels, enter the names of the labels you’re using to identify each bounding box; in this case, pencil and pen.

A color is assigned to each label automatically, which helps to visualize the boxes created for overlapping objects.

  4. To run a final sanity check, choose Preview.

  5. Choose Create job.

Job creation can take up to a few minutes. When it’s complete, you should see a job titled yolo-bbox on the Ground Truth Labeling jobs page with In progress as the status.

  6. To view the job details, select the job.

This is a good time to verify the paths are correct; the scripts don’t run if there’s any inconsistency in names.

For more information about providing labeling instructions, see Create high-quality instructions for Amazon SageMaker Ground Truth labeling jobs.

Sign in and start labeling

After you receive the initial credentials to register as a labeler for this job, follow the link to reset the password and start labeling.

If you need to interrupt your labeling session, you can resume labeling by choosing Labeling workforces under Ground Truth on the SageMaker console.

You can find the link to the labeling portal on the Private tab. The page also lists the teams and individuals involved in this private labeling task.

After you sign in, start labeling by choosing Start working.

Because you only have five images in the dataset to label, you can finish the entire task in a single session. For larger datasets, you can pause the task by choosing Stop working and return to the task later to finish it.

Checking job status

After the labeling is complete, the status of the labeling job changes to Complete, and a new JSON file called output.manifest containing the annotations appears at s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/manifests/output/output.manifest.
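
If you prefer to check the job status programmatically instead of watching the console, a minimal sketch with boto3 might look like the following (it assumes the job name yolo-bbox defined in input.json):

import boto3

# Hedged sketch: poll the Ground Truth labeling job status from code.
# Assumes the job name "yolo-bbox" defined earlier in input.json.
sm_client = boto3.client("sagemaker")
response = sm_client.describe_labeling_job(LabelingJobName="yolo-bbox")
print(response["LabelingJobStatus"])  # for example, InProgress or Completed
print(response["LabelCounters"])      # counts of labeled objects so far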

Parsing Ground Truth annotations

You can now parse the annotations and perform the necessary post-processing steps to make them ready for model training. Start by running the following code block:

from io import StringIO
import json
import s3fs
import boto3
import pandas as pd


def parse_gt_output(manifest_path, job_name):
    """
    Captures the json Ground Truth bounding box annotations
    into a pandas dataframe

    Input:
    manifest_path: S3 path to the annotation file
    job_name: name of the Ground Truth job

    Returns:
    df_bbox: pandas dataframe with bounding box coordinates
             for each item in every image
    """
    filesys = s3fs.S3FileSystem()
    with filesys.open(manifest_path) as fin:
        annot_list = []
        for line in fin.readlines():
            record = json.loads(line)
            if job_name in record.keys():  # skip records not labeled in this job
                image_file_path = record["source-ref"]
                image_file_name = image_file_path.split("/")[-1]
                class_maps = record[f"{job_name}-metadata"]["class-map"]

                imsize_list = record[job_name]["image_size"]
                assert len(imsize_list) == 1
                image_width = imsize_list[0]["width"]
                image_height = imsize_list[0]["height"]

                for annot in record[job_name]["annotations"]:
                    left = annot["left"]
                    top = annot["top"]
                    height = annot["height"]
                    width = annot["width"]
                    class_name = class_maps[f'{annot["class_id"]}']

                    annot_list.append(
                        [
                            image_file_name,
                            class_name,
                            left,
                            top,
                            height,
                            width,
                            image_width,
                            image_height,
                        ]
                    )

        df_bbox = pd.DataFrame(
            annot_list,
            columns=[
                "img_file",
                "category",
                "box_left",
                "box_top",
                "box_height",
                "box_width",
                "img_width",
                "img_height",
            ],
        )
    return df_bbox


def save_df_to_s3(df_local, s3_bucket, destination):
    """
    Saves a pandas dataframe to S3

    Input:
    df_local: Dataframe to save
    s3_bucket: Bucket name
    destination: Prefix
    """
    csv_buffer = StringIO()
    s3_resource = boto3.resource("s3")

    df_local.to_csv(csv_buffer, index=False)
    s3_resource.Object(s3_bucket, destination).put(Body=csv_buffer.getvalue())


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Parses the Ground Truth annotations and creates a dataframe
    3. Saves the dataframe to S3
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]
    gt_job_name = input_dict["ground_truth_job_name"]

    mani_path = f"s3://{s3_bucket}/{job_id}/ground_truth_annots/{gt_job_name}/manifests/output/output.manifest"

    df_annot = parse_gt_output(mani_path, gt_job_name)
    dest = f"{job_id}/ground_truth_annots/{gt_job_name}/annot.csv"
    save_df_to_s3(df_annot, s3_bucket, dest)


if __name__ == "__main__":
    main()

From the AWS CLI, save the preceding code block in the file parse_annot.py and run:

python parse_annot.py

Ground Truth returns the bounding box information using four numbers: the left and top coordinates of the box, and its height and width. The parse_gt_output procedure scans through the output.manifest file and stores the information for every bounding box of each image in a pandas dataframe. The save_df_to_s3 procedure saves it in a tabular format as annot.csv to the S3 bucket for further processing.

The creation of the dataframe is useful for a few reasons. JSON files are hard to read and the output.manifest file contains more information, like label metadata, than you need for the next step. The dataframe contains only the relevant information and you can visualize it easily to make sure everything looks fine.

To grab the annot.csv file from Amazon S3 and save a local copy, run the following:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/annot.csv .

You can read it back into a pandas dataframe and inspect the first few lines. See the following code:

import pandas as pd
df_ann = pd.read_csv('annot.csv')
df_ann.head()

The following screenshot shows the results.

You also capture the size of the image through img_width and img_height. This is necessary because the object detection models need to know the location of each bounding box within the image. In this case, you can see that images in the dataset were captured with a 4608×3456 pixel resolution.

There are quite a few reasons why it is a good idea to save the annotation information into a dataframe:

  • In a subsequent step, you need to rescale the bounding box coordinates into a YOLO-readable format. You can do this operation easily in a dataframe.
  • If you decide to capture and label more images in the future to augment the existing dataset, all you need to do is join the newly created dataframe with the existing one. Again, you can perform this easily using a dataframe.
  • As of this writing, Ground Truth doesn’t allow more than 30 distinct categories per labeling job through the console. If you have more categories in your dataset, you have to label them across multiple Ground Truth jobs and combine the results. Ground Truth associates each bounding box with an integer index in the output.manifest file, so the integer labels differ across jobs when you have more than 30 categories. Having the annotations as dataframes makes combining them easier and takes care of conflicts in category names across multiple jobs; a minimal sketch follows this list. In the preceding screenshot, you can see that the category column uses the actual names instead of the integer index.
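
For example, if a second labeling job produced another annotation file, you could combine it with the first along these lines (a minimal sketch; the file names are hypothetical):

import pandas as pd

# Hedged sketch: combine annotation CSVs produced by two separate Ground Truth jobs.
# The file names below are hypothetical. Because the category column holds names
# rather than integer indexes, concatenating the frames avoids label-index conflicts.
df_job1 = pd.read_csv("annot_job1.csv")
df_job2 = pd.read_csv("annot_job2.csv")

df_all = pd.concat([df_job1, df_job2], ignore_index=True)
df_all.to_csv("annot_combined.csv", index=False)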

Generating YOLO annotations

You’re now ready to reformat the bounding box coordinates Ground Truth provided into a format the YOLO model accepts.

In the YOLO format, each bounding box is described by the center coordinates of the box and its width and height. Each number is scaled by the dimensions of the image; therefore, they all range between 0 and 1. Instead of category names, YOLO models expect the corresponding integer categories.

Therefore, you need to map each name in the category column of the dataframe into a unique integer. Moreover, the official Darknet implementation of YOLOv3 needs to have the name of the image match the annotation text file name. For example, if the image file is pic01.jpg, the corresponding annotation file should be named pic01.txt.
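
Before looking at the full script, here is a quick worked conversion of a single bounding box (the numbers are hypothetical) into the normalized YOLO coordinates described above:

# Hypothetical numbers: convert one Ground Truth box to the YOLO representation.
img_width, img_height = 4608, 3456
box_left, box_top, box_width, box_height = 1000, 500, 600, 300

x_center = (box_left + box_width / 2) / img_width    # 1300 / 4608 ~ 0.2821
y_center = (box_top + box_height / 2) / img_height   # 650 / 3456 ~ 0.1881
w = box_width / img_width                            # 600 / 4608 ~ 0.1302
h = box_height / img_height                          # 300 / 3456 ~ 0.0868
print(f"{x_center:.4f} {y_center:.4f} {w:.4f} {h:.4f}")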

The following code block performs all these tasks:

import os
import json
from io import StringIO
import boto3
import s3fs
import pandas as pd


def annot_yolo(annot_file, cats):
    """
    Prepares the annotation in YOLO format

    Input:
    annot_file: csv file containing Ground Truth annotations
    cats: List of object categories in proper order for model training

    Returns:
    df_ann: pandas dataframe with the following columns
            img_file int_category box_center_w box_center_h
            box_width box_height

    Note:
    YOLO data format: <object-class> <x_center> <y_center> <width> <height>
    """
    df_ann = pd.read_csv(annot_file)

    df_ann["int_category"] = df_ann["category"].apply(lambda x: cats.index(x))
    df_ann["box_center_w"] = df_ann["box_left"] + df_ann["box_width"] / 2
    df_ann["box_center_h"] = df_ann["box_top"] + df_ann["box_height"] / 2

    # scale box dimensions by image dimensions
    df_ann["box_center_w"] = df_ann["box_center_w"] / df_ann["img_width"]
    df_ann["box_center_h"] = df_ann["box_center_h"] / df_ann["img_height"]
    df_ann["box_width"] = df_ann["box_width"] / df_ann["img_width"]
    df_ann["box_height"] = df_ann["box_height"] / df_ann["img_height"]

    return df_ann


def save_annots_to_s3(s3_bucket, prefix, df_local):
    """
    For every image in the dataset, save a text file with annotation in YOLO format

    Input:
    s3_bucket: S3 bucket name
    prefix: Folder name under s3_bucket where files will be written
    df_local: pandas dataframe with the following columns
              img_file int_category box_center_w box_center_h
              box_width box_height
    """
    unique_images = df_local["img_file"].unique()
    s3_resource = boto3.resource("s3")

    for image_file in unique_images:
        df_single_img_annots = df_local.loc[df_local.img_file == image_file]
        annot_txt_file = image_file.split(".")[0] + ".txt"
        destination = f"{prefix}/{annot_txt_file}"

        csv_buffer = StringIO()
        df_single_img_annots.to_csv(
            csv_buffer,
            index=False,
            header=False,
            sep=" ",
            float_format="%.4f",
            columns=[
                "int_category",
                "box_center_w",
                "box_center_h",
                "box_width",
                "box_height",
            ],
        )
        s3_resource.Object(s3_bucket, destination).put(Body=csv_buffer.getvalue())


def get_cats(json_file):
    """
    Makes a list of the category names in proper order

    Input:
    json_file: s3 path of the json file containing the category information

    Returns:
    cats: List of category names
    """
    filesys = s3fs.S3FileSystem()
    with filesys.open(json_file) as fin:
        line = fin.readline()
        record = json.loads(line)
        labels = [item["label"] for item in record["labels"]]

    return labels


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Collects the category names from the Ground Truth job
    3. Creates a dataframe with annotation in YOLO format
    4. Saves a text file in S3 with YOLO annotations for each of the labeled images
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]
    gt_job_name = input_dict["ground_truth_job_name"]
    yolo_output = input_dict["yolo_output_dir"]

    s3_path_cats = (
        f"s3://{s3_bucket}/{job_id}/ground_truth_annots/{gt_job_name}/annotation-tool/data.json"
    )
    categories = get_cats(s3_path_cats)
    print("\n labels used in Ground Truth job: ")
    print(categories, "\n")

    gt_annot_file = "annot.csv"
    s3_dir = f"{job_id}/{yolo_output}"
    print(f"annotation files saved in {s3_dir}")

    df_annot = annot_yolo(gt_annot_file, categories)
    save_annots_to_s3(s3_bucket, s3_dir, df_annot)


if __name__ == "__main__":
    main()

From the AWS CLI, save the preceding code block in a file create_annot.py and run:

python create_annot.py

The annot_yolo procedure transforms the dataframe you created by rescaling the box coordinates by the image size, and the save_annots_to_s3 procedure saves the annotations corresponding to each image into a text file and stores it in Amazon S3.
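
For instance, the annotation file for an image containing one pencil and one pen might hold two lines like the following (hypothetical values), using the <object-class> <x_center> <y_center> <width> <height> layout:

0 0.2821 0.1881 0.1302 0.0868
1 0.6510 0.4225 0.0977 0.3414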

You can now inspect a couple of images and their corresponding annotations to make sure they’re properly formatted for model training. However, you first need to write a procedure to draw YOLO formatted bounding boxes on an image. Save the following code block in visualize.py:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.colors as mcolors
import argparse


def visualize_bbox(img_file, yolo_ann_file, label_dict, figure_size=(6, 8)):
    """
    Plots bounding boxes on images

    Input:
    img_file : path to the image file
    yolo_ann_file: Text file containing annotations in YOLO format
    label_dict: Dictionary of image categories
    figure_size: Figure size
    """
    img = mpimg.imread(img_file)
    fig, ax = plt.subplots(1, 1, figsize=figure_size)
    ax.imshow(img)

    im_height, im_width, _ = img.shape

    palette = mcolors.TABLEAU_COLORS
    colors = [c for c in palette.keys()]
    with open(yolo_ann_file, "r") as fin:
        for line in fin:
            cat, center_w, center_h, width, height = line.split()
            cat = int(cat)
            category_name = label_dict[cat]
            left = (float(center_w) - float(width) / 2) * im_width
            top = (float(center_h) - float(height) / 2) * im_height
            width = float(width) * im_width
            height = float(height) * im_height

            rect = plt.Rectangle(
                (left, top),
                width,
                height,
                fill=False,
                linewidth=2,
                edgecolor=colors[cat],
            )
            ax.add_patch(rect)
            props = dict(boxstyle="round", facecolor=colors[cat], alpha=0.5)
            ax.text(
                left,
                top,
                category_name,
                fontsize=14,
                verticalalignment="top",
                bbox=props,
            )
    plt.show()


def main():
    """
    Plots bounding boxes
    """
    labels = {0: "pencil", 1: "pen"}

    parser = argparse.ArgumentParser()
    parser.add_argument("img", help="image file")
    args = parser.parse_args()
    img_file = args.img
    ann_file = img_file.split(".")[0] + ".txt"
    visualize_bbox(img_file, ann_file, labels, figure_size=(6, 8))


if __name__ == "__main__":
    main()

Download an image and the corresponding annotation file from Amazon S3. See the following code:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/yolo_annot_files/IMG_20200816_205004.txt .

aws s3 cp s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205004.jpg .

To display the correct label of each bounding box, you need to specify the names of the objects you labeled in a dictionary and pass it to visualize_bbox. For this use case, you only have two items in the list. However, the order of the labels is important; it should match the order you used while creating the Ground Truth labeling job. If you can’t remember the order, you can access the information from the s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/annotation-tool/data.json file in Amazon S3, which the Ground Truth job creates automatically.

The contents of the data.json file for this task look like the following code:

{"document-version":"2018-11-28","labels":[{"label":"pencil"},{"label":"pen"}]}

Therefore, the labels dictionary in visualize.py was defined as follows:

labels = {0: 'pencil', 1: 'pen'}
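
If you prefer not to hard-code the dictionary, you can build it from data.json directly, reusing the same s3fs approach as get_cats in create_annot.py. The following is a minimal sketch; the path assumes the bucket and job names used in this post:

import json
import s3fs

# Hedged sketch: build the labels dictionary from the data.json file that the
# Ground Truth job writes automatically. The path assumes the names used in this post.
DATA_JSON = (
    "s3://ground-truth-data-labeling/bounding_box/"
    "ground_truth_annots/yolo-bbox/annotation-tool/data.json"
)

filesys = s3fs.S3FileSystem()
with filesys.open(DATA_JSON) as fin:
    record = json.loads(fin.readline())

labels = {i: item["label"] for i, item in enumerate(record["labels"])}
print(labels)  # expected: {0: 'pencil', 1: 'pen'}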

Now run the following to visualize the image:

python visualize.py IMG_20200816_205004.jpg

The following screenshot shows the bounding boxes correctly drawn around two pens.

To plot an image with a mix of pens and pencils, get the image and the corresponding annotation text from Amazon S3. See the following code:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/yolo_annot_files/IMG_20200816_205029.txt .

aws s3 cp s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205029.jpg .

Override the default figure size in the visualize_bbox call to (10, 12) and run the following:

python visualize.py IMG_20200816_205029.jpg

The following screenshot shows three bounding boxes correctly drawn around two types of objects.

Conclusion

This post described how to create an efficient, end-to-end data-gathering pipeline in Amazon SageMaker Ground Truth for an object detection model. Try out this process yourself the next time you create an object detection model. You can modify the annotation post-processing to produce labeled data in the Pascal VOC format, which models like Faster RCNN require. You can also adapt the basic framework to other data-labeling pipelines with job-specific modifications. For example, you can rewrite the annotation post-processing procedures to adapt the framework for an instance segmentation task, in which an object is labeled at the pixel level instead of with a rectangle drawn around it. Amazon SageMaker Ground Truth is updated regularly with enhanced capabilities, so check the documentation for the most up-to-date features.


About the Author

Arkajyoti Misra is a Data Scientist working in AWS Professional Services. He loves to dig into Machine Learning algorithms and enjoys reading about new frontiers in Deep Learning.

Source: https://aws.amazon.com/blogs/machine-learning/streamlining-data-labeling-for-yolo-object-detection-in-amazon-sagemaker-ground-truth/


How does it know?! Some beginner chatbot tech for newbies.

Wouter S. Sligter

Most people will know by now what a chatbot or conversational AI is. But how does one design and build an intelligent chatbot? Let’s investigate some essential concepts in bot design: intents, context, flows and pages.

I like using Google’s Dialogflow platform for my intelligent assistants. Dialogflow has a very accurate NLP engine at a cost structure that is extremely competitive. In Dialogflow there are roughly two ways to build the bot tech. One is through intents and context, the other is by means of flows and pages. Both of these design approaches have their own version of Dialogflow: “ES” and “CX”.

Dialogflow ES is the older version of the Dialogflow platform which works with intents, context and entities. Slot filling and fulfillment also help manage the conversation flow. Here are Google’s docs on these concepts: https://cloud.google.com/dialogflow/es/docs/concepts

Context is what distinguishes ES from CX. It’s a way to understand where the conversation is headed. Here’s a diagram that may help you understand how context works. Each phrase that you type triggers an intent in Dialogflow. Each response by the bot happens after your message has triggered the most likely intent. It’s Dialogflow’s NLP engine that decides which intent best matches your message.

Wouter Sligter, 2020

What’s funny is that even though you typed ‘yes’ in exactly the same way twice, the bot gave you different answers. There are two intents that have been programmed to respond to ‘yes’, but only one of them is selected. This is how we control the flow of a conversation by using context in Dialogflow ES.
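
The mechanism can be sketched in a few lines of plain Python. This is only a toy illustration of the idea, not Dialogflow's actual API: an intent is a candidate only when its input context is currently active, so the same "yes" can resolve to different intents.

# Toy illustration of context-gated intent matching; not Dialogflow's API.
intents = [
    {"name": "confirm_order", "trigger": "yes",
     "input_context": "awaiting_order_confirmation",
     "response": "Great, your order is confirmed!"},
    {"name": "confirm_cancel", "trigger": "yes",
     "input_context": "awaiting_cancel_confirmation",
     "response": "OK, your order has been cancelled."},
]

def match_intent(message, active_context):
    # Only intents whose input context is active are considered.
    for intent in intents:
        if intent["trigger"] == message and intent["input_context"] == active_context:
            return intent["response"]
    return "Sorry, I didn't get that."

print(match_intent("yes", "awaiting_order_confirmation"))   # order confirmed
print(match_intent("yes", "awaiting_cancel_confirmation"))  # order cancelled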

Unfortunately, the way we program context into a bot on Dialogflow ES is not supported by any visual tools like the diagram above. Instead, we need to type this context into each intent without seeing the connection to other intents. This makes the creation of complex bots quite tedious, which is why we map out the design of our bots in other tools before we start building in ES.

The newer Dialogflow CX allows for a more advanced way of managing the conversation. By adding flows and pages as additional control tools we can now visualize and control conversations easily within the CX platform.

source: https://cloud.google.com/dialogflow/cx/docs/basics

This entire diagram is a ‘flow’ and the blue blocks are ‘pages’. This visualization shows how we create bots in Dialogflow CX. It’s immediately clear how the different pages are related and how the user will move between parts of the conversation. Visuals like this are completely absent in Dialogflow ES.

It then makes sense to use different flows for different conversation paths. A possible distinction in flows might be “ordering” (as seen here), “FAQs” and “promotions”. Structuring bots through flows and pages is a great way to handle complex bots and the visual UI in CX makes it even better.

At the time of writing (October 2020), Dialogflow CX only supports English NLP and its pricing model is surprisingly steep compared to ES. But bots are becoming critical tech for an increasing number of companies, and the cost reductions and improvements in conversation quality they bring are substantial. Building and managing bots is in many cases an ongoing task rather than a single, rounded-off project. For these reasons, it makes total sense to invest in a tool that can handle increasing complexity in an easy-to-use UI such as Dialogflow CX.

This article aims to give insight into the tech behind bot creation and Dialogflow is used merely as an example. To understand how I can help you build or manage your conversational assistant on the platform of your choice, please contact me on LinkedIn.

Source: https://chatbotslife.com/how-does-it-know-some-beginner-chatbot-tech-for-newbies-fa75ff59651f?source=rss—-a49517e4c30b—4

Who is chatbot Eliza?

Frédéric Pierron

Between 1964 and 1966, Eliza was born, one of the very first conversational agents. Its creator, Joseph Weizenbaum, was a researcher at the famous Artificial Intelligence Laboratory at MIT (Massachusetts Institute of Technology). His goal was to enable a conversation between a computer and a human user. More precisely, the program simulates a conversation with a Rogerian psychoanalyst, whose method consists of reformulating the patient’s words to let them explore their own thoughts.

Joseph Weizenbaum (Professor emeritus of computer science at MIT). Location: Balcony of his apartment in Berlin, Germany. By Ulrich Hansen, Germany (Journalist) / Wikipedia.

The program was rather rudimentary for its time. It works by recognizing keywords or expressions and displaying, in return, questions constructed from those keywords. When the program does not have an answer available, it displays an “I understand” that is quite effective, albeit laconic.
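
A toy sketch of that keyword-and-template idea (a loose illustration in Python, not Weizenbaum's original program) could look like this:

# Toy illustration of ELIZA-style keyword matching; not the original 1966 program.
RULES = {
    "mother": "Tell me more about your mother.",
    "dream": "What does that dream suggest to you?",
    "sad": "Why do you think you feel sad?",
}

def respond(message):
    for keyword, question in RULES.items():
        if keyword in message.lower():
            return question
    return "I understand."  # laconic fallback when no keyword matches

print(respond("I had a strange dream last night"))  # asks about the dream
print(respond("The weather is nice today"))         # "I understand."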

Weizenbaum explained that his primary intention was to show the superficiality of communication between a human and a machine. He was very surprised when he realized that many users were getting caught up in the game, completely forgetting that the program had no real intelligence and was devoid of any feelings or emotions. He even said that his secretary would discreetly consult Eliza to work through her own personal problems, eventually forcing the researcher to unplug the program.

Conversing with a computer while believing it is a human being is one of the criteria of Turing’s famous test. Artificial intelligence is said to exist when a human cannot discern whether or not the interlocutor is human. Eliza, in this sense, passed the test brilliantly according to its users.
Eliza thus opened the way (or the voice!) to what has been called chatbots, an abbreviation of chatterbot, itself an abbreviation of chatter robot, literally “talking robot.”

Source: https://chatbotslife.com/who-is-chatbot-eliza-bfeef79df804?source=rss—-a49517e4c30b—4

FermiNet: Quantum Physics and Chemistry from First Principles

We’ve developed a new neural network architecture, the Fermionic Neural Network or FermiNet, which is well-suited to modeling the quantum state of large collections of electrons, the fundamental building blocks of chemical bonds.

Unfortunately, a 0.5% error still isn’t accurate enough to be useful to the working chemist. The energy in molecular bonds is just a tiny fraction of the total energy of a system, and correctly predicting whether a molecule is stable can often depend on just 0.001% of the total energy of a system, or about 0.2% of the remaining “correlation” energy. For instance, while the total energy of the electrons in a butadiene molecule is almost 100,000 kilocalories per mole, the difference in energy between different possible shapes of the molecule is just 1 kilocalorie per mole. That means that to correctly predict butadiene’s natural shape, you need the same level of precision as measuring the width of a football field down to the millimeter.

With the advent of digital computing after World War II, scientists developed a whole menagerie of computational methods that went beyond this mean field description of electrons. While these methods come in a bewildering alphabet soup of abbreviations, they all generally fall somewhere on an axis that trades off accuracy with efficiency. At one extreme, there are methods that are essentially exact, but scale worse than exponentially with the number of electrons, making them impractical for all but the smallest molecules. At the other extreme are methods that scale linearly, but are not very accurate. These computational methods have had an enormous impact on the practice of chemistry – the 1998 Nobel Prize in chemistry was awarded to the originators of many of these algorithms.

Fermionic Neural Networks

Despite the breadth of existing computational quantum mechanical tools, we felt a new method was needed to address the problem of efficient representation. There’s a reason that the largest quantum chemical calculations only run into the tens of thousands of electrons for even the most approximate methods, while classical chemical calculation techniques like molecular dynamics can handle millions of atoms. The state of a classical system can be described easily – we just have to track the position and momentum of each particle. Representing the state of a quantum system is far more challenging. A probability has to be assigned to every possible configuration of electron positions. This is encoded in the wavefunction, which assigns a positive or negative number to every configuration of electrons, and the wavefunction squared gives the probability of finding the system in that configuration. The space of all possible configurations is enormous – if you tried to represent it as a grid with 100 points along each dimension, then the number of possible electron configurations for the silicon atom would be larger than the number of atoms in the universe!
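
To get a feel for that claim, here is the back-of-the-envelope arithmetic behind it (a rough sketch: silicon's 14 electrons each need 3 spatial coordinates, and the figure of roughly 10^80 atoms in the observable universe is a commonly quoted estimate, not a number from the article):

# Rough count of grid configurations for the 14 electrons of a silicon atom.
n_electrons = 14              # silicon has 14 electrons
dims = 3 * n_electrons        # 3 spatial coordinates per electron -> 42 dimensions
grid_points = 100             # 100 grid points along each dimension, as in the text

configurations = grid_points ** dims    # 100**42 = 1e84 possible configurations
atoms_in_universe = 1e80                # commonly quoted order-of-magnitude estimate
print(configurations > atoms_in_universe)  # True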

This is exactly where we thought deep neural networks could help. In the last several years, there have been huge advances in representing complex, high-dimensional probability distributions with neural networks. We now know how to train these networks efficiently and scalably. We surmised that, given these networks have already proven their mettle at fitting high-dimensional functions in artificial intelligence problems, maybe they could be used to represent quantum wavefunctions as well. We were not the first people to think of this – researchers such as Giuseppe Carleo and Matthias Troyer and others have shown how modern deep learning could be used for solving idealised quantum problems. We wanted to use deep neural networks to tackle more realistic problems in chemistry and condensed matter physics, and that meant including electrons in our calculations.

There is just one wrinkle when dealing with electrons. Electrons must obey the Pauli exclusion principle, which means that they can’t be in the same space at the same time. This is because electrons are a type of particle known as fermions, which include the building blocks of most matter – protons, neutrons, quarks, neutrinos, etc. Their wavefunction must be antisymmetric – if you swap the position of two electrons, the wavefunction gets multiplied by -1. That means that if two electrons are on top of each other, the wavefunction (and the probability of that configuration) will be zero.

This meant we had to develop a new type of neural network that was antisymmetric with respect to its inputs, which we have dubbed the Fermionic Neural Network, or FermiNet. In most quantum chemistry methods, antisymmetry is introduced using a function called the determinant. The determinant of a matrix has the property that if you swap two rows, the output gets multiplied by -1, just like a wavefunction for fermions. So you can take a bunch of single-electron functions, evaluate them for every electron in your system, and pack all of the results into one matrix. The determinant of that matrix is then a properly antisymmetric wavefunction. The major limitation of this approach is that the resulting function – known as a Slater determinant – is not very general. Wavefunctions of real systems are usually far more complicated. The typical way to improve on this is to take a large linear combination of Slater determinants – sometimes millions or more – and add some simple corrections based on pairs of electrons. Even then, this may not be enough to accurately compute energies.
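
A quick numerical check of that row-swap property (a generic numpy illustration, not the FermiNet code):

import numpy as np

# Swapping two rows of a matrix flips the sign of its determinant, which is why a
# determinant of single-electron functions yields an antisymmetric wavefunction.
# Generic numpy demo; not the FermiNet implementation.
rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 4))       # phi[i, j]: j-th single-electron function at electron i

swapped = phi.copy()
swapped[[0, 1]] = swapped[[1, 0]]   # exchange electrons 0 and 1 (swap two rows)

print(np.linalg.det(phi), np.linalg.det(swapped))  # equal magnitude, opposite sign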

Source: https://deepmind.com/blog/article/FermiNet
