Streamlining data labeling for YOLO object detection in Amazon SageMaker Ground Truth

Object detection is a common task in computer vision (CV), and the YOLOv3 model is state-of-the-art in terms of accuracy and speed. In transfer learning, you obtain a model trained on a large but generic dataset and retrain the model on your custom dataset. One of the most time-consuming parts in transfer learning is collecting and labeling image data to generate a custom training dataset. This post explores how to do this in Amazon SageMaker Ground Truth.

Ground Truth offers a comprehensive platform for annotating the most common data labeling jobs in CV: image classification, object detection, semantic segmentation, and instance segmentation. You can perform labeling using Amazon Mechanical Turk or create your own private team to label collaboratively. You can also use one of the third-party data labeling service providers listed on the AWS Marketplace. Ground Truth offers an intuitive interface that is easy to work with. You can communicate with labelers about specific needs for your particular task using examples and notes through the interface.

Labeling data is already hard work. Creating training data for a CV modeling task requires data collection and storage, setting up labeling jobs, and post-processing the labeled data. Moreover, not all object detection models expect the data in the same format. For example, the Faster RCNN model expects the data in the popular Pascal VOC format, which the YOLO models can’t work with. These associated steps are part of any machine learning pipeline for CV. You sometimes need to run the pipeline multiple times to improve the model incrementally. This post shows how to perform these steps efficiently by using Python scripts and get to model training as quickly as possible. This post uses the YOLO format for its use case, but the steps are mostly independent of the data format.
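For a concrete sense of the difference between the two formats: Pascal VOC stores absolute pixel corners (xmin, ymin, xmax, ymax) in a per-image XML file, whereas YOLO expects one plain-text line per object in the form <class> <x_center> <y_center> <width> <height>, with every value divided by the image dimensions. For example (made-up numbers), a box spanning (100, 200) to (300, 350) in a 640×600 image becomes the YOLO line 0 0.3125 0.4583 0.3125 0.2500.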

The image labeling step of a training data generation task is inherently manual. This post shows how to create a reusable framework to create training data for model building efficiently. Specifically, you can do the following:

  • Create the required directory structure in Amazon S3 before starting a Ground Truth job
  • Create a private team of annotators and start a Ground Truth job
  • Collect the annotations when labeling is complete and save them in a pandas dataframe
  • Post-process the dataset for model training

You can download the code presented in this post from this GitHub repo. This post demonstrates how to run the code from the AWS CLI on a local machine that can access an AWS account. For more information about setting up AWS CLI, see What Is the AWS Command Line Interface? Make sure that you configure it to access the S3 buckets in this post. Alternatively, you can run it in AWS Cloud9 or by spinning up an Amazon EC2 instance. You can also run the code blocks in an Amazon SageMaker notebook.

If you’re using an Amazon SageMaker notebook, you can still access the Linux shell of the underlying EC2 instance and follow along by opening a new terminal from the Jupyter main page and running the scripts from the /home/ec2-user/SageMaker folder.

Setting up your S3 bucket

The first thing you need to do is to upload the training images to an S3 bucket. Name the bucket ground-truth-data-labeling. You want each labeling task to have its own self-contained folder under this bucket. If you start labeling a small set of images that you keep in the first folder, but find that the model performed poorly after the first round because the data was insufficient, you can upload more images to a different folder under the same bucket and start another labeling task.
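If you haven't created the bucket yet, you can do so from the AWS CLI. Keep in mind that S3 bucket names are globally unique, so you may need a variant of this name; if so, adjust it in the rest of the commands in this post:

aws s3 mb s3://ground-truth-data-labeling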

For the first labeling task, create the folder bounding_box and the following three subfolders under it:

  • images – You upload all the images in the Ground Truth labeling job to this subfolder.
  • ground_truth_annots – This subfolder starts empty; the Ground Truth job populates it automatically, and you retrieve the final annotations from here.
  • yolo_annot_files – This subfolder also starts empty, but eventually holds the annotation files ready for model training. The script populates it automatically.

If your images have the .jpg extension and are available in the current working directory, you can upload them with the following code:

aws s3 sync . s3://ground-truth-data-labeling/bounding_box/images/ --exclude "*" --include "*.jpg" 

For this use case, you use five images. There are two types of objects in the images—pencil and pen. You need to draw bounding boxes around each object in the images. The following images are examples of what you need to label. All images are available in the GitHub repo.

Creating the manifest file

A Ground Truth job requires a manifest file in JSON format that contains the Amazon S3 paths of all the images to label. You need to create this file before you can start the first Ground Truth job. The format of this file is simple:

{"source-ref": < S3 path to image1 >}
{"source-ref": < S3 path to image2 >}
...

However, creating the manifest file by hand would be tedious for a large number of images. Therefore, you can automate the process by running a script. You first need to create a file holding the parameters required for the scripts. Create a file input.json in your local file system with the following content:

{ "s3_bucket":"ground-truth-data-labeling", "job_id":"bounding_box", "ground_truth_job_name":"yolo-bbox", "yolo_output_dir":"yolo_annot_files"
}

Save the following code block in a file called prep_gt_job.py:

import boto3
import json


def create_manifest(job_path):
    """
    Creates the manifest file for the Ground Truth job

    Input:
    job_path: Full path of the folder in S3 for GT job

    Returns:
    manifest_file: The manifest file required for GT job
    """
    s3_rec = boto3.resource("s3")
    s3_bucket = job_path.split("/")[0]
    prefix = job_path.replace(s3_bucket, "")[1:]
    image_folder = f"{prefix}/images"
    print(f"using images from ... {image_folder} \n")

    bucket = s3_rec.Bucket(s3_bucket)
    objs = list(bucket.objects.filter(Prefix=image_folder))
    img_files = objs[1:]  # first item is the folder name
    n_imgs = len(img_files)
    print(f"there are {n_imgs} images \n")

    TOKEN = "source-ref"
    manifest_file = "/tmp/manifest.json"
    with open(manifest_file, "w") as fout:
        for img_file in img_files:
            fname = f"s3://{s3_bucket}/{img_file.key}"
            fout.write(f'{{"{TOKEN}": "{fname}"}}\n')

    return manifest_file


def upload_manifest(job_path, manifest_file):
    """
    Uploads the manifest file into S3

    Input:
    job_path: Full path of the folder in S3 for GT job
    manifest_file: Path to the local copy of the manifest file
    """
    s3_rec = boto3.resource("s3")
    s3_bucket = job_path.split("/")[0]
    source = manifest_file.split("/")[-1]
    prefix = job_path.replace(s3_bucket, "")[1:]
    destination = f"{prefix}/{source}"
    print(f"uploading manifest file to {destination} \n")
    s3_rec.meta.client.upload_file(manifest_file, s3_bucket, destination)


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Collects image names from S3 and creates the manifest file for GT
    3. Uploads the manifest file to S3
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]

    gt_job_path = f"{s3_bucket}/{job_id}"
    man_file = create_manifest(gt_job_path)
    upload_manifest(gt_job_path, man_file)


if __name__ == "__main__":
    main()

Run the following script:

python prep_gt_job.py

This script reads the S3 bucket and job names from the input file, creates a list of images available in the images folder, creates the manifest.json file, and uploads the manifest file to the S3 bucket at s3://ground-truth-data-labeling/bounding_box/.
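For the images used later in this post, for example, the generated manifest contains lines such as:

{"source-ref": "s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205004.jpg"}
{"source-ref": "s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205029.jpg"}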

This method illustrates a programmatic control of the process, but you can also create the file from the Ground Truth API. For instructions, see Create a Manifest File.

At this point, the folder structure in the S3 bucket should look like the following:

ground-truth-data-labeling
|-- bounding_box
    |-- ground_truth_annots
    |-- images
    |-- yolo_annot_files
    |-- manifest.json

Creating the Ground Truth job

You’re now ready to create your Ground Truth job. You need to specify the job details and task type, and create your team of labelers and labeling task details. Then you can sign in to begin the labeling job.

Specifying the job details

To specify the job details, complete the following steps:

  1. On the Amazon SageMaker console, under Ground Truth, choose Labeling jobs.
  2. On the Labeling jobs page, choose Create labeling job.
  3. In the Job overview section, for Job name, enter yolo-bbox. This should be the name you defined in the input.json file earlier.
  4. Select Manual Data Setup under Input Data Setup.
  5. For Input dataset location, enter s3://ground-truth-data-labeling/bounding_box/manifest.json.
  6. For Output dataset location, enter s3://ground-truth-data-labeling/bounding_box/ground_truth_annots.
  7. In the Create an IAM role section, select Create a new role from the drop-down menu and then select Specific S3 buckets.
  8. Enter ground-truth-data-labeling.
  9. Choose Create.

Specifying the task type

To specify the task type, complete the following steps:

  1. In the Task selection section, from the Task category drop-down menu, choose Image.
  2. Select Bounding box.
  3. Don't change Enable enhanced image access, which is selected by default. It enables Cross-Origin Resource Sharing (CORS), which may be required for some workers to complete the annotation task.
  4. Choose Next.

Creating a team of labelers

To create your team of labelers, complete the following steps:

  1. In the Workers section, select Private.
  2. Follow the instructions to create a new team.

Each member of the team receives a notification email titled "You're invited to work on a labeling project" with initial sign-in credentials. For this use case, create a team with just yourself as a member.

Specifying labeling task details

In the Bounding box labeling tool section, you should see the images you uploaded to Amazon S3; if you don't, check that the paths you entered in the previous steps are correct. To specify your task details, complete the following steps:

  1. In the text box, enter a brief description of the task.

This is critical if the data labeling team has more than one member and you want to make sure everyone follows the same rules when drawing the boxes. Any inconsistency in bounding box creation may confuse your object detection model. For example, if you're labeling beverage cans and want a tight bounding box around only the visible logo instead of the entire can, you should specify that to get consistent labeling from all the workers. For this use case, you can enter Please enter a tight bounding box around the entire object.

  2. Optionally, you can upload examples of a good and a bad bounding box.

You can make sure your team is consistent in their labels by providing good and bad examples.

  3. Under Labels, enter the names of the labels you're using to identify each bounding box; in this case, pencil and pen.

A color is assigned to each label automatically, which helps to visualize the boxes created for overlapping objects.

  4. To run a final sanity check, choose Preview.

  5. Choose Create job.

Job creation can take up to a few minutes. When it’s complete, you should see a job titled yolo-bbox on the Ground Truth Labeling jobs page with In progress as the status.

  6. To view the job details, select the job.

This is a good time to verify the paths are correct; the scripts don’t run if there’s any inconsistency in names.

For more information about providing labeling instructions, see Create high-quality instructions for Amazon SageMaker Ground Truth labeling jobs.

Sign in and start labeling

After you receive the initial credentials to register as a labeler for this job, follow the link to reset the password and start labeling.

If you need to interrupt your labeling session, you can resume labeling by choosing Labeling workforces under Ground Truth on the SageMaker console.

You can find the link to the labeling portal on the Private tab. The page also lists the teams and individuals involved in this private labeling task.

After you sign in, start labeling by choosing Start working.

Because you only have five images in the dataset to label, you can finish the entire task in a single session. For larger datasets, you can pause the task by choosing Stop working and return to the task later to finish it.

Checking job status

After the labeling is complete, the status of the labeling job changes to Complete, and a new JSON file called output.manifest containing the annotations appears at s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/manifests/output/output.manifest.

Parsing Ground Truth annotations

You can now parse the annotations and perform the post-processing steps needed to make them ready for model training. Start by running the following code block:

from io import StringIO
import json
import s3fs
import boto3
import pandas as pd


def parse_gt_output(manifest_path, job_name):
    """
    Captures the json Ground Truth bounding box annotations
    into a pandas dataframe

    Input:
    manifest_path: S3 path to the annotation file
    job_name: name of the Ground Truth job

    Returns:
    df_bbox: pandas dataframe with bounding box coordinates
             for each item in every image
    """
    filesys = s3fs.S3FileSystem()
    with filesys.open(manifest_path) as fin:
        annot_list = []
        for line in fin.readlines():
            record = json.loads(line)
            # skip records that don't contain annotations from this job
            if job_name in record.keys():
                image_file_path = record["source-ref"]
                image_file_name = image_file_path.split("/")[-1]
                class_maps = record[f"{job_name}-metadata"]["class-map"]

                imsize_list = record[job_name]["image_size"]
                assert len(imsize_list) == 1
                image_width = imsize_list[0]["width"]
                image_height = imsize_list[0]["height"]

                for annot in record[job_name]["annotations"]:
                    left = annot["left"]
                    top = annot["top"]
                    height = annot["height"]
                    width = annot["width"]
                    class_name = class_maps[f'{annot["class_id"]}']

                    annot_list.append(
                        [
                            image_file_name,
                            class_name,
                            left,
                            top,
                            height,
                            width,
                            image_width,
                            image_height,
                        ]
                    )

        df_bbox = pd.DataFrame(
            annot_list,
            columns=[
                "img_file",
                "category",
                "box_left",
                "box_top",
                "box_height",
                "box_width",
                "img_width",
                "img_height",
            ],
        )
    return df_bbox


def save_df_to_s3(df_local, s3_bucket, destination):
    """
    Saves a pandas dataframe to S3

    Input:
    df_local: Dataframe to save
    s3_bucket: Bucket name
    destination: Prefix
    """
    csv_buffer = StringIO()
    s3_resource = boto3.resource("s3")

    df_local.to_csv(csv_buffer, index=False)
    s3_resource.Object(s3_bucket, destination).put(Body=csv_buffer.getvalue())


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Parses the Ground Truth annotations and creates a dataframe
    3. Saves the dataframe to S3
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]
    gt_job_name = input_dict["ground_truth_job_name"]

    mani_path = (
        f"s3://{s3_bucket}/{job_id}/ground_truth_annots/"
        f"{gt_job_name}/manifests/output/output.manifest"
    )

    df_annot = parse_gt_output(mani_path, gt_job_name)
    dest = f"{job_id}/ground_truth_annots/{gt_job_name}/annot.csv"
    save_df_to_s3(df_annot, s3_bucket, dest)


if __name__ == "__main__":
    main()

From the AWS CLI, save the preceding code block in the file parse_annot.py and run:

python parse_annot.py

Ground Truth returns the bounding box information as four numbers: the x and y coordinates of the box's top-left corner, and its height and width. The procedure parse_gt_output scans through the output.manifest file and stores the information for every bounding box in each image in a pandas dataframe. The procedure save_df_to_s3 saves it in tabular form as annot.csv to the S3 bucket for further processing.
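Each line of output.manifest is a self-contained JSON record. An abridged, illustrative example of what parse_gt_output reads is shown below (wrapped here for readability; the numeric values are invented and some metadata fields are omitted):

{"source-ref": "s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205004.jpg",
 "yolo-bbox": {"image_size": [{"width": 4608, "height": 3456, "depth": 3}],
               "annotations": [{"class_id": 0, "left": 1080, "top": 540, "width": 460, "height": 346}]},
 "yolo-bbox-metadata": {"class-map": {"0": "pencil", "1": "pen"}, "type": "groundtruth/object-detection"}}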

The creation of the dataframe is useful for a few reasons. JSON files are hard to read and the output.manifest file contains more information, like label metadata, than you need for the next step. The dataframe contains only the relevant information and you can visualize it easily to make sure everything looks fine.

To grab the annot.csv file from Amazon S3 and save a local copy, run the following:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/annot.csv .

You can read it back into a pandas dataframe and inspect the first few lines. See the following code:

import pandas as pd
df_ann = pd.read_csv('annot.csv')
df_ann.head()

The following screenshot shows the results.

You also capture the size of the image through img_width and img_height. This is necessary because you need the image dimensions to rescale the bounding box coordinates for the model. In this case, you can see that images in the dataset were captured at a 4608×3456 pixel resolution.

There are quite a few reasons why it is a good idea to save the annotation information into a dataframe:

  • In a subsequent step, you need to rescale the bounding box coordinates into a YOLO-readable format. You can do this operation easily in a dataframe.
  • If you decide to capture and label more images in the future to augment the existing dataset, all you need to do is join the newly created dataframe with the existing one. Again, you can perform this easily using a dataframe.
  • As of this writing, Ground Truth doesn't allow more than 30 different label categories per job through the console. If you have more categories in your dataset, you have to label them across multiple Ground Truth jobs and combine them. Ground Truth associates each bounding box with an integer index in the output.manifest file, so the integer labels differ across multiple Ground Truth jobs when you have more than 30 categories. Having the annotations as dataframes makes combining them easier and resolves the conflict of category names across multiple jobs, as the sketch after this list shows. In the preceding screenshot, the category column holds the actual names instead of the integer index.
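As a minimal sketch of that combination step, assuming you have downloaded local copies of the annot.csv files from two hypothetical jobs (the file names here are made up):

import pandas as pd

# assumed local copies of the annot.csv files from two separate jobs
df_job1 = pd.read_csv("annot_job1.csv")
df_job2 = pd.read_csv("annot_job2.csv")

# the category column holds names, not integer indexes,
# so stacking the two dataframes cannot mix up labels
df_all = pd.concat([df_job1, df_job2], ignore_index=True)
df_all.to_csv("annot_combined.csv", index=False)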

Generating YOLO annotations

You’re now ready to reformat the bounding box coordinates Ground Truth provided into a format the YOLO model accepts.

In the YOLO format, each bounding box is described by the center coordinates of the box and its width and height. Each number is scaled by the dimensions of the image; therefore, they all range between 0 and 1. Instead of category names, YOLO models expect the corresponding integer categories.

Therefore, you need to map each name in the category column of the dataframe to a unique integer. Moreover, the official Darknet implementation of YOLOv3 expects the annotation text file to have the same base name as the image. For example, if the image file is pic01.jpg, the corresponding annotation file should be named pic01.txt.
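As a quick worked example with made-up numbers: for a 4608×3456 image with a pencil (class 0) in a box at left=1080, top=540, width=460, height=346, the YOLO line is computed as:

x_center = (1080 + 460/2) / 4608 = 0.2843
y_center = (540 + 346/2) / 3456  = 0.2063
width    = 460 / 4608            = 0.0998
height   = 346 / 3456            = 0.1001

which yields the annotation line: 0 0.2843 0.2063 0.0998 0.1001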

The following code block performs all these tasks:

import os
import json
from io import StringIO
import boto3
import s3fs
import pandas as pd


def annot_yolo(annot_file, cats):
    """
    Prepares the annotation in YOLO format

    Input:
    annot_file: csv file containing Ground Truth annotations
    cats: List of object categories in proper order for model training

    Returns:
    df_ann: pandas dataframe with the following columns
            img_file int_category box_center_w box_center_h
            box_width box_height

    Note:
    YOLO data format: <object-class> <x_center> <y_center> <width> <height>
    """
    df_ann = pd.read_csv(annot_file)

    df_ann["int_category"] = df_ann["category"].apply(lambda x: cats.index(x))
    df_ann["box_center_w"] = df_ann["box_left"] + df_ann["box_width"] / 2
    df_ann["box_center_h"] = df_ann["box_top"] + df_ann["box_height"] / 2

    # scale box dimensions by image dimensions
    df_ann["box_center_w"] = df_ann["box_center_w"] / df_ann["img_width"]
    df_ann["box_center_h"] = df_ann["box_center_h"] / df_ann["img_height"]
    df_ann["box_width"] = df_ann["box_width"] / df_ann["img_width"]
    df_ann["box_height"] = df_ann["box_height"] / df_ann["img_height"]

    return df_ann


def save_annots_to_s3(s3_bucket, prefix, df_local):
    """
    For every image in the dataset, save a text file with
    annotations in YOLO format

    Input:
    s3_bucket: S3 bucket name
    prefix: Folder name under s3_bucket where files will be written
    df_local: pandas dataframe with the following columns
              img_file int_category box_center_w box_center_h
              box_width box_height
    """
    unique_images = df_local["img_file"].unique()
    s3_resource = boto3.resource("s3")

    for image_file in unique_images:
        df_single_img_annots = df_local.loc[df_local.img_file == image_file]
        annot_txt_file = image_file.split(".")[0] + ".txt"
        destination = f"{prefix}/{annot_txt_file}"

        csv_buffer = StringIO()
        df_single_img_annots.to_csv(
            csv_buffer,
            index=False,
            header=False,
            sep=" ",
            float_format="%.4f",
            columns=[
                "int_category",
                "box_center_w",
                "box_center_h",
                "box_width",
                "box_height",
            ],
        )
        s3_resource.Object(s3_bucket, destination).put(Body=csv_buffer.getvalue())


def get_cats(json_file):
    """
    Makes a list of the category names in proper order

    Input:
    json_file: s3 path of the json file containing the category information

    Returns:
    cats: List of category names
    """
    filesys = s3fs.S3FileSystem()
    with filesys.open(json_file) as fin:
        line = fin.readline()
        record = json.loads(line)
        labels = [item["label"] for item in record["labels"]]
    return labels


def main():
    """
    Performs the following tasks:
    1. Reads input from 'input.json'
    2. Collects the category names from the Ground Truth job
    3. Creates a dataframe with annotations in YOLO format
    4. Saves a text file in S3 with YOLO annotations
       for each of the labeled images
    """
    with open("input.json") as fjson:
        input_dict = json.load(fjson)

    s3_bucket = input_dict["s3_bucket"]
    job_id = input_dict["job_id"]
    gt_job_name = input_dict["ground_truth_job_name"]
    yolo_output = input_dict["yolo_output_dir"]

    s3_path_cats = (
        f"s3://{s3_bucket}/{job_id}/ground_truth_annots/"
        f"{gt_job_name}/annotation-tool/data.json"
    )
    categories = get_cats(s3_path_cats)
    print("\n labels used in Ground Truth job: ")
    print(categories, "\n")

    gt_annot_file = "annot.csv"
    s3_dir = f"{job_id}/{yolo_output}"
    print(f"annotation files saved in {s3_dir}")

    df_annot = annot_yolo(gt_annot_file, categories)
    save_annots_to_s3(s3_bucket, s3_dir, df_annot)


if __name__ == "__main__":
    main()

From the AWS CLI, save the preceding code block in a file create_annot.py and run:

python create_annot.py

The annot_yolo procedure transforms the dataframe you created by converting box corners to box centers, mapping category names to integer indexes, and rescaling the coordinates by the image size. The save_annots_to_s3 procedure saves the annotations corresponding to each image into a text file and stores it in Amazon S3.
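For instance, after this step, the text file for an image containing one pencil (class 0) and one pen (class 1) would hold two lines of the following form (the numbers here are illustrative, not taken from the actual dataset):

0 0.2843 0.2063 0.0998 0.1001
1 0.6551 0.7210 0.1203 0.0875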

You can now inspect a couple of images and their corresponding annotations to make sure they’re properly formatted for model training. However, you first need to write a procedure to draw YOLO formatted bounding boxes on an image. Save the following code block in visualize.py:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.colors as mcolors
import argparse


def visualize_bbox(img_file, yolo_ann_file, label_dict, figure_size=(6, 8)):
    """
    Plots bounding boxes on images

    Input:
    img_file: path to the image file
    yolo_ann_file: Text file containing annotations in YOLO format
    label_dict: Dictionary of image categories
    figure_size: Figure size
    """
    img = mpimg.imread(img_file)
    fig, ax = plt.subplots(1, 1, figsize=figure_size)
    ax.imshow(img)

    im_height, im_width, _ = img.shape

    palette = mcolors.TABLEAU_COLORS
    colors = [c for c in palette.keys()]
    with open(yolo_ann_file, "r") as fin:
        for line in fin:
            cat, center_w, center_h, width, height = line.split()
            cat = int(cat)
            category_name = label_dict[cat]
            left = (float(center_w) - float(width) / 2) * im_width
            top = (float(center_h) - float(height) / 2) * im_height
            width = float(width) * im_width
            height = float(height) * im_height

            rect = plt.Rectangle(
                (left, top),
                width,
                height,
                fill=False,
                linewidth=2,
                edgecolor=colors[cat],
            )
            ax.add_patch(rect)
            props = dict(boxstyle="round", facecolor=colors[cat], alpha=0.5)
            ax.text(
                left,
                top,
                category_name,
                fontsize=14,
                verticalalignment="top",
                bbox=props,
            )
    plt.show()


def main():
    """
    Plots bounding boxes
    """
    # label order must match the Ground Truth job (see data.json)
    labels = {0: "pencil", 1: "pen"}

    parser = argparse.ArgumentParser()
    parser.add_argument("img", help="image file")
    args = parser.parse_args()
    img_file = args.img
    ann_file = img_file.split(".")[0] + ".txt"
    visualize_bbox(img_file, ann_file, labels, figure_size=(6, 8))


if __name__ == "__main__":
    main()

Download an image and the corresponding annotation file from Amazon S3. See the following code:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/yolo_annot_files/IMG_20200816_205004.txt .

aws s3 cp s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205004.jpg .

To display the correct label of each bounding box, you need to specify the names of the objects you labeled in a dictionary and pass it to visualize_bbox. For this use case, you only have two items in the list. However, the order of the labels is important: it should match the order you used while creating the Ground Truth labeling job. If you can't remember the order, you can access it from the s3://ground-truth-data-labeling/bounding_box/ground_truth_annots/yolo-bbox/annotation-tool/data.json file in Amazon S3, which the Ground Truth job creates automatically.

The contents of the data.json file for this task look like the following code:

{"document-version":"2018-11-28","labels":[{"label":"pencil"},{"label":"pen"}]}

Therefore, the labels dictionary in visualize.py is defined as follows:

labels = {0: 'pencil', 1: 'pen'}

Now run the following to visualize the image:

python visualize.py IMG_20200816_205004.jpg

The following screenshot shows the bounding boxes correctly drawn around two pens.

To plot an image with a mix of pens and pencils, get the image and the corresponding annotation text from Amazon S3. See the following code:

aws s3 cp s3://ground-truth-data-labeling/bounding_box/yolo_annot_files/IMG_20200816_205029.txt .

aws s3 cp s3://ground-truth-data-labeling/bounding_box/images/IMG_20200816_205029.jpg .

Override the default image size in the visualize_bbox procedure to (10, 12) and run the following:

python visualize.py IMG_20200816_205029.jpg

The following screenshot shows three bounding boxes correctly drawn around two types of objects.

Conclusion

This post described how to create an efficient, end-to-end data-gathering pipeline in Amazon SageMaker Ground Truth for an object detection model. Try out this process yourself next time you are creating an object detection model. You can modify the post-processing step to produce labeled data in the Pascal VOC format, which is required for models like Faster RCNN. You can also adapt the basic framework to other data-labeling pipelines with job-specific modifications. For example, you can rewrite the annotation post-processing procedures to adapt the framework for an instance segmentation task, in which an object is labeled at the pixel level instead of with a rectangle drawn around it. Amazon SageMaker Ground Truth is regularly updated with enhanced capabilities, so check the documentation for the most up-to-date features.


About the Author

Arkajyoti Misra is a Data Scientist working in AWS Professional Services. He loves to dig into Machine Learning algorithms and enjoys reading about new frontiers in Deep Learning.

Source: https://aws.amazon.com/blogs/machine-learning/streamlining-data-labeling-for-yolo-object-detection-in-amazon-sagemaker-ground-truth/

Things to Know about Free Form Templates

A single file that includes numerous supporting files is commonly known as a form template. Some files will define or show the controls to appear on the free form templates or design. The collections of these supporting files or templates are also called form files. While designing free form templates, users should be able to view and also work with the form files. 

Creating a new free form template copies and stores those files within a folder. A form template is a single (.XSN) file that packages these various supporting files. Users fill out the online form by accessing an .XML form file, which is based on the form template.

Designing Free Form Templates

Free form template design involves several decisions, as follows:

  • Designing the form's appearance: the instructional text, labels, and controls
  • Defining how controls respond to user interaction on the form template; for example, you can design a specific section to appear or disappear when the user chooses a particular option
  • Deciding whether the form template needs additional views; for a permit application form, for example, you might provide a different view for each role: one for the electrical contractor, one for the receiving agent, and one for the investigator who approves or denies the application
  • Deciding how and where to store the form data; users can submit their data to a database, either online or through direct access, or store it in a specific shared folder
  • Designing the other elements, colors, and fonts within the form template
  • Letting users personalize the form, such as adding rows to an optional section, repeating section, or repeating table
  • Notifying users when they skip a mandatory field or make a mistake within the form
  • Publishing the completed design online in the .XSN file format

Club Signup Form

A simple registration form template can make the process of creating a club signup form go smoothly. This signup form could be an ideal solution for new club membership registration for any organization or club.

Application Form

Application form templates are much easier to use and set up to streamline your application process. You can customize this online form and use it for numerous applications: a job application form, volunteer applications, contest entries, or high school scholarship applications. It is an ideal solution for scholarship programs, nonprofit organizations, business owners, and many other users and use cases.

Scheduling Form

Scheduling form templates are handy for numerous appointment booking requirements, as well as online reservations and other booking purposes. Regardless of your business requirement, it is easy to customize the form template.

Concept Testing Survey

While testing a new design or concept, it is essential to gather responses quickly. Free form templates for a concept testing survey make it much easier to gather product feedback and reach the target audience. Market research is essential when planning to release a new product, and a mobile-friendly survey form allows you to collect consumer input on the product quickly.

Credit Card Order Form

Providing an online credit card payment form for customers doesn't have to be a complex process. This form template allows you to collect card payment information for numerous services or products through a simple, seamless payment form.

Employment Application Form

An employment application form for recruitment helps the HR team gather the required information from candidates and can eliminate expensive follow-ups during the interview or application process. Typical fields include contact information, employment history, an outline of the job description, consent for background checks, military service record, anticipated start date, and any special skills. Optionally, form owners can enable notifications to receive an alert or email when a new employment application is submitted.

Source: https://1reddrop.com/2020/10/24/things-to-know-about-free-form-templates/?utm_source=rss&utm_medium=rss&utm_campaign=things-to-know-about-free-form-templates

Are Chatbots Vulnerable? Best Practices to Ensure Chatbots Security

Rebecca James

The simple answer is yes! Chatbots are vulnerable. Specific threats and vulnerabilities put chatbot security at risk and can make chatbots a poor choice for sensitive use. With the advancement in technology, hackers can now easily target the hidden infrastructure of a chatbot.

A chatbot's framework gives attackers an opening to inject malicious code or commands that might unlock the secured data of your customers and your business. However, the extent of an attack's complexity and success depends on the messaging platform's security.

Wondering how chatbots are exposed to attacks? Hackers attack chatbots in two ways: social engineering attacks and technical attacks.

  • In a social engineering attack, a malicious bot impersonates a legitimate user by using background data on the targeted victims, collected from sources such as the dark web and social media platforms. Sometimes attackers use both sources, or gain access to another user's data through a bot that provides such services.
  • The second type of attack is technical. Here, attackers deploy malicious bots that exchange messages with other bots, looking for vulnerabilities in the target's profile that can later be exploited. This can eventually compromise the entire framework that protects the data and ultimately lead to data theft.

To ensure chatbot security, bot creators must make sure that all security processes are in place and take responsibility for securing the architecture. The data flowing through the chatbot system should also be encrypted both in transit and at rest.

To further aid you in chatbot security, this article discusses five best practices to ensure chatbots security. So, let’s read on.

The following are the best practices to ensure the security of chatbots.

1. Implement end-to-end encryption

Data in transit can be spoofed or tampered with, given the sophistication of cybercriminals' tools. Implementing end-to-end encryption ensures that your entire conversation remains secure: no one other than the sender and the receiver can peep into your messages.

The importance of encryption can't be neglected in the cyber world, and chatbot designers are adopting it to keep chatbot security on point. For more robust protection, consider using business VPNs that encrypt your internet traffic and messages. A VPN also helps prevent the threats and vulnerabilities associated with chatbots.

Moreover, encryption is a crucial feature of chat services like WhatsApp, whose developers are keen to guarantee security even under strict government surveillance. Such encryption also fulfills the legal principles of the GDPR, which says that companies should adopt measures to encrypt users' data.

2. Authenticate users and time out sessions

User identity authentication is a process that verifies that the user has secure and valid credentials, such as a username and password. The login credentials are exchanged for a secure authentication token used during the complete user session. If you haven't already, you should try out this method for boosting user security.

Authentication timeouts are another way to secure your chatbot. This method is common in banking, where a token can be used only for a predetermined time.

Moreover, two-factor authentication is yet another method to prove user identity: users are asked to verify their identity via a text message or email, depending on the channel they've chosen. It also helps the authorization process, as it permits access to the right person and ensures that information isn't mishandled or breached.
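As a minimal sketch of a time-limited session token, using only the Python standard library (the secret key, token lifetime, and user ID are illustrative; a production chatbot would use a vetted library and proper key management):

import base64
import hashlib
import hmac
import time
from typing import Optional

SECRET_KEY = b"replace-with-a-real-secret"  # illustrative; keep real keys out of code
TOKEN_LIFETIME_SECONDS = 600                # assumed 10-minute session timeout


def issue_token(user_id: str) -> str:
    """Issue a signed token that carries its own expiry timestamp."""
    expiry = str(int(time.time()) + TOKEN_LIFETIME_SECONDS)
    payload = f"{user_id}|{expiry}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload + b"|" + signature.encode()).decode()


def verify_token(token: str) -> Optional[str]:
    """Return the user ID if the token is authentic and unexpired, else None."""
    try:
        decoded = base64.urlsafe_b64decode(token).decode()
        user_id, expiry, signature = decoded.rsplit("|", 2)
    except ValueError:
        return None
    payload = f"{user_id}|{expiry}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # signature mismatch: token was tampered with
    if time.time() > int(expiry):
        return None  # authentication timeout: token expired
    return user_id


token = issue_token("alice")
assert verify_token(token) == "alice"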

3. Use self-destructing messages

Self-destructing messages open another way to enhance chatbot security. This option comes in handy when the user provides personally identifiable information. Such information can pose a serious threat to user privacy and should be destroyed or deleted within a set period. This method is even handier when your chatbot handles banking or other financial tasks.
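A minimal sketch of the idea, again in standard-library Python only (the retention period is an assumption; a real system would persist and purge messages server-side):

import time


class SelfDestructingStore:
    """Holds messages only for a fixed time-to-live, then drops them."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._messages = {}  # message_id -> (expiry_timestamp, text)

    def put(self, message_id: str, text: str) -> None:
        self._messages[message_id] = (time.time() + self.ttl, text)

    def get(self, message_id: str):
        """Return the message text, or None if it has expired or never existed."""
        self._purge()
        entry = self._messages.get(message_id)
        return entry[1] if entry else None

    def _purge(self) -> None:
        now = time.time()
        expired = [mid for mid, (exp, _) in self._messages.items() if exp < now]
        for mid in expired:
            del self._messages[mid]


store = SelfDestructingStore(ttl_seconds=1.0)
store.put("msg-1", "card ending in 4242")
time.sleep(1.1)
assert store.get("msg-1") is None  # the PII is gone after the TTL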

4. Use secure protocols

Using secure protocols is another way to ensure chatbot security. Most web services use the HTTPS protocol by default, and even if you aren't an IT specialist, you can identify it in the URL in the search bar. As long as your data is transferred via HTTPS and encrypted connections (TLS/SSL), it is protected from vulnerabilities and many types of cyberattack.

Thus, make sure to use secure protocols for enhanced security. Remember that although chatbots are new, the code and systems used to protect them are the same as those behind existing instant-messaging platforms, which interconnect with their security systems and use more than one encryption layer to protect their users.

5. Educate employees and users

Do you know the most significant security vulnerability, and the one that's hardest to combat? It's none other than human error. User behavior in commercial applications has to be addressed, or people will continue to believe that the systems are flawed.

No doubt an unprecedented number of users recognize the significance of digital security, but humans remain the most vulnerable part of the system. Chatbot security will continue to be a big problem until user error is addressed, and that requires education on various forms of digital technology, including chatbots.

Customers aren't the only ones to blame here; employees make mistakes too, and often. To prevent this, chatbot developers should form a defined strategy, include IT experts, and train employees in the system's safe use. Doing so enhances the team's skill set and allows them to engage with the chatbot system confidently.

Clients, however, can't be trained like employees. But you can at least provide them a detailed road map for interacting with the system securely. This might involve professionals who can engage customers and educate them on the right way to interact with the chatbots.

Several emerging technologies will play a vital role in protecting chatbots against threats and vulnerabilities in the coming years, the most potent among them being behavioral analytics and developments in Artificial Intelligence.

  • User behavioral analytics: This process uses applications to study patterns of user behavior, applying complex algorithms and statistical analysis to detect abnormal behavior that may represent a security threat. Analytical tools are common and powerful, so this methodology can become a fundamental component of the chatbot system.
  • Developments in AI: Artificial Intelligence is a double-edged sword that offers benefits and threats simultaneously. But as AI fulfills its potential, it will provide an extra layer of security, mainly because of its ability to sweep large amounts of data for abnormalities that reveal security breaches and threats.

The Bottom Line

New technologies have always brought security concerns, threats, and vulnerabilities with them. Although chatbots are an emerging technology, the security practices behind them have been around for a long time and are effective. Chatbots are an innovative development of the current era, and emerging technologies like AI will transform how businesses interact with customers while ensuring their security.

Source: https://chatbotslife.com/are-chatbots-vulnerable-best-practices-to-ensure-chatbots-security-d301b9f6ce17?source=rss—-a49517e4c30b—4

Best Technology Stacks For Mobile App Development

What’s the Best Tech Stack for Mobile App Development? Read To Know

Which is the Best Tech Stack for Mobile Application Development? Kotlin, React Native, Ionic, Xamarin, Objective-C, Swift, JAVA… Which One?


A technology stack is to a smartphone what blood is to the human body. Without a technology stack, it is hard even to imagine smartphones. The number of smartphones in people's hands is rising exponentially. For tech pundits, this is one unmissable aspect of our digital experience, wherein the tech stack is as critical as ROI.

The riveting experience of a successful mobile app depends predominantly on its technology stack.

The right selection of a mobile app development language helps developers build smooth, functional, efficient apps. It helps businesses tone down costs and focus on revenue-generating opportunities. Most importantly, it gives customers a delightful experience and a reason to keep the app installed on today's indispensable gadget.

Today there are over 5 million apps globally, a whopping number that is set to push the smartphone industry further still. Mobile app development is now visible at every nook and corner, but the real question is not who provides what; it is understanding the behavioural pattern of users.

So the pertinent question is, which is the ideal tech stack to use for mobile app development?

In native mobile app development, all toolkits, the mobile app development language, and the SDK are supported and provided by the operating system vendor. Native app development thus allows developers to build apps compatible with a specific OS environment. It can take advantage of device-specific hardware and software, and hence renders optimized performance using the latest technology. However, since Android and iOS each provide a unique platform for development, businesses have to develop a separate mobile app for each platform.

Popular native apps:

1. Waze

2. Pokemon Go

3. Lyft

1. Java: The popularity of Java made it the official programming language for Android app development until the introduction of Kotlin. Java itself is at the core of the Android OS; many of us even see the Java logo when the device reboots. However, licensing disputes with Oracle (which owns the license to Java) made Google shift to an open-source Java SDK for versions starting from Android 7.0 Nougat.

2. Kotlin: As of the Google I/O conference in 2019, Kotlin is Google's officially preferred language for Android app development. It is fully interoperable with Java but has a few additions that make it simpler and easier to work with.

My gut feeling, like that of many other developers, is that Kotlin is simply better. It has leaner, more straightforward, and more concise code than Java, along with other advantages around handling null-pointer exceptions and more productive coding.

HERE'S A PROGRAMMING ILLUSTRATION OF THE CONCISENESS OF KOTLIN CODE. First, the Java version:

public class Address {

    private String street;
    private int streetNumber;
    private String postCode;
    private String city;
    private Country country;

    public Address(String street, int streetNumber, String postCode, String city, Country country) {
        this.street = street;
        this.streetNumber = streetNumber;
        this.postCode = postCode;
        this.city = city;
        this.country = country;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Address address = (Address) o;
        if (streetNumber != address.streetNumber) return false;
        if (!street.equals(address.street)) return false;
        if (!postCode.equals(address.postCode)) return false;
        if (!city.equals(address.city)) return false;
        return country == address.country;
    }

    @Override
    public int hashCode() {
        int result = street.hashCode();
        result = 31 * result + streetNumber;
        result = 31 * result + postCode.hashCode();
        result = 31 * result + city.hashCode();
        result = 31 * result + (country != null ? country.hashCode() : 0);
        return result;
    }

    @Override
    public String toString() {
        return "Address{" +
                "street='" + street + '\'' +
                ", streetNumber=" + streetNumber +
                ", postCode='" + postCode + '\'' +
                ", city='" + city + '\'' +
                ", country=" + country +
                '}';
    }

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }

    public int getStreetNumber() {
        return streetNumber;
    }

    public void setStreetNumber(int streetNumber) {
        this.streetNumber = streetNumber;
    }

    public String getPostCode() {
        return postCode;
    }

    public void setPostCode(String postCode) {
        this.postCode = postCode;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public Country getCountry() {
        return country;
    }

    public void setCountry(Country country) {
        this.country = country;
    }
}

And here's the equivalent class in Kotlin:

class Address(street: String, streetNumber: Int, postCode: String, city: String, country: Country) {

    var street: String
    var streetNumber: Int = 0
    var postCode: String
    var city: String
    var country: Country

    init {
        this.street = street
        this.streetNumber = streetNumber
        this.postCode = postCode
        this.city = city
        this.country = country
    }

    override fun equals(o: Any?): Boolean {
        if (this === o) return true
        if (o == null || javaClass != o.javaClass) return false
        val address = o as Address
        if (streetNumber != address.streetNumber) return false
        if (street != address.street) return false
        if (postCode != address.postCode) return false
        if (city != address.city) return false
        return country === address.country
    }

    override fun hashCode(): Int {
        var result = street.hashCode()
        result = 31 * result + streetNumber
        result = 31 * result + postCode.hashCode()
        result = 31 * result + city.hashCode()
        result = 31 * result + country.hashCode()
        return result
    }

    override fun toString(): String {
        return ("Address{" +
                "street='" + street + '\'' +
                ", streetNumber=" + streetNumber +
                ", postCode='" + postCode + '\'' +
                ", city='" + city + '\'' +
                ", country=" + country +
                '}')
    }
}
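One nuance the listing above doesn't show: idiomatic Kotlin is terser still. A single declaration, data class Address(var street: String, var streetNumber: Int, var postCode: String, var city: String, var country: Country), generates equals(), hashCode(), and toString() automatically, collapsing the entire class into one line.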

I'd say KOTLIN IS THE BEST FIND FOR ANDROID APP DEVELOPMENT. Google has dug deeper, with more plans ahead, since announcing it as an official language. Moreover, it signals Google's first steps in moving away from the Java ecosystem, which seems imminent considering its recent adventures with Flutter and the upcoming Fuchsia OS.

Objective-C is to iOS what Java is to Android. Objective-C, a superset of the C programming language (with object-oriented capabilities and a dynamic runtime), was initially used to build the core of the iOS operating system across Apple devices. However, Apple soon started using Swift, which diminished the importance of Objective-C relative to earlier years.

Apple introduced Swift as an alternative to Objective-C in 2014, and it has since continued to be the primary language for iOS app development. Swift is more functional than Objective-C and less prone to errors, and its dynamic libraries help reduce the app's size without ever compromising performance.

Now, recall the comparison we did between Java and Kotlin. On iOS, Objective-C is much older than Swift and has much more complicated syntax, which can make beginners cringe when getting started.


THIS IS WHAT YOU DO WHEN INITIALIZING AN ARRAY IN OBJECTIVE-C:

NSMutableArray *array = [[NSMutableArray alloc] init];

NOW LOOK AT HOW THE SAME THING IS DONE IN SWIFT:

var array = [Int]()

SWIFT IS MUCH MORE THAN WHAT WE'VE COVERED HERE.

In cross-platform app development, developers build a single mobile app that can be used on multiple OS platforms. It is made possible by creating an app with a shared common codebase, adapted to various platforms.


Popular Cross-platform apps:

  1. Instagram
  2. Skype
  3. LinkedIn

React Native is a mobile app development framework based on JavaScript. It is used and supported by one of the biggest social media platforms- Facebook. In cross-platform apps built using React Native, the application logic is coded in JavaScript, whereas its UI is entirely native. This blog about building a React Native app is worth reading if you want to know why its stakes are higher.

Xamarin is a Microsoft-supported cross-platform mobile app development tool that uses the C# programming language. Using Xamarin, developers can build mobile apps for multiple platforms, sharing over 90% of the same code.

TypeScript is a superset of JavaScript and a statically typed programming language supported by Microsoft. TypeScript can be used along with the React Native framework to make full use of its error detection features when writing code for React components.

In hybrid mobile app development, developers build web apps using HTML, CSS & JavaScript and then wrap the code in a native shell. This allows the app to be deployed as a regular app, with functionality at a level between a fully native app and a website rendered in a web browser.

Popular hybrid apps:
  1. Untappd
  2. Amazon App Store
  3. Evernote

Apache Cordova is an open-source hybrid mobile app development framework that uses JavaScript for logic operations and HTML5 & CSS3 for rendering. PhoneGap is a commercialized, free, and open-source distribution of Apache Cordova owned by Adobe. The PhoneGap platform was developed to deliver non-proprietary, free, and open-source app development solutions powered by the web.

Ionic is a hybrid app development framework based on AngularJS. Similar to other hybrid platforms, it uses HTML, CSS & JavaScript to build mobile apps. Ionic is primarily focused on the front-end UI experience and integrates well with frameworks such as Angular, Vue, and ReactJS.

To summarize, there are three types of mobile apps: native, cross-platform, and hybrid. Each offers unique technologies, frameworks, and tools of its own. I have listed here the best mobile app technology stacks you could use for mobile app development.

The technologies, tools, and frameworks mentioned here are used in some of the most successful apps. With support from an expert, well-established mobile app development company, they can give you much-needed impetus in the dynamic world of mobile app development.

Source: https://chatbotslife.com/best-technology-stacks-for-mobile-app-development-6fed70b62778?source=rss—-a49517e4c30b—4
