Using log analysis to drive experiments and win the AWS DeepRacer F1 ProAm Race

This is a guest post by Ray Goh, a tech executive at DBS Bank. 

AWS DeepRacer is an autonomous 1/18th scale race car powered by reinforcement learning, and the AWS DeepRacer League is the world’s first global autonomous racing league. It’s a fun and easy way to get started with machine learning (ML), regardless of skill or background. For companies, it’s also a powerful platform to facilitate teaching ML to employees at the enterprise level.

As part of our digital transformation journey at DBS Bank, we’re taking innovative steps to future-proof our workforce. We’ve partnered with AWS to bring the AWS DeepRacer League to DBS to train over 3,000 employees in AI and ML by the end of 2020. Thanks to the AWS DeepRacer virtual simulation and training environment, our employees can upgrade their skills and pick up new knowledge, even when they aren’t physically in the office. The ability to run private races also allows us to create our own racing league, where our employees can put their newly learned skills to the test.

Winning the F1 ProAm Race in May 2020

As an individual racer, I’ve been active in the AWS DeepRacer League since 2019. In May 2020, racers from around the world had the unique opportunity to pit their ML skills against F1 professionals in the AWS DeepRacer F1 ProAm Race. We trained our models on a replica of the F1 Spanish Grand Prix track, and the top 10 racers from the month-long, head-to-head qualifying race faced off against F1 professional drivers Daniel Ricciardo and Tatiana Calderon in a Grand Prix-style race. Watch the AWS DeepRacer ProAm series here.

After a challenging month of racing, I emerged as the champion in the F1 ProAm Race, beating fellow racers and the pro F1 drivers to the checkered flag! Looking back now, I attribute my win to having performed many experiments throughout the month of racing. Those experiments allowed me to continuously tweak and improve my model leading up to the final race. Behind those experiments are ideas that arose from data-driven insights through log analysis.

What is log analysis?

Log analysis means using a Jupyter notebook to analyze and debug models based on log data generated by the AWS DeepRacer simulation and training environment. With snippets of Python code, you can plot and visualize your model’s training performance through various graphs and heatmaps. I created several unique visualizations that ultimately helped me train a model that was fast and stable enough to win the F1 ProAm Race.

Figure 1: Log analysis visualizations

In this post, I share some of the visualizations I created and show how you can use Amazon SageMaker to spin up a notebook instance to perform log analysis using DeepRacer model training data.

If you’re already familiar with opening notebooks in a JupyterLab notebook application, you can simply clone my log analysis repository and skip directly to the log analysis section.

Amazon SageMaker notebook instances

An Amazon SageMaker notebook instance is a managed ML compute instance running the Jupyter notebook application. Amazon SageMaker manages the creation of the instance and its related resources, so we can focus on analyzing the data collected during training without worrying about provisioning Amazon Elastic Compute Cloud (Amazon EC2) or storage resources directly.

Using an Amazon SageMaker notebook instance for log analysis

One of the greatest benefits of using an Amazon SageMaker notebook instance to perform AWS DeepRacer log analysis is that Amazon SageMaker automatically installs Anaconda packages and libraries for common deep learning platforms on our behalf, including TensorFlow deep learning libraries. It also automatically attaches an ML storage volume to our notebook instance, which we can use as a persistent working storage to perform log analysis and retain our analysis artifacts.

Creating a notebook instance

To get started, create a notebook instance on the Amazon SageMaker console.

  1. On the Amazon SageMaker console, under Notebook, choose Notebook instances.
  2. Choose Create notebook instance.
  3. For Notebook instance name, enter a name (for example, DeepRacer-Log-Analysis).
  4. For Notebook instance type, choose your instance.

For AWS DeepRacer log analysis, the smallest instance type (ml.t2.medium) is usually sufficient.

  5. For Volume size in GB, enter your storage volume size. For this post, we enter 5.

When the notebook instance shows an InService status, we can open JupyterLab, the IDE for Jupyter notebooks.

  6. Locate your notebook instance and choose Open JupyterLab.

Cloning the log analysis repo from JupyterLab

From the JupyterLab IDE, we can easily clone a Git repository to use log analysis notebooks shared by the community. For example, I can clone my log analysis repository in seconds, using https://github.com/TheRayG/deepracer-log-analysis.git as the Clone URI.

After cloning the repository, we should see it appear in the folder structure on the left side of the JupyterLab IDE.

Downloading logs from the AWS DeepRacer console

To prepare the data that we want to analyze, we have to download our model training logs from the AWS DeepRacer console.

  1. On the AWS DeepRacer console, under Reinforcement learning, choose Your models.
  2. Choose the model to analyze.
  3. In the Training section, under Resources, choose Download Logs.

This downloads the training log files, which are packaged in a .tar.gz file.

Extracting the required log files for analysis

In this step, we complete the final configurations.

  1. Extract the RoboMaker and Amazon SageMaker log files from the .tar.gz package (found in the logs/training/ subdirectory).

  2. Upload the two log files into the /deepracer-log-analysis/logs folder in the JupyterLab IDE.

We’re now ready to open up our log analysis notebook to work its magic!

  3. Navigate to the /deepracer-log-analysis folder on the left side of the IDE and choose the .ipynb file to open the notebook.
  4. When opening the notebook, you may be prompted to provide a kernel. Choose a kernel that uses Python 3, such as conda_tensorflow_p36.

  5. Wait until the kernel status changes from Starting to Idle.
  6. Edit the notebook to specify the path and names of the two log files that we just uploaded.

To perform our visualizations, we use the simulation trace data from the RoboMaker log file and policy update data from the Amazon SageMaker log file. We parse the data in the notebook using pandas dataframes, which are two-dimensional labeled data structures like spreadsheets or SQL tables.

For the RoboMaker log file, we aggregate important information, such as minimum, maximum, and average progress and lap completion ratios for each iteration of training episodes.

For the Amazon SageMaker log file, we calculate the average entropy per epoch in each policy update iteration.
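To make this concrete, here is a minimal sketch of how the RoboMaker simulation trace could be parsed and aggregated with pandas. The column names, the file path, and the episodes-per-iteration value are assumptions for illustration; check your own log file and training configuration, because the exact SIM_TRACE_LOG fields vary across AWS DeepRacer console versions.

```python
import pandas as pd

# Assumed field layout for SIM_TRACE_LOG entries; verify against your own
# RoboMaker log before relying on it.
COLS = ["episode", "step", "x", "y", "heading", "steering", "speed",
        "action", "reward", "done", "on_track", "progress",
        "closest_waypoint", "track_len", "timestamp"]

def parse_robomaker_log(path, episodes_per_iteration=20):
    """Extract SIM_TRACE_LOG rows into a dataframe, tagged with the training iteration."""
    rows = []
    with open(path) as f:
        for line in f:
            if "SIM_TRACE_LOG:" in line:
                rows.append(line.split("SIM_TRACE_LOG:")[1].strip().split(","))
    # Non-numeric fields (e.g., True/False flags) are coerced to NaN here
    df = pd.DataFrame(rows, columns=COLS).apply(pd.to_numeric, errors="coerce")
    df["iteration"] = df["episode"] // episodes_per_iteration
    return df

df = parse_robomaker_log("logs/robomaker.log")

# Final progress reached in each episode, then min/mean/max per iteration,
# plus the fraction of episodes that completed a full lap (progress = 100)
per_episode = df.groupby(["iteration", "episode"])["progress"].max()
summary = per_episode.groupby(level="iteration").agg(["min", "mean", "max"])
summary["completion_ratio"] = (per_episode >= 100).groupby(level="iteration").mean()
```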

Performing visualizations

We can now run the notebook by choosing Run and Run All Cells in JupyterLab. My log analysis notebook contains numerous markdown descriptions and comments to explain what each cell does. In this section, I highlight some of the visualizations from that notebook and explain some of the thought processes behind them.

Visualizing the performance envelope of the model

A common question asked by beginners of AWS DeepRacer is, “If two models are trained for the same amount of time using the same reward function and hyperparameters, why do they have different lap times when I evaluate them?”

The following visualization is a great way to explain it: a histogram showing how frequently the model achieves each lap time (in seconds) during training.

I use this to illustrate the performance envelope of my model. By plotting a histogram of lap times achieved by the model during training, we can show the relative probability of the model achieving various lap times. We can also work out statistically the average and best-case lap times that we can expect from the model. I’ve noticed that the lap times of the model during training resemble a normal distribution, so I use the -2 and -3 Std Dev markers to show the potential best-case lap times for the model, albeit with just a 2.275% (-2 SD) and 0.135% (-3 SD) chance of occurring, respectively. By understanding the likelihood of the model achieving a given lap time and comparing that to leaderboard times, I can gauge whether I should continue cloning and tweaking the model, or abandon it and start fresh with a different approach.
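As a sketch of this idea, building on the dataframe from the parsing snippet above (and assuming progress reaches 100 on a completed lap), the histogram with mean and -2/-3 SD markers could be produced like this:

```python
import matplotlib.pyplot as plt

# Lap time per completed episode: duration between the first and last
# trace entries of episodes whose progress reached 100
ep = df.groupby(["iteration", "episode"]).agg(
    progress=("progress", "max"), t0=("timestamp", "min"), t1=("timestamp", "max"))
lap_times = (ep["t1"] - ep["t0"])[ep["progress"] >= 100]

mu, sd = lap_times.mean(), lap_times.std()
plt.hist(lap_times, bins=40, alpha=0.7)
plt.axvline(mu, color="k", label=f"mean ({mu:.2f}s)")
for k in (2, 3):  # potential best-case lap times
    plt.axvline(mu - k * sd, linestyle="--", label=f"-{k} SD ({mu - k * sd:.2f}s)")
plt.xlabel("Lap time (s)"); plt.ylabel("Frequency"); plt.legend(); plt.show()
```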

Identifying potential model checkpoints for race submission

When training many different models for a race, racers commonly ask, “Which model would give me the highest chance of winning a virtual race?”

To answer that question, I plot the top quartile (p25) lap times vs. iterations from the training data, which identifies potential models for race submission. This scatter plot also allows me to identify potential trade-offs between speed (dots with very fast lap times) and stability (dense cluster of dots for a particular iteration). From the following diagram, I would choose models from the three highlighted iterations for race submission.
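A sketch of that scatter plot, reusing the lap_times series from the histogram snippet above:

```python
# p25 (top-quartile) lap time per iteration, drawn over the raw scatter of
# completed laps: fast lap times AND dense clusters mark candidate models
lt = lap_times.rename("lap_time").reset_index()
p25 = lt.groupby("iteration")["lap_time"].quantile(0.25)

plt.scatter(lt["iteration"], lt["lap_time"], s=8, alpha=0.4, label="completed laps")
plt.plot(p25.index, p25.values, color="red", label="p25 lap time")
plt.xlabel("Iteration"); plt.ylabel("Lap time (s)"); plt.legend(); plt.show()
```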

Identifying convergence and gauging consistency

As racers gain experience with model training, they start paying attention to convergence in their models. Simply put, convergence in the AWS DeepRacer context is when a model is performing close to its best (in terms of average lap progress), and further training may harm its performance or make it overfit, such that it only does well for that track in a very specific simulation environment, but not on other tracks or in a physical AWS DeepRacer car. That raises the following questions: “How do I tell when the model has converged?” and “How consistent is my model after it has converged?”

To aid in visualizing convergence, I overlay the entropy information from the Amazon SageMaker policy training logs over the usual plots for rewards and progress.

Entropy is a measure of the amount of randomness in our reinforcement learning neural network. At the beginning of model training, entropy is high, because our neural network is updated mostly based on random actions as the car explores the track.

Over time, with more experiences gained from actions and rewards at various parts of the track, the car starts to exploit this information and takes less random actions.

The thinking behind this is that, as rewards and progress increase, the entropy value should decrease. When rewards and progress plateau, the entropy should also flatten out. Therefore, I use entropy as an additional indicator for convergence.

To gauge the consistency of my model, I also plot the percentage of lap completions per iteration during training. When the model is capable of completing laps, the percentage of completed laps should creep up in subsequent iterations, until around the point of convergence, when the percentage value should plateau too. See the following plot.

The model training process is probabilistic because the reinforcement learning agent incorporates entropy to explore the environment. To smooth out the effects of the probabilistic model in my visualization, I use a simple moving average over three iterations for each of my plotted metrics.
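The following sketch overlays the smoothed completion percentage with smoothed entropy. It assumes you have already parsed the per-iteration average entropy from the Amazon SageMaker log into a pandas Series named entropy (the exact log line format varies, so that parsing step is left out here); completion_ratio comes from the earlier aggregation snippet.

```python
# Smooth the stochastic per-iteration metrics with a 3-iteration moving average
window = 3
comp = summary["completion_ratio"].rolling(window, min_periods=1).mean()
ent = entropy.rolling(window, min_periods=1).mean()  # assumed pre-parsed Series

fig, ax1 = plt.subplots()
ax1.plot(comp.index, 100 * comp.values, color="tab:blue")
ax1.set_xlabel("Iteration"); ax1.set_ylabel("Lap completion (%)", color="tab:blue")
ax2 = ax1.twinx()  # second y-axis for entropy
ax2.plot(ent.index, ent.values, color="tab:orange")
ax2.set_ylabel("Avg entropy per policy update", color="tab:orange")
plt.show()
```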

Identifying inefficiencies in driving behavior

When racers have a competitive model, they may start to wonder, “Are there sections of the track where the car is driving inefficiently? What are the sections where I can encourage the car to speed up?”

In pursuit of answering these questions, I designed a visualization that shows the average speed and steering angle of the car measured at every waypoint along the track. This allows me to see how the model is negotiating the track, because from this plot, you can see the rate at which the model is speeding up or slowing down as it travels through the waypoints. The following visualization shows the deviation of the optimal racing line (orange) from the track centerline (blue).

You can also see how the model adjusts its steering angle as it negotiates turns. What I love about the following visualization is that it allows me to see clearly at which point after a long straight the model starts to brake before entering into a turn. It also helps me visualize if a model is accelerating quickly enough upon exiting a turn.
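A rough version of this waypoint-level view can be derived from the same trace dataframe; the closest_waypoint, speed, and steering column names are the assumptions from the parsing sketch above.

```python
# Average speed and steering angle measured at each track waypoint
wp = df.groupby("closest_waypoint")[["speed", "steering"]].mean()

fig, ax1 = plt.subplots()
ax1.plot(wp.index, wp["speed"], color="tab:green")
ax1.set_xlabel("Waypoint"); ax1.set_ylabel("Avg speed", color="tab:green")
ax2 = ax1.twinx()
ax2.plot(wp.index, wp["steering"], color="tab:purple")
ax2.set_ylabel("Avg steering angle (deg)", color="tab:purple")
plt.show()
```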

Identifying track sections to adjust actions and rewards

Although speed is the primary performance criterion in a time trial race, stability is also important in an object avoidance or head-to-head race. Because time penalties for going off-track impact race position, it’s very important to find the right balance between speed and stability. Even if the model can negotiate the track well, top racers still ask, “Is the car over- or under-steering at any of the turns? Which turn should I focus on optimizing in subsequent experiments?”

By plotting a heatmap of rewards over the track, you can easily see how consistently we reward the model at various parts of the track. A thin band in the heatmap reflects very consistent rewards, while a sparse scattering of dots brings attention to the parts of the track where the model has trouble getting rewards. For my reward function, this usually highlights the turns at which the model is over- or under-steering.
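A minimal version of such a heatmap simply scatters the car’s x/y positions from the trace dataframe, colored by the reward received at each step:

```python
# Reward heatmap: each trace point plotted at its track position,
# colored by the reward the model received there
plt.scatter(df["x"], df["y"], c=df["reward"], cmap="viridis", s=4, alpha=0.5)
plt.colorbar(label="Reward")
plt.axis("equal")
plt.title("Reward heatmap over the track")
plt.show()
```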

For example, in the highlighted parts of the preceding plot, the model isn’t consistently going around those turns according to the racing line that I’m rewarding for. It’s actually over-steering as it exits Turn 3 (around waypoint 62), and under-steering around the other two highlighted turns. Tweaking the action space may help (in the case of under-steering, lowering the speed at high steering angles). Interestingly, the lap completion rate of the model can increase substantially with such minor tweaks, without sacrificing lap times!

Experiment, Experiment, Experiment

For the F1 ProAm Race in May 2020, I planned to do two experiments per day (at least 60 experiments total) to try out different reward strategies and racing lines. I could iterate quickly while focusing on incremental improvements by using log analysis to surface insights from the training data.

For example, the following plot helped me answer the question “Is the car going as fast as possible through the entire lap?” by showing where along the track the car combines 0-degree steering with its highest speeds.

Cleaning up

To save on ML compute costs, when you’re done with log analysis, you can stop the notebook instance without deleting it. The notebook, data, and log files are still retained as long as you don’t delete the notebook instance. A stopped instance still incurs cost for the provisioned ML storage. But you can always restart the instance later to continue working on the notebook.

When you no longer need the notebook or data, you can permanently delete the instance, which also deletes the attached ML storage volume, so that you no longer incur its related ML storage cost.

For pricing details for Amazon SageMaker notebook instances, see Amazon SageMaker Pricing.

Conclusion

The visualizations I shared with you in this post helped me win the May 2020 F1 ProAm Race against other top racers and F1 pros, so it’s my hope that by sharing these ideas with the community, others can benefit and learn from them too.

Together as a community of practice, we can help to accelerate learning for everyone and raise the bar for the AI/ML community in general!

You can start training your own model and improve it through log analysis by signing in to the AWS DeepRacer console.


About the Author

Ray Goh is a Tech executive who leads Agile Teams in the delivery of FX Trading & Digital Solutions at DBS Bank. He is a passionate Cloud advocate with a deep interest in Voice and Serverless technology, and has 8 AWS Certifications under his belt. He is also active in the DeepRacer (a machine learning autonomous model car) community. Obsessed with home automation, he owns close to 20 Alexa-enabled devices at home and in the car.

Source: https://aws.amazon.com/blogs/machine-learning/using-log-analysis-to-drive-experiments-and-win-the-aws-deepracer-f1-proam-race/


Graph Convolutional Networks (GCN)


In this post, we’re gonna take a close look at one of the well-known graph neural networks named Graph Convolutional Network (GCN). First, we’ll get the intuition to see how it works, then we’ll go deeper into the maths behind it.

Why Graphs?

Many problems are graphs by nature. In our world, much of the data we see is graph-structured, such as molecules, social networks, and paper citation networks.

Tasks on Graphs

  • Node classification: Predict the type of a given node
  • Link prediction: Predict whether two nodes are linked
  • Community detection: Identify densely linked clusters of nodes
  • Network similarity: How similar are two (sub)networks?

Machine Learning Lifecycle

In the graph, we have node features (the data of nodes) and the structure of the graph (how nodes are connected).

For the former, we can easily get the data from each node. But when it comes to the structure, extracting useful information from it is not trivial. For example, if 2 nodes are close to one another, should we treat them differently from other pairs? How about high- and low-degree nodes? In fact, each specific task can consume a lot of time and effort just for feature engineering, i.e., distilling the structure into our features.

Feature engineering on graphs. (Picture from [1])

It would be much better to somehow get both the node features and the structure as the input, and let the machine figure out what information is useful by itself.

That’s why we need Graph Representation Learning.

We want the graph to learn the “feature engineering” by itself. (Picture from [1])


Graph Convolutional Networks (GCNs)

Paper: Semi-supervised Classification with Graph Convolutional Networks (2017) [3]

GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information.

It solves the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes (semi-supervised learning).

Example of semi-supervised learning on graphs. Some nodes don’t have labels (unknown nodes).

Main Ideas

As the name “Convolutional” suggests, the idea came from images and was then brought to graphs. However, while images have a fixed structure, graphs are much more complex.

Convolution idea from images to graphs. (Picture from [1])

The general idea of GCN: for each node, we get the feature information from all its neighbors and, of course, its own features. Assume we use the average() function. We do the same for all the nodes. Finally, we feed these average values into a neural network.

In the following figure, we have a simple example with a citation network. Each node represents a research paper, while edges are the citations. We have a pre-process step here. Instead of using the raw papers as features, we convert the papers into vectors (by using NLP embedding, e.g., tf–idf).

Let’s consider the green node. First off, we get all the feature values of its neighbors, including itself, then take the average. The result will be passed through a neural network to return a resulting vector.

The main idea of GCN. Consider the green node. First, we take the average of all its neighbors, including itself. After that, the average value is passed through a neural network. Note that, in GCN, we simply use a fully connected layer. In this example, we get 2-dimensional vectors as the output (2 nodes at the fully connected layer).

In practice, we can use more sophisticated aggregate functions rather than the average function. We can also stack more layers on top of each other to get a deeper GCN. The output of a layer will be treated as the input for the next layer.

Example of a 2-layer GCN: The output of the first layer is the input of the second layer. Again, note that the neural network in GCN is simply a fully connected layer. (Picture from [2])

Let’s take a closer look at the maths to see how it really works.

Intuition and the Maths behind

First, we need some notation.

Let’s consider a graph G as below.

From the graph G, we have an adjacency matrix A and a degree matrix D. We also have a feature matrix X.

How can we get all the feature values from neighbors for each node? The solution lies in the multiplication of A and X.

Take a look at the first row of the adjacency matrix: we see that node A has a connection to E. The first row of the resulting matrix is therefore the feature vector of E, which A connects to (figure below). Similarly, the second row of the resulting matrix is the sum of the feature vectors of D and E. By doing this, we can get the sum of all neighbors’ vectors.

Calculate the first row of the “sum vector matrix” AX
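To see this multiplication concretely, here is a tiny numpy sketch. The adjacency matrix below is a hypothetical 5-node graph for illustration, not necessarily the exact graph in the article’s figure:

```python
import numpy as np

# Hypothetical 5-node undirected graph (nodes A..E); row i marks the neighbors of node i
A = np.array([[0, 0, 0, 0, 1],   # A - E
              [0, 0, 0, 1, 1],   # B - D, E
              [0, 0, 0, 1, 0],   # C - D
              [0, 1, 1, 0, 1],   # D - B, C, E
              [1, 1, 0, 1, 0]],  # E - A, B, D
             dtype=float)
X = np.arange(1, 11, dtype=float).reshape(5, 2)  # a 2-d feature vector per node

AX = A @ X    # row i = sum of the feature vectors of node i's neighbors
print(AX[0])  # equals X[4], the features of E (A's only neighbor here)
```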
There are still some things that need improvement here:

  1. We miss the features of the node itself. For example, the first row of the result matrix should contain the features of node A too.
  2. Instead of the sum() function, we need to take the average, or even better, a weighted average of the neighbors’ feature vectors. Why not the sum() function? The reason is that when using sum(), high-degree nodes are likely to end up with huge aggregate vectors, while low-degree nodes tend to get small aggregate vectors, which may later cause exploding or vanishing gradients (e.g., when using sigmoid). Besides, neural networks seem to be sensitive to the scale of input data. Thus, we need to normalize these vectors to get rid of the potential issues.

We can fix Problem (1) by adding an identity matrix I to A to get a new adjacency matrix Ã.

Picking lambda = 1 (meaning the feature of the node itself is just as important as its neighbors’), we have Ã = A + I. Note that we could treat lambda as a trainable parameter, but for now, we just assign lambda to 1; even in the paper, lambda is simply assigned to 1.

By adding a self-loop to each node, we get the new adjacency matrix Ã.

Problem (2): For matrix scaling, we usually multiply the matrix by a diagonal matrix. In this case, we want to take the average of the summed features, or mathematically, to scale the sum vector matrix ÃX according to the node degrees. The gut feeling tells us that the diagonal matrix used for scaling here is something related to the degree matrix D̃ (why D̃, not D? Because we’re considering the degree matrix D̃ of the new adjacency matrix Ã, not A anymore).

The problem now becomes: how do we scale/normalize the sum vectors? In other words:

How do we pass the information from neighbors to a specific node?

We would start with our old friend, the average. In this case, D̃ inverse (i.e., D̃^(-1)) comes into play. Basically, each element in D̃ inverse is the reciprocal of the corresponding term on the diagonal of D̃.

For example, node A has a degree of 2, so we multiply the sum vector of node A by 1/2, while node E has a degree of 5, so we multiply the sum vector of E by 1/5, and so on.

Thus, by multiplying D̃^(-1) with ÃX, we can take the average of all neighbors’ feature vectors (including the node itself).
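Continuing the numpy sketch from above, adding self-loops and row-normalizing by the new degree matrix gives exactly this neighborhood average:

```python
# Problem (1): add self-loops -> new adjacency matrix A_tilde
A_tilde = A + np.eye(A.shape[0])

# Problem (2): average instead of sum, via the inverse degree matrix of A_tilde
D_tilde_inv = np.diag(1.0 / A_tilde.sum(axis=1))
avg = D_tilde_inv @ A_tilde @ X  # row i = mean feature vector of i's neighborhood
```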

So far so good. But you may ask: how about the weighted average? Intuitively, it should be better if we treat high- and low-degree nodes differently.

We’re just scaling by rows, but ignoring their corresponding columns (dashed boxes).
Add a new scaler for the columns.

The new scaler gives us the “weighted” average. What we’re doing here is putting more weight on the nodes that have low degree and reducing the impact of high-degree nodes. The idea of this weighted average is that we assume low-degree nodes have bigger impacts on their neighbors, whereas high-degree nodes generate lower impacts because they scatter their influence across too many neighbors.

When aggregating features at node B, we assign the biggest weight to node B itself (degree of 3), and the lowest weight to node E (degree of 5).
Because we normalize twice (once along the rows and once along the columns), we change the exponent “-1” to “-1/2”.

Putting everything together, a 2-layer GCN takes the form Z = softmax(Â · ReLU(Â X W⁽⁰⁾) · W⁽¹⁾), where Â = D̃^(-1/2) Ã D̃^(-1/2) and F is the number of output features of the last layer. For example, if we have a multi-class classification problem with 10 classes, F will be set to 10. After having the 10-dimensional vectors at layer 2, we pass these vectors through a softmax function for the prediction.
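Here is a minimal numpy sketch of that 2-layer forward pass, continuing from the matrices above. The hidden size and the random weights are arbitrary illustration choices, not values from the paper:

```python
# Symmetric normalization: A_hat = D̃^(-1/2) Ã D̃^(-1/2)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

rng = np.random.default_rng(0)
W0 = rng.normal(size=(X.shape[1], 16))  # hidden size 16: arbitrary choice
W1 = rng.normal(size=(16, 10))          # F = 10 output classes

H = np.maximum(A_hat @ X @ W0, 0)       # layer 1 + ReLU
logits = A_hat @ H @ W1                 # layer 2
Z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
```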

The loss function is simply the cross-entropy error over all labeled examples, L = −Σ_{l∈Y_L} Σ_{f=1}^{F} Y_{lf} ln Z_{lf}, where Y_L is the set of node indices that have labels.
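In code, that masked cross-entropy looks like this; the labeled node indices and one-hot labels below are hypothetical:

```python
# Cross-entropy over labeled nodes only (semi-supervised setting)
labeled = np.array([0, 3])                      # hypothetical labeled node indices
Y = np.zeros((5, 10)); Y[0, 2] = Y[3, 7] = 1.0  # hypothetical one-hot labels
loss = -np.sum(Y[labeled] * np.log(Z[labeled] + 1e-9))
```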

The number of layers

The meaning of #layers

The number of layers is the farthest distance that node features can travel. For example, with a 1-layer GCN, each node can only get information from its direct neighbors. The gathering of information takes place independently, at the same time, for all the nodes.

When stacking another layer on top of the first one, we repeat the gathering process, but this time, the neighbors already have information about their own neighbors (from the previous step). This makes the number of layers the maximum number of hops that each node’s information can travel. So, depending on how far we think a node should get information from across the network, we can configure a proper number for #layers. But again, in graphs, we normally don’t want to go too far. With 6–7 hops, we almost reach the entire graph, which makes the aggregation less meaningful.

Example: gathering information with 2 layers for target node i

How many layers should we stack the GCN?

In the paper, the authors also conducted experiments with shallow and deep GCNs. From the figure below, we see that the best results are obtained with a 2- or 3-layer model. With deep GCNs (more than 7 layers), performance tends to degrade (dashed blue line). One solution is to use residual connections between hidden layers (purple line).

Performance over #layers. (Picture from the paper [3])

Take home notes

  • GCNs are used for semi-supervised learning on graphs.
  • GCNs use both node features and the structure for training.
  • The main idea of GCN is to take the weighted average of all neighbors’ node features (including its own): lower-degree nodes get larger weights. Then, we pass the resulting feature vectors through a neural network for training.
  • We can stack more layers to make GCNs deeper. Consider residual connections for deep GCNs. Normally, we go for a 2- or 3-layer GCN.
  • Maths note: when you see a diagonal matrix, think of matrix scaling.
  • A demo of GCN with the StellarGraph library is available here [5]. The library also provides many other algorithms for GNNs.

Note from the authors of the paper: The framework is currently limited to undirected graphs (weighted or unweighted). However, it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph.

What’s next?

With GCNs, it seems we can make use of both the node features and the structure of the graph. However, what if the edges have different types? Should we treat each relationship differently? How do we aggregate neighbors in that case? What are the recent advanced approaches?

In the next post of the graph topic, we will look into some more sophisticated methods.

How to deal with different relationships on the edges (brother, friend, …)?

REFERENCES

[1] Excellent slides on Graph Representation Learning by Jure Leskovec (Stanford):  https://drive.google.com/file/d/1By3udbOt10moIcSEgUQ0TR9twQX9Aq0G/view?usp=sharing

[2] Video Graph Convolutional Networks (GCNs) made simple: https://www.youtube.com/watch?v=2KRAOZIULzw

[3] Paper Semi-supervised Classification with Graph Convolutional Networks (2017): https://arxiv.org/pdf/1609.02907.pdf

[4] GCN source code: https://github.com/tkipf/gcn

[5] Demo with StellarGraph library: https://stellargraph.readthedocs.io/en/stable/demos/node-classification/gcn-node-classification.html

This article was originally published on Medium and re-published to TOPBOTS with permission from the author.



Microsoft BOT Framework — Loops


Loops are one of the basic programming structures in any programming language. In this article, I demonstrate loops within the Microsoft BOT Framework.

To follow this article clearly, please have a quick read on the basics of the Microsoft BOT Framework. I wrote a couple of articles some time back, and the links are below:

Let’s Get Started.

I will be using the example of a TaxiBot described in one of my previous articles. The BOT asks some general questions and books a taxi for the user. In this article, I provide an option for the user to choose their preferred cars for the ride. The flow will look like below:

Create a new Dialog Class for Loops

We would need 2 Dialog classes to be able to achieve this task:

  1. SuperTaxiBotDialog.cs: This is the main dialog class. The waterfall will contain all the steps as defined in the previous article.
  2. ChooseCarDialog.cs: A new dialog class that allows the user to pick preferred cars. The loop will be defined in this class.

The waterfall steps for both classes can be visualized as:

The complete code base is present on the Github page.

Important Technical Aspects

  • Link between the Dialogs: In the constructor initialization of SuperTaxiBotDialog, add a dialog for ChooseCarDialog by adding the line:
AddDialog(new ChooseCarDialog());

  • Call ChooseCarDialog from SuperTaxiBotDialog: SuperTaxiBotDialog calls ChooseCarDialog from the step SetPreferredCars, hence the return statement of the step should be like:
return await stepContext.BeginDialogAsync(nameof(ChooseCarDialog), null, cancellationToken);
  • Return the flow back from ChooseCarDialog to SuperTaxiBotDialog: Once the user has selected 2 cars, the flow has to be sent back to SuperTaxiBotDialog from the step LoopCarAsync. This is achieved by ending the ChooseCarDialog in the step LoopCarAsync.
return await stepContext.EndDialogAsync(carsSelected, cancellationToken);


Once the project is executed using BOT Framework Emulator, the output would look like:

Hopefully, this article will help readers implement a loop within the Microsoft BOT Framework. For questions, hit me up.

Regards

Tarun

Source: https://chatbotslife.com/microsoft-bot-framework-loops-fe415f0e7ca1?source=rss—-a49517e4c30b—4


The Bleeding Edge of Voice


Tapaan Chauhan

This fall, a little-known event is starting to make waves. As COVID dominates the headlines, an event called “Voice Launch” is pulling together an impressive roster of start-ups and voice tech companies intending to uncover the next big ideas and start-ups in voice.

While voice tech has been around for a while, as the accuracy of speech recognition improves, it moves into its prime. “As speech recognition moves from 85% to 95% accuracy, who will use a keyboard anymore?” says Voice Launch organizer Eric Sauve. “And that new, more natural way to interact with our devices will usher in a series of technological advances,” he added.

Voice technology is something that has been dreamt of and worked on for decades all over the world. Why? Well, the answer is very straightforward. Voice recognition allows consumers to multitask by merely speaking to their Google Home, Amazon Alexa, Siri, etc. Digital voice recording works by recording a voice sample of a person’s speech and quickly converting it into written text using machine learning and sophisticated algorithms. Voice input is just the more efficient form of computing, says Mary Meeker in her ‘Annual Internet Trends Report.’ As a matter of fact, according to ComScore, 50% of all searches will be done by voice by 2020, and 30% of searches will be done without even a screen, according to Gartner. As voice becomes a part of the things we use every day, like our cars, phones, etc., it will become the new “norm.”

The event includes a number of inspiration sessions meant to help start-ups and founders pick the best strategies. Companies presenting here include industry leaders like Google and Amazon and less known hyper-growth voice tech companies like Deepgram and Balto and VCs like OMERS Ventures and Techstars.

But the focus of the event is the voice tech start-ups themselves, and this year’s event has some interesting participants. Start-ups will pitch their ideas, and the audience will vote to select the winners. The event is a cross between a standard pitchfest and Britain’s Got Talent.

Source: https://chatbotslife.com/the-bleeding-edge-of-voice-67538bd859a9?source=rss—-a49517e4c30b—4
