

Knowing The Difference Between Strong AI and Weak AI Is Useful And Applies To AI Autonomous Cars 




We are a long, long, long, long way from crafting AI systems that can exhibit human-level intelligence in any genuine meaning of the range, scope, and depth of human intelligence. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

Strong versus weak AI. Or, if you prefer, weak versus strong AI (it's okay to list them in either order; the spice is the same, as it were). If you've read much about AI in the popular press, the odds are that you've seen references to so-called strong AI and so-called weak AI, and yet both of those phrases are used wrongly and offer misleading and confounding impressions. 

Time to set the record straight. 

First, let’s consider what is being incorrectly stated. Some speak of weak AI as though it is AI that is wimpy and not up to the same capabilities as strong AI, including that weak AI is decidedly slower, or much less optimized, or otherwise inevitably and unarguably feebler in its AI capacities. 

No, that’s not it. 

Another form of distortion is to use “narrow” AI, which generally refers to AI that will only work in a narrowly-defined domain such as in a specific medical use or in a particular financial analysis use, and equate it with weak AI, while presumably strong AI is broader and more all-encompassing. 

No, that’s not it either. 


Meaning Of Strong AI And Weak AI   

Hark back to an earlier era of AI, around the late 1970s and early 1980s, a period of time that was characterized as the first era of AI flourishing, which you might know as a time when Knowledge-Based Systems (KBS) and Expert Systems (ES) were popular. 

The latest era, today, which some consider the second era of AI flourishing, seems to have become known as the time of Machine Learning (ML) and Deep Learning (DL). 

Using a season-oriented metaphor, the current era is depicted as the AI Spring, while the period between the first era and this now existent second era has been called the AI Winter (doing so to suggest that things were either dormant or slowed-down like how a winter season can clamp down via snow and other dampening weather conditions). 

The first era consisted of quite a bit of hand wringing about whether AI was going to become sentient and if so, how would we get there. 

Even during this second era, similar discussions and debates are still taking place, though the first era really seemed to take the matter fully in hand, and slews of philosophers joined the AI bandwagon to ponder what the future might hold and whether AI could, or could not, become truly intelligent.   

Into that fray came the birth of the monikers of weak AI and strong AI. 

Most would agree that the verbiage originated, or at least was solidified, in a paper by philosopher John Searle entitled "Minds, Brains, and Programs." 

What was the weak AI and what was the strong AI? 

They are philosophical differences about how AI might ultimately be achieved, assuming that you agree as to what it means to achieve AI (more on this in a moment).  

Let’s see what Searle said about defining the terminology of weak AI: “According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.” 

And, furthermore, he indicated this about strong AI: “But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” 

With this added clarification: “In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.” 

The rest of his famous (now infamous) paper then proceeds to indicate that he has “no objection to the claims of weak AI,” and thus he doesn’t tackle particularly the weak AI side of things, and instead his focus goes mainly toward the portent of strong AI. 

In short, he doesn’t have much faith or belief that strong AI is anything worth writing home about either. He says this: “On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.” 

Here’s what that signifies, at least as has been interpreted by some. 

Conventional AI is presumably doomed in trying to reach true AI if you stick with using “computer programs” since those programs aren’t ever going to cut it, and lack the needed capabilities to embody those things we associate with thinking and sentience. 

Humans and animals have a kind of intentionality, somehow arising from the use of our brains, and for those that believe true AI requires that intentionality, you are barking up the wrong tree via the pursuit of “computer programs” (they are the wrong stuff and can’t go that high up the intelligence ladder). 

All of this presupposes two key assumptions or propositions that Searle lays out: 

  1. “Intentionality in human beings (and animals) is a product of causal features of the brain…” 
  2. “Instantiating a computer program is never by itself a sufficient condition of intentionality.”   

If your goal then is to devise a computer program that can think, you are on a fool's errand and won't ever get there. Though it isn't completely foolish, since you might well learn a lot along the way and could produce some cool results and insights, it isn't going to be a thinker. 

I believe it is self-evident that this is a deeply intriguing philosophical consideration, one worthy of scholars and others pontificating about. 

Does this make a difference for everyday AI work that those making AI-based systems such as Alexa or Siri or robots that function on a manufacturing line are going to be worrying about and losing sleep over?   


To clarify, we are a long, long, long, long way from crafting AI systems that can exhibit human-level intelligence in any genuine meaning of the range, scope, and depth of human intelligence.  

That's a shocker to some who keep hearing about AI systems that are as adept as humans. 

Take a slow and measured breath and keep reading herein. 


Achieving True AI Is The Hearty Question 

I had earlier mentioned narrow AI. 

Some AI applications do seemingly well in narrow domains, though maybe they should have a Surgeon General type small print that identifies the numerous caveats and limitations about what that AI can do.   

AI systems today cannot undertake or showcase common-sense reasoning, which I believe we all agree humans generally have. (For those snickering about whether humans have common-sense reasoning: yes, there are people we know who seem at times to lack common sense, but that's not the same as what is overall considered common-sense reasoning, so don't conflate the two into meaninglessness.)   

To insiders of AI, today's AI applications are narrow AI, not yet AGI (Artificial General Intelligence) systems. AGI is yet another term coined to get around the fact that "AI" has been watered down as terminology and gets applied to anything people want to call AI; meanwhile, others are striving mightily to reach the purists' version of AI, which would be AGI. 

The debate about weak AI and strong AI is aimed at those that wonder whether we will be able to someday achieve true AI. 

True AI is a loaded term that needs some clarification. 

One version of true AI is an AI system that can pass the Turing Test, a simple yet telling kind of test that involves asking an AI system questions and asking a human being questions. They are essentially two distinct players in a game of wielding intelligence, of sorts, and if you cannot tell which is which, presumably the AI is the “equivalent” of human intelligence since it was indistinguishable from a human exhibiting intelligence. 

Though the Turing Test is handy, and a frequently invoked tool for judging AI's efforts to become true AI, it does have its downsides and problematic considerations. 

Anyway, how can we craft AI to succeed at the Turing Test, and have AI be ostensibly indistinguishable from human intelligence? 

One belief is that we'll need to embody into the AI system the same kind of intentionality, causality, thinking, and essence of sentience that exists in humans (and to some extent, in animals). 

As a side note, the day that we reach AI sentience is often referred to as the singularity. Some believe it will inevitably be reached and that we'll then have the equivalent of human intelligence, whilst others believe the AI will exceed human intelligence and we will arrive at a form of AI super-intelligence.   

Keep in mind that not everyone agrees with the precondition of needing to discover and re-invent artificial intentionality, asserting that we can nonetheless arrive at AI that exhibits human intelligence yet do so without tossing into the cart this squishy stuff referred to as intentionality and its variants. 

Anyway, setting aside that last aspect, the other big question is whether “computer programs” will be the appropriate tool to get us there (whatever the there might be).   

This brings up another definitional consideration. What do you mean by computer programs? 

At the time when this debate first flourished, computer programs generally meant hand-crafted coding using both conventional and somewhat unconventional programming languages, exemplified by programs such as ELIZA by Weizenbaum and SHRDLU by Winograd. 

Today, we are using Machine Learning and Deep Learning, so the obvious question on the minds of those that are still mulling over weak AI and strong AI would be whether the use of ML/DL constitutes “computer programs” or not. 

Have we progressed past the old-time computer programs and advanced into whatever ML/DL is, such that we no longer seemingly have this albatross around our neck that computer programs aren’t the rocket ship that can get us to this desired moon? 

Well, that opens another can of worms, though it is pretty much the case that most would agree that ML/DL is still a “computer program” in the meaning of even the 1980s expression, so, if you buy into the argument that any use of or a variant of computer programs is insufficient to arrive at thinking AI, we are still in the doom-and-gloom state of affairs. 

Searle, though, does cover territory relevant to the ML/DL topic, since he grants that a man-made machine could think under certain conditions: 

“Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obvious, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.”  

Please be aware that today’s ML/DL is a far cry from being the same as human neurons and a human brain.   

At best, it is a crude and extremely simplified simulation, usually deploying Artificial Neural Networks (ANNs), way below anything approaching a human biological equivalent. We might someday get closer and indeed some believe we will achieve the equivalent but don’t be holding your breath for now. 

Bringing us home to the argument about weak and strong AI, no matter what you do in either the case of weak AI or strong AI, here’s where you’ll land as per Searle: “But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?”   

And his clear-cut answer is: “This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.” 

Ouch! That smarts.

There is nonetheless a glimmer of hope for strong AI, as it could be potentially turned into something that could achieve the thinking brand of AI (says Searle): “Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.”   


Practical Significance For Today 

I hope it is obvious that the original meaning associated with weak and strong AI is far afield of what the popular press tends to use those catchy phrases for today. When trying to point out to people that their use of weak AI and strong AI is not aligned with the original meanings, they usually get huffy and tell you to not be such a stickler. Or, they tell you to knock the cobwebs out of your mind and become hipper with the present age. 

Fine, I suppose, you can change up the meaning if you want, just please be aware that it is not the same as the original.

This comes up in numerous applied uses of AI. For example, consider the emergence of AI-based true self-driving cars. True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.  

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out). 

Some media describe the semi-autonomous ADAS as weak AI and the autonomous AI as strong AI. Well, that's not aligned with the original definitions of weak AI and strong AI. You have to be willing to set the original definitions aside if you seek to use those terms in that manner. 

Personally, I don't like it. Similarly, I don't like it when weak AI and strong AI are used to characterize the differences among levels of autonomous AI. 

For example, some say that Level 4 is weak AI, while Level 5 is strong AI, but this once again is nonsensical in the nature of what those terms were intended to signify. 

If you genuinely want to try and apply the argument to true self-driving cars, there is an ongoing dispute as to whether driverless cars will need to exhibit “intentionality” to be sufficiently safe for our public roadways. 

In other words, can we craft AI without any seeming embodiment of intentionality and yet nonetheless have that AI be good enough to trust AI-based self-driving cars cruising around on our highways, byways, and everyday streets? 

It's a complex debate, and no one yet knows whether the driving domain can be considered limited enough in scope that such intentionality is not a necessity. Plus, the question within the question is what might be rated as safe, or safe enough, for society to accept self-driving cars as fellow drivers.   


For those of you wanting to get further into the weeds on this topic, you’ll also want to get introduced to the Chinese Room Argument (CRA), a foil used in Searle’s argument and something that has become a storied punching bag in the halls of AI and philosophy. 

That’s a story for another day.   

Practitioners of AI might see this whole discussion about weak AI and strong AI as academic and much ado about nothing. 

Use those phrases whatever way you want, some say. 

Hold your horses. 

Perhaps we ought to heed the words of William Shakespeare: “Words without thoughts never to heaven go.”   

The words we use do matter, and especially in the high stakes aims and outcomes of AI. 

 Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column and his podcast.] 



Graph Convolutional Networks (GCN)


The post Graph Convolutional Networks (GCN) appeared first on TOPBOTS.




In this post, we’re gonna take a close look at one of the well-known graph neural networks named Graph Convolutional Network (GCN). First, we’ll get the intuition to see how it works, then we’ll go deeper into the maths behind it.

Why Graphs?

Many problems are graphs by nature. In our world, much data comes as graphs, such as molecules, social networks, and paper citation networks.

Tasks on Graphs

  • Node classification: Predict the type of a given node
  • Link prediction: Predict whether two nodes are linked
  • Community detection: Identify densely linked clusters of nodes
  • Network similarity: Measure how similar two (sub)networks are

Machine Learning Lifecycle

In the graph, we have node features (the data of nodes) and the structure of the graph (how nodes are connected).

For the former, we can easily get the data from each node. But when it comes to the structure, it is not trivial to extract useful information. For example, if two nodes are close to one another, should we treat them differently from other pairs? How about high- and low-degree nodes? In fact, each specific task can consume a lot of time and effort just on feature engineering, i.e., distilling the structure into features.

Feature engineering on graphs. (Picture from [1])

It would be much better to somehow get both the node features and the structure as the input, and let the machine figure out what information is useful by itself.

That’s why we need Graph Representation Learning.

We want the model to learn the "feature engineering" by itself. (Picture from [1])


Graph Convolutional Networks (GCNs)

Paper: Semi-supervised Classification with Graph Convolutional Networks (2017) [3]

GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information.

It solves the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes (semi-supervised learning). 

Example of semi-supervised learning on graphs. Some nodes don't have labels (unknown nodes).

Main Ideas

As the name "Convolutional" suggests, the idea came from images and was then brought to graphs. However, while images have a fixed structure, graphs are much more complex.

Convolution idea from images to graphs. (Picture from [1])

The general idea of GCN: for each node, we get the feature information from all its neighbors and, of course, the features of the node itself. Assume we use the average() function. We do the same for all the nodes. Finally, we feed these averaged values into a neural network.

In the following figure, we have a simple example with a citation network. Each node represents a research paper, while edges are the citations. We have a pre-process step here. Instead of using the raw papers as features, we convert the papers into vectors (by using NLP embedding, e.g., tf–idf).

Let’s consider the green node. First off, we get all the feature values of its neighbors, including itself, then take the average. The result will be passed through a neural network to return a resulting vector.

The main idea of GCN. Consider the green node. First, we take the average of all its neighbors, including itself. After that, the average value is passed through a neural network. Note that, in GCN, we simply use a fully connected layer. In this example, we get 2-dimension vectors as the output (2 nodes at the fully connected layer).

In practice, we can use more sophisticated aggregate functions rather than the average function. We can also stack more layers on top of each other to get a deeper GCN. The output of a layer will be treated as the input for the next layer.

Example of 2-layer GCN: The output of the first layer is the input of the second layer. Again, note that the neural network in GCN is simply a fully connected layer (Picture from [2])

Let’s take a closer look at the maths to see how it really works.

Intuition and the Maths behind

First, we need some notations

Let’s consider a graph G as below.

From the graph G, we have an adjacency matrix A and a Degree matrix D. We also have feature matrix X.

How can we get all the feature values from neighbors for each node? The solution lies in the multiplication of A and X.

Take a look at the first row of the adjacency matrix: we see that node A has a connection to E. The first row of the resulting matrix is therefore the feature vector of E, which A connects to (figure below). Similarly, the second row of the resulting matrix is the sum of the feature vectors of D and E. By doing this, we can get the sum of all neighbors' vectors. 

Calculate the first row of the “sum vector matrix” AX
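This sum-of-neighbors step is just a matrix product. Here is a minimal NumPy sketch on a toy 5-node graph; node A connects only to E, as in the walkthrough, while the other edges and feature values are illustrative, not taken from the article's figure:

```python
import numpy as np

# Toy 5-node graph (nodes A..E). Node A connects only to E.
A = np.array([[0, 0, 0, 0, 1],   # A -- E
              [0, 0, 0, 1, 1],   # B -- D, E
              [0, 0, 0, 1, 0],   # C -- D
              [0, 1, 1, 0, 1],   # D -- B, C, E
              [1, 1, 0, 1, 0]])  # E -- A, B, D

# Feature matrix X: one 2-dimensional feature vector per node.
X = np.array([[1.0, 0.0],        # features of node A
              [0.0, 1.0],        # features of node B
              [2.0, 0.0],        # features of node C
              [0.0, 3.0],        # features of node D
              [1.0, 1.0]])       # features of node E

AX = A @ X   # row i = SUM of the feature vectors of node i's neighbors
print(AX[0]) # node A's row = features of E: [1. 1.]
print(AX[1]) # node B's row = features of D plus features of E: [1. 4.]
```

Each row of AX is exactly the "sum vector" described above, computed for all nodes at once.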
There are still a couple of things to improve here.

  1. We miss the feature of the node itself. For example, the first row of the result matrix should contain the features of node A too.
  2. Instead of the sum() function, we need to take the average, or even better, the weighted average of the neighbors' feature vectors. Why not the sum() function? Because when summing, high-degree nodes are likely to end up with huge aggregate vectors, while low-degree nodes tend to get small ones, which may later cause exploding or vanishing gradients (e.g., when using sigmoid). Besides, neural networks are sensitive to the scale of the input data. Thus, we need to normalize these vectors to get rid of these potential issues.

We can fix Problem (1) by adding an identity matrix I to A to get a new adjacency matrix Ã.

Picking lambda = 1 (meaning the feature of the node itself is just as important as those of its neighbors), we have Ã = A + I. Note that we could treat lambda as a trainable parameter, but for now we just assign it to 1; even in the paper, lambda is simply set to 1.

By adding a self-loop to each node, we have the new adjacency matrix

Problem (2): For matrix scaling, we usually multiply the matrix by a diagonal matrix. In this case, we want to take the average of the summed features, or mathematically, to scale the sum vector matrix ÃX according to the node degrees. The gut feeling tells us that the diagonal matrix used for scaling here is something related to the degree matrix D̃ (why D̃, not D? Because we're considering the degree matrix D̃ of the new adjacency matrix Ã, not of A anymore).

The problem now becomes: how do we want to scale/normalize the sum vectors? In other words:

How do we pass the information from neighbors to a specific node?

We would start with our old friend, the average. In this case, D̃ inverse (i.e., D̃^{-1}) comes into play. Basically, each element of D̃^{-1} is the reciprocal of the corresponding term on the diagonal of D̃.

For example, node A has a degree of 2, so we multiply the sum vector of node A by 1/2, while node E has a degree of 5, so we multiply the sum vector of E by 1/5, and so on.

Thus, by multiplying D̃^{-1} with the sum vector matrix ÃX, we can take the average of all neighbors' feature vectors (including the node itself).
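The self-loop and averaging steps can be sketched in a few lines of NumPy. The toy graph below is illustrative (its degrees differ from the article's figure), so node A ends up with degree 2 after the self-loop:

```python
import numpy as np

# Toy 5-node graph (nodes A..E); illustrative numbers, not the article's figure.
A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],    # features of node A
              [0.0, 1.0],
              [2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])   # features of node E

A_tilde = A + np.eye(5)        # add self-loops: Ã = A + I
deg = A_tilde.sum(axis=1)      # degrees of the new adjacency matrix
D_inv = np.diag(1.0 / deg)     # reciprocal of each degree on the diagonal

avg = D_inv @ A_tilde @ X      # row i = mean of neighbor features, incl. node i
# Node A now has degree 2 (itself plus E), so its row is ([1,0] + [1,1]) / 2,
# i.e., [1.0, 0.5]:
print(avg[0])
```

Note that D̃^{-1} Ã X normalizes each row of the sum vector matrix by that node's own degree only; the weighted variant below refines this.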

So far so good. But you may ask: how about the weighted average()? Intuitively, it should be better if we treat high- and low-degree nodes differently.

So far we are only scaling by rows (each node's own degree) while ignoring the corresponding columns (dashed boxes in the figure), i.e., the degrees of the neighbors themselves. Adding a new scaler for the columns gives us the "weighted" average. What we are doing here is putting more weight on low-degree nodes and reducing the impact of high-degree nodes. The idea of this weighted average is that we assume low-degree nodes have bigger impacts on their neighbors, whereas high-degree nodes generate lower impacts as they scatter their influence across too many neighbors.

When aggregating feature at node B, we assign the biggest weight for node B itself (degree of 3), and the lowest weight for node E (degree of 5)
Because we normalize twice (once along the rows and once along the columns), we change the exponent from "-1" to "-1/2", giving the symmetrically normalized adjacency D̃^(-1/2) Ã D̃^(-1/2)
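The symmetric normalization can be sketched like this (same illustrative toy graph as before, not the article's figure). Entry (i, j) of the normalized adjacency becomes 1 / sqrt(d_i * d_j), so a neighbor's contribution shrinks when either endpoint has a high degree:

```python
import numpy as np

# Toy 5-node graph (illustrative numbers, not the article's figure).
A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])

A_tilde = A + np.eye(5)            # self-loops
deg = A_tilde.sum(axis=1)          # degrees of A~
D_inv_sqrt = np.diag(deg ** -0.5)  # D~^(-1/2) on the diagonal

# Weighted average: D~^(-1/2) A~ D~^(-1/2) X
H = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X
print(H.shape)   # one aggregated 2-dim vector per node: (5, 2)
```

Each row of H is the weighted average of the neighborhood's feature vectors, with low-degree neighbors contributing larger weights.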

For example, if we have a multi-class classification problem with 10 classes, F will be set to 10. After obtaining the 10-dimensional vectors at layer 2, we pass these vectors through a softmax function for the prediction. 

The loss function is simply the cross-entropy error over all labeled examples, where Y_{l} is the set of node indices that have labels. 
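Putting the pieces together, here is a sketch of a full 2-layer GCN forward pass with the semi-supervised cross-entropy loss. The graph, features, weights, and label assignments are random toy data; in practice the weight matrices are trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(A):
    """Symmetrically normalized adjacency with self-loops: D~^(-1/2) A~ D~^(-1/2)."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy citation-style graph: 5 nodes, 4 input features, 3 classes (F = 3).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 4))
W1 = rng.normal(size=(4, 8))   # layer-1 weights (hidden size 8)
W2 = rng.normal(size=(8, 3))   # layer-2 weights (output size F = 3)

A_hat = normalize(A)
H1 = np.maximum(A_hat @ X @ W1, 0)   # layer 1: aggregate, transform, ReLU
Z = softmax(A_hat @ H1 @ W2)         # layer 2: per-node class probabilities

# Semi-supervised loss: cross-entropy over the labeled nodes only (Y_l).
labels = {0: 2, 3: 1}                # suppose only nodes 0 and 3 have labels
loss = -np.mean([np.log(Z[i, c]) for i, c in labels.items()])
print(Z.shape, loss > 0)             # (5, 3) True
```

Unlabeled nodes still shape the predictions through the aggregation, but only labeled nodes contribute to the loss, which is exactly what makes the setup semi-supervised.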

The number of layers

The meaning of #layers

The number of layers is the farthest distance that node features can travel. For example, with a 1-layer GCN, each node can only get information from its neighbors. The gathering of information takes place independently, at the same time, for all the nodes. 

When stacking another layer on top of the first one, we repeat the gathering process, but this time the neighbors already have information about their own neighbors (from the previous step). This makes the number of layers the maximum number of hops that information from each node can travel. So, depending on how far we think a node should gather information from the network, we can configure a proper number for #layers. But again, in a graph we normally don't want to go too far. With 6-7 hops, we almost cover the entire graph, which makes the aggregation less meaningful. 

Example: Gathering info process with 2 layers of target node i

How many layers should we stack the GCN?

In the paper, the authors also conducted experiments with shallow and deep GCNs. From the figure below, we see that the best results are obtained with a 2- or 3-layer model. Moreover, with deep GCNs (more than 7 layers), performance tends to degrade (dashed blue line). One solution is to use residual connections between the hidden layers (purple line). 

Performance over #layers. Picture from the paper [3]
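The residual-connection idea can be sketched as follows. This is only a shape-level illustration of the wiring (the helper names and the trivial adjacency are assumptions, not the paper's implementation):

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One GCN layer: aggregate via the normalized adjacency, transform, ReLU."""
    return np.maximum(A_hat @ H @ W, 0)

def gcn_layer_residual(A_hat, H, W):
    """Same layer plus a residual connection: the layer input is added back to
    its output, so signal survives through deeper stacks. This requires the
    input and output dimensions to match."""
    return gcn_layer(A_hat, H, W) + H

# Shape demo: an 8-layer stack with residual connections.
rng = np.random.default_rng(1)
n, hidden = 4, 8
A_hat = np.eye(n)                 # stand-in normalized adjacency (shapes only)
H = rng.normal(size=(n, hidden))
for W in (rng.normal(size=(hidden, hidden)) for _ in range(8)):
    H = gcn_layer_residual(A_hat, H, W)
print(H.shape)                    # (4, 8)
```

Without the `+ H` term, repeated aggregation tends to wash node representations out as depth grows; the residual path keeps each node's earlier representation in the mix.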

Take home notes

  • GCNs are used for semi-supervised learning on graphs.
  • GCNs use both node features and the structure for training.
  • The main idea of the GCN is to take the weighted average of all neighbors' node features (including its own): lower-degree nodes get larger weights. Then, we pass the resulting feature vectors through a neural network for training.
  • We can stack more layers to make GCNs deeper. Consider residual connections for deep GCNs. Normally, we go for a 2- or 3-layer GCN.
  • Maths note: when seeing a diagonal matrix, think of matrix scaling.
  • A demo of GCN with the StellarGraph library is here [5]. The library also provides many other algorithms for GNNs.

Note from the authors of the paper: The framework is currently limited to undirected graphs (weighted or unweighted). However, it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph.

What’s next?

With GCNs, it seems we can make use of both the node features and the structure of the graph. However, what if the edges have different types? Should we treat each relationship differently? How to aggregate neighbors in this case? What are the advanced approaches recently?

In the next post of the graph topic, we will look into some more sophisticated methods.

How to deal with different relationships on the edges (brother, friend, …)?


[1] Excellent slides on Graph Representation Learning by Jure Leskovec (Stanford):

[2] Video Graph Convolutional Networks (GCNs) made simple:

[3] Paper Semi-supervised Classification with Graph Convolutional Networks (2017):

[4] GCN source code:

[5] Demo with StellarGraph library:

This article was originally published on Medium and re-published to TOPBOTS with permission from the author.

Enjoy this article? Sign up for more computer vision updates.

We’ll let you know when we release more technical education.

Continue Reading


Microsoft BOT Framework — Loops



Loops are one of the basic programming structures in any programming language. In this article, I will demonstrate loops within the Microsoft BOT framework.

To follow this article clearly, please have a quick read on the basics of the Microsoft BOT framework. I wrote a couple of articles sometime back and the links are below:

Let’s Get Started.

I will be using the example of a TaxiBot described in one of my previous articles. The BOT asks some general questions and books a taxi for the user. In this article, I will provide an option for the user to choose their preferred cars for the ride. The flow will look like this:

Create a new Dialog Class for Loops

We would need 2 Dialog classes to be able to achieve this task:

  1. SuperTaxiBotDialog.cs: This is the main dialog class. The waterfall will contain all the steps as defined in the previous article.
  2. ChooseCarDialog.cs: A new dialog class that allows the user to pick preferred cars. The loop will be defined in this class.

The waterfall steps for both classes can be visualized as:

The complete code base is present on the Github page.

Important Technical Aspects

  • Link between the Dialogs: In the constructor of SuperTaxiBotDialog, register ChooseCarDialog by adding the line:
AddDialog(new ChooseCarDialog());

  • Call ChooseCarDialog from SuperTaxiBotDialog: SuperTaxiBotDialog calls ChooseCarDialog from the step SetPreferredCars, so the step's return statement should be:
await stepContext.BeginDialogAsync(nameof(ChooseCarDialog), null, cancellationToken);
  • Return the flow back from ChooseCarDialog to SuperTaxiBotDialog: Once the user has selected 2 cars, the flow has to be sent back to SuperTaxiBotDialog from the step LoopCarAsync. This is achieved by ending the ChooseCarDialog in the step LoopCarAsync:
return await stepContext.EndDialogAsync(carsSelected, cancellationToken);
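Stripped of the BOT Framework specifics, the control flow above boils down to: keep re-prompting inside ChooseCarDialog until two cars are picked, then end the dialog and hand the selections back to SuperTaxiBotDialog. A framework-free Python sketch of that logic (the function names and simulated prompt are illustrative only, not part of the SDK):

```python
def choose_car_dialog(prompt_user, max_cars=2):
    """Re-prompt until the user has picked `max_cars` cars, then
    'end the dialog' by returning the selections to the caller --
    mirroring the BeginDialogAsync/EndDialogAsync round trip."""
    selected = []
    while len(selected) < max_cars:           # the loop step (LoopCarAsync)
        car = prompt_user(f"Pick car {len(selected) + 1}:")
        selected.append(car)
    return selected                           # EndDialogAsync(carsSelected)

def super_taxi_bot_dialog(prompt_user):
    # ... earlier waterfall steps (pickup, destination, etc.) ...
    cars = choose_car_dialog(prompt_user)     # BeginDialogAsync(ChooseCarDialog)
    return f"Booked a taxi; preferred cars: {', '.join(cars)}"

# Simulated user input for demonstration:
answers = iter(["Sedan", "SUV"])
print(super_taxi_bot_dialog(lambda q: next(answers)))
# → Booked a taxi; preferred cars: Sedan, SUV
```

The key design point is the same as in the C# version: the loop lives entirely inside the child dialog, and the parent only sees the final result.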


Once the project is executed using BOT Framework Emulator, the output would look like:

Hopefully, this article will help readers implement a loop with the Microsoft BOT Framework. For questions, reach out to me.






The Bleeding Edge of Voice




Tapaan Chauhan

This fall, a little-known event is starting to make waves. As COVID dominates the headlines, an event called "Voice Launch" is pulling together an impressive roster of start-ups and voice tech companies intending to uncover the next big ideas and start-ups in voice.

While voice tech has been around for a while, as the accuracy of speech recognition improves, it moves into its prime. “As speech recognition moves from 85% to 95% accuracy, who will use a keyboard anymore?” says Voice Launch organizer Eric Sauve. “And that new, more natural way to interact with our devices will usher in a series of technological advances,” he added.
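To see why that jump matters, consider the probability that an entire utterance is transcribed with zero errors, under the simplifying assumption of independent per-word accuracy (a back-of-the-envelope illustration, not a figure from the article):

```python
def utterance_accuracy(word_acc, n_words):
    """Probability an n-word utterance has zero errors,
    assuming independent per-word accuracy."""
    return word_acc ** n_words

for acc in (0.85, 0.95):
    p = utterance_accuracy(acc, 10)
    print(f"{acc:.0%} word accuracy -> {p:.1%} chance a 10-word sentence is error-free")
# 85% -> ~19.7%;  95% -> ~59.9%
```

A ten-point gain in per-word accuracy roughly triples the chance a ten-word sentence comes out perfectly, which is why that range is often framed as the usability threshold.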

Voice technology has been dreamt of and worked on for decades all over the world. Why? The answer is straightforward: voice recognition lets consumers multitask by merely speaking to their Google Home, Amazon Alexa, Siri, and so on. Digital voice recognition works by recording a sample of a person's speech and quickly converting it into written text using machine learning and sophisticated algorithms. Voice input is simply a more efficient form of computing, says Mary Meeker in her 'Annual Internet Trends Report.' In fact, according to ComScore, 50% of all searches will be done by voice by 2020, and according to Gartner, 30% of searches will be done without even a screen. As voice becomes part of the things we use every day, like our cars and phones, it will become the new "norm."

The event includes a number of inspiration sessions meant to help start-ups and founders pick the best strategies. Companies presenting here include industry leaders like Google and Amazon and less known hyper-growth voice tech companies like Deepgram and Balto and VCs like OMERS Ventures and Techstars.

But the focus of the event is the voice tech start-ups themselves, and this year’s event has some interesting participants. Start-ups will pitch their ideas, and the audience will vote to select the winners. The event is a cross between a standard pitchfest and Britain’s Got Talent.

