

Coping With A Potential Mobility Frenzy Due To AI Autonomous Cars




If true self-driving cars become available, would we become more enamored of using cars to take many more short trips, thus increasing traffic and pollution? (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Walk or drive?

That’s sometimes a daily decision that we all need to make.

A colleague the other day drove about a half block down the street from his office, just to get a coffee from his favorite coffee shop.

You might assume that foul weather prompted him to use his car for the half-block coffee quest rather than hoofing the distance on foot.

Nope, there wasn’t any rain, no snow, no inclement weather of any kind.

Maybe he had a bad leg or other ailments?

No, he’s in perfectly good health and was readily capable of strutting the half-block distance.

Here in California, we are known for our car culture and devotion to using our automobiles for the smallest of distances. Our motto seems to be that you’d be foolhardy to walk when you have a car that can get you to your desired destination, regardless of the distance involved.

Numerous publicly stated concerns have been raised about this kind of mindset.

Driving a car when you could have walked is tantamount to producing excess pollution that could have been otherwise avoided. The driving act also causes the consumption of fuel, along with added wear-and-tear on the car and the roadway infrastructure, all of which seem unnecessary for short walkable trips.

And don’t bring up obesity and how valuable walking can be to your health; that point might bring forth fisticuffs from drivers who believe fervently in using their cars to go anyplace and every place, whenever they wish.

One aspect that likely factored into his decision was whether there was a place to park his car, since the coffee shop was not a drive thru.

We all know how downright exasperating it can be to find a parking spot.

Suppose that parking never became a problem again.

Suppose that using a car to go a half-block distance was always readily feasible.

Suppose that you could use a car for any driving distance and could potentially even use a car to get from your house to a neighbor’s home just down the street from you.

Some of us, maybe a lot of us, might become tempted to use cars a lot more than we do now.

In the United States, we drive about 3.22 trillion miles per year. That figure, though, reflects the various barriers and hurdles involved in opting to use a car.

Here’s an intriguing question: If we had true self-driving cars available, ready 24×7 to give you a lift, would we become more enamored of using cars and taking many more short trips?

Think of the zillions of daily short trips that might be done via car use.

Add to that the ease of taking longer trips that you might not make today, perhaps driving to see your grandma when you normally wouldn’t feel up to the driving task.

The 3.22 trillion miles of car usage could jump dramatically.

It could rise by, say, 10% or 20%, or maybe double or triple.

It could generate an outsized mobility frenzy.

Let’s unpack the matter and explore the implications of this seemingly uncapped explosion of car travel.

For the grand convergence leading to the advent of self-driving cars, see my discussion here:

The emergence of self-driving cars is like trying to achieve a moonshot, here’s my explanation:

There are ways for a self-driving car to look conspicuous, I’ve described them here:

To learn about how self-driving cars will be operated non-stop, see my indication here:

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional cars, so it’s unlikely to have much of an impact on how many miles we opt to travel.

For semi-autonomous cars, it is equally important to mention a disturbing aspect that has been arising: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the car, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Distances Traveled

For Level 4 and Level 5 true self-driving cars, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

For those of you who use ridesharing today, you’ll be joined by millions upon millions of other Americans doing the same, except there won’t be a human driver behind the wheel anymore.

Similar to requesting a ridesharing trip of today, we will all merely consult our smartphone and request a lift. The nearest self-driving car will respond to your request and arrive to pick you up.

Some believe that we’ll have so many self-driving cars on our roads that they’ll be quick to reach you.

Furthermore, these driverless cars will be roaming and meandering constantly, awaiting the next request for a pick-up, and thus will be statistically close to you whenever you request a ride.

Nobody is sure what the cost to use self-driving cars will be, but let’s assume for the moment that the cost is less than today’s human-driven ridesharing services. Indeed, assume that the cost is a lot lower, perhaps several times less than a human-driven alternative.

Let’s put two and two together.

Ubiquitous driverless cars, ready to give you a lift at minimal cost, able to whisk you to whatever destination you specify.

The AI that’s driving the car won’t berate you for going a half-block.

No need to carry on idle chit chat with the AI.

It’s like going for a ride in a chauffeur-driven car, and you are in full command of saying where you want to go, without any backlash from the driver (the AI isn’t going to whine or complain, though perhaps there will be a mode that you can activate if that’s the kind of driving journey you relish).

This is going to spark induced demand on steroids.

Induced demand refers to suppressed demand for a product or service that springs forth once that product or service becomes more readily available.

The classic example involves adding a new lane to an existing highway or freeway. We’ve all experienced the circumstance whereby the new lane doesn’t end up alleviating traffic.

Why not?

Because there is usually suppressed demand that comes out of the woodwork to fill up the added capacity. People who before were unwilling to get onto the roadway due to the traffic congestion figure that the added lane now makes it viable to do so, yet once they start to use the highway, it ends up with so much traffic that the lanes get jammed once again.

With the advent of driverless cars, and once the availability of using car travel enters a nearly friction-free mode, the logical next step is that people will use car travel abundantly.

All those short trips that might have been costly to take or might have required a lot of waiting time, you’ll now be able to undertake those with ease.

In fact, some believe that self-driving cars could undermine micro-mobility too.

Micro-mobility is the use of electric scooters, shared bikes, and electric skateboards, which today are gradually growing in popularity to go the “last mile” to your destination.

If a driverless car can take you directly to your final destination, no need to bother with some other travel option such as micro-mobility.

How far down this self-driving car rabbit hole might we go?

There could be the emergence of a new cultural norm that you always are expected to use a driverless car, and anyone dumb enough or stubborn enough to walk or ride a bike is considered an oddball or outcast.

Is this what we want?

Could it cause some adverse consequences and spiral out-of-control?

For info about self-driving cars as a form of Personal Rapid Transit (PRT), see my explanation here:

On the use of self-driving cars for family vacations, see my indication:

In terms of idealism about self-driving cars, here’s my analysis:

A significant aspect will be induced demand for AI autonomous cars, which I explain here:

Mobility Frenzy Gets A Backlash

Well, it could be that we are sensible enough that we realize there isn’t a need to always use a driverless car when some alternative option exists.

Even if driverless cars are an easy choice, our society might assert that we should still walk and ride our bikes and scooters.

Since driverless cars are predicted to reduce the number of annual deaths and injuries due to car accidents, people might be more open to riding bikes and scooters, plus pedestrians might be less worried about getting run over by a car.

Futuristic cities and downtown areas might ban any car traffic in their inner core area. Self-driving cars will get you to the outer ring of the inner core, and from that point, you’ll need to walk or use a micro-mobility selection.

From a pollution perspective, today’s combustion-engine cars are replete with tailpipe emissions. The odds are that self-driving cars will be EVs (Electric Vehicles), partially due to the vast amounts of electrical power needed for the AI and on-board computer processors. As such, the increased use of driverless cars won’t boost pollution on par with gasoline-powered cars.

Nonetheless, there is a carbon footprint associated with electrically charging EVs. We might become sensitive to how much electricity we are consuming by taking so many driverless car trips. This could cause people to think twice before using a self-driving car.


Keep in mind that we are assuming that self-driving cars will be priced so low on a ridesharing basis that everyone will readily be able to afford to use driverless cars.

It could be that the cost is not quite as low as assumed, in which case the cost becomes a mitigating factor to dampen the mobility frenzy.

Another key assumption is that driverless cars will be plentiful and roaming so that they are within a short distance of anyone requesting a ride.

My colleague would have likely walked to the coffee shop in a world of self-driving cars if the driverless car was going to take longer to reach him than the time it would take to just meander over on his own.

And this future era of mobility-for-all is going to occur many decades from now, since we today have some 250 million conventional cars in the United States, and it will take many years to gradually mothball them as a new stock of self-driving cars becomes prevalent.

Are self-driving cars going to be our Utopia, or might it be a Dystopia in which people no longer walk or ride bikes and instead get into their mobility bubbles and hide from their fellow humans while making the shortest of trips?

The frenzy would be of our own making, and hopefully we can shape it to ensure that we remain a society of people who walk, though I’m sure that some will still claim that walking is overrated.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]



Graph Convolutional Networks (GCN)


In this post, we’ll take a close look at one of the best-known graph neural networks, the Graph Convolutional Network (GCN). First, we’ll build an intuition for how it works; then we’ll go deeper into the math behind it.

Why Graphs?

Many problems are graphs by nature. In our world, much data is naturally a graph, such as molecules, social networks, and paper citation networks.

Tasks on Graphs

  • Node classification: Predict the type of a given node
  • Link prediction: Predict whether two nodes are linked
  • Community detection: Identify densely linked clusters of nodes
  • Network similarity: How similar are two (sub)networks

Machine Learning Lifecycle

In the graph, we have node features (the data of nodes) and the structure of the graph (how nodes are connected).

For the former, we can easily get the data from each node. But when it comes to the structure, extracting useful information from it is not trivial. For example, if two nodes are close to one another, should we treat them differently from other pairs? How about high- and low-degree nodes? In fact, each specific task can consume a lot of time and effort just on feature engineering, i.e., distilling the structure into our features.

Feature engineering on graphs. (Picture from [1])

It would be much better to somehow feed both the node features and the structure in as input, and let the machine figure out what information is useful by itself.

That’s why we need Graph Representation Learning.

We want the network to learn the “feature engineering” by itself. (Picture from [1])


Graph Convolutional Networks (GCNs)

Paper: Semi-supervised Classification with Graph Convolutional Networks (2017) [3]

GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information.

It solves the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes (semi-supervised learning).

Example of semi-supervised learning on graphs: some nodes don’t have labels (unknown nodes).

Main Ideas

As the name “Convolutional” suggests, the idea came from images and was then brought to graphs. However, while images have a fixed grid structure, graphs are much more complex.

The convolution idea, carried from images to graphs. (Picture from [1])

The general idea of GCN: for each node, we gather the feature information from all its neighbors and, of course, the node’s own features. Assume we use the average() function. We do the same for all the nodes. Finally, we feed these averaged values into a neural network.
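This per-node recipe can be sketched in a few lines of NumPy. The tiny graph, the feature values, and the random weights below are all made up purely for illustration:

```python
import numpy as np

# Toy graph: 3 nodes with edges 0-1 and 1-2 (a made-up example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [4.0, 0.0]])                  # 2 features per node

A_self = A + np.eye(3)                      # also count each node's own feature
avg = (A_self @ X) / A_self.sum(axis=1, keepdims=True)  # neighborhood average

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))                 # a plain fully connected layer
H = np.maximum(avg @ W, 0.0)                # ReLU(avg @ W): one GCN-style layer
```

Node 0’s row of `avg` is the mean of its own features and node 1’s, which is exactly the “average, then feed to a network” step described above.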

In the following figure, we have a simple example with a citation network. Each node represents a research paper, while edges are citations. We have a pre-processing step here: instead of using the raw papers as features, we convert the papers into vectors (by using an NLP embedding, e.g., tf–idf).

Let’s consider the green node. First off, we get all the feature values of its neighbors, including itself, then take the average. The result will be passed through a neural network to return a resulting vector.

The main idea of GCN. Consider the green node. First, we take the average of all its neighbors, including itself. After that, the average value is passed through a neural network. Note that, in GCN, we simply use a fully connected layer. In this example, we get 2-dimensional vectors as the output (2 nodes at the fully connected layer).

In practice, we can use more sophisticated aggregate functions than the average. We can also stack more layers on top of each other to get a deeper GCN. The output of one layer is treated as the input of the next.

Example of a 2-layer GCN: the output of the first layer is the input of the second layer. Again, note that the neural network in GCN is simply a fully connected layer. (Picture from [2])

Let’s take a closer look at the maths to see how it really works.

Intuition and the Maths behind

First, we need some notation.

Let’s consider a graph G as below.

From the graph G, we have an adjacency matrix A and a degree matrix D. We also have a feature matrix X.

How can we get all the feature values from neighbors for each node? The solution lies in the multiplication of A and X.

Take a look at the first row of the adjacency matrix: we see that node A has a connection to E. The first row of the resulting matrix is the feature vector of E, which A connects to (figure below). Similarly, the second row of the resulting matrix is the sum of the feature vectors of D and E. By doing this, we get the sum of all neighbors’ vectors for each node.

Calculating the first row of the “sum vector matrix” AX.
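This multiplication is easy to check in NumPy. The exact adjacency matrix below is an assumption, chosen only to be consistent with the text (node A connects to E; node B connects to D and E), and the feature values are made up:

```python
import numpy as np

# Nodes in order A, B, C, D, E (hypothetical graph matching the text).
A = np.array([[0, 0, 0, 0, 1],    # A - E
              [0, 0, 0, 1, 1],    # B - D, E
              [0, 0, 0, 0, 1],    # C - E
              [0, 1, 0, 0, 1],    # D - B, E
              [1, 1, 1, 1, 0]],   # E - A, B, C, D
             dtype=float)
X = np.arange(10, dtype=float).reshape(5, 2)   # one feature row per node

AX = A @ X
# Row 0 of AX equals node E's feature vector (A's only neighbor);
# row 1 equals the sum of D's and E's feature vectors.
```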
There are still some things to improve here.

  1. We miss the feature of the node itself. For example, the first row of the result matrix should contain the features of node A too.
  2. Instead of the sum() function, we need to take the average, or even better, the weighted average of the neighbors’ feature vectors. Why not the sum() function? Because with sum(), high-degree nodes are likely to end up with huge aggregate vectors, while low-degree nodes tend to get small ones, which may later cause exploding or vanishing gradients (e.g., when using sigmoid). Besides, neural networks seem to be sensitive to the scale of the input data. Thus, we need to normalize these vectors to get rid of these potential issues.

We can fix problem (1) by adding an identity matrix I to A to get a new adjacency matrix Ã.

Picking lambda = 1 (meaning the feature of a node itself is just as important as those of its neighbors), we have Ã = A + I. Note that we could treat lambda as a trainable parameter, but for now we simply set lambda to 1, as is done in the paper.

By adding a self-loop to each node, we have the new adjacency matrix Ã.
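In code this is a one-liner, and the degree matrix D̃ of Ã falls out directly. The 5-node graph here is hypothetical, picked so the degrees match those quoted in the text (A has degree 2, B degree 3, E degree 5, counting self-loops):

```python
import numpy as np

A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)

A_tilde = A + np.eye(5)                    # A~ = A + I: a self-loop on every node
D_tilde = np.diag(A_tilde.sum(axis=1))     # degree matrix D~ of A~
```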

For problem (2), matrix scaling is usually done by multiplying the matrix by a diagonal matrix. In this case, we want to take the average of the summed features, or mathematically, to scale the sum vector matrix ÃX according to the node degrees. Gut feeling tells us that the diagonal matrix used for scaling here is related to the degree matrix D̃ (why D̃, not D? Because we are considering the degree matrix of the new adjacency matrix Ã, not of A anymore).

The problem now becomes: how do we want to scale/normalize the sum vectors? In other words:

How do we pass the information from neighbors to a specific node?

We start with our old friend, the average. In this case, D̃ inverse (i.e., D̃^{-1}) comes into play. Each element on the diagonal of D̃^{-1} is the reciprocal of the corresponding term of the diagonal matrix D̃.

For example, node A has a degree of 2, so we multiply the sum vector of node A by 1/2, while node E has a degree of 5, so we multiply the sum vector of E by 1/5, and so on.

Thus, by multiplying D̃^{-1} with ÃX, we can take the average of all neighbors’ feature vectors (including the node itself).
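Putting the last two steps together, D̃^{-1}ÃX is just a row-wise average. This sketch uses the same hypothetical 5-node graph as above (an assumption consistent with the degrees the text quotes), with made-up feature values:

```python
import numpy as np

A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)
X = np.arange(10, dtype=float).reshape(5, 2)

A_tilde = A + np.eye(5)
d = A_tilde.sum(axis=1)               # degrees with self-loops: [2, 3, 2, 3, 5]
H = (A_tilde @ X) / d[:, None]        # D~^{-1} A~ X, without forming D~ explicitly

# Row 0 is the average of node A's and node E's features (A's degree is 2).
```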

So far so good. But you may ask: how about the weighted average? Intuitively, it should be better if we treat high- and low-degree nodes differently.

So far we are only scaling by rows while ignoring the corresponding columns (dashed boxes), so we add a new scaler for the columns.

The new scaler gives us the “weighted” average. What we are doing here is putting more weight on low-degree nodes and reducing the impact of high-degree nodes. The idea behind this weighted average is that we assume low-degree nodes have bigger impacts on their neighbors, whereas high-degree nodes generate lower impacts as they scatter their influence across too many neighbors.

When aggregating the features at node B, we assign the biggest weight to node B itself (degree of 3) and the lowest weight to node E (degree of 5). Because we normalize twice, we change the exponent from “-1” to “-1/2”, giving the symmetric normalization D̃^{-1/2} Ã D̃^{-1/2}.
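With this symmetric normalization, entry (i, j) of the normalized adjacency becomes 1/√(dᵢdⱼ), so a high-degree neighbor like E contributes less. A sketch, again on the hypothetical 5-node graph assumed earlier:

```python
import numpy as np

A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)

A_tilde = A + np.eye(5)
d = A_tilde.sum(axis=1)                    # degrees with self-loops

D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
S = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # D~^{-1/2} A~ D~^{-1/2}

# For node B (row 1): its own weight is 1/3, while neighbor E's weight is
# 1/sqrt(3 * 5), the smallest in the row - matching the figure caption.
```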

For example, if we have a multi-class classification problem with 10 classes, F (the output dimension of the second layer) is set to 10. After obtaining the 10-dimensional vectors at layer 2, we pass these vectors through a softmax function for the prediction.

The loss function is simply the cross-entropy error over all labeled examples, where Y_{l} is the set of node indices that have labels.
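A masked cross-entropy over only the labeled nodes can be sketched like this (the logits and labels are made up; -1 marks an unlabeled node):

```python
import numpy as np

logits = np.array([[2.0, 0.5],
                   [0.1, 1.9],
                   [0.3, 0.2]])      # network outputs for 3 nodes, 2 classes
labels = np.array([0, 1, -1])        # node 2 is unlabeled
labeled = labels >= 0                # the set Y_l of labeled node indices

# Row-wise softmax, then the negative log-probability of each true class,
# averaged over the labeled nodes only.
e = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = e / e.sum(axis=1, keepdims=True)
loss = -np.log(probs[labeled, labels[labeled]]).mean()
```

The unlabeled node contributes nothing to the loss, which is the semi-supervised part of the setup.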

The number of layers

The meaning of #layers

The number of layers is the farthest distance that node features can travel. For example, with a 1-layer GCN, each node can only get information from its neighbors. The gathering of information takes place independently, at the same time, for all the nodes.

When stacking another layer on top of the first one, we repeat the gathering process, but this time the neighbors already have information about their own neighbors (from the previous step). This makes the number of layers the maximum number of hops that each node’s information can travel. So, depending on how far we think a node should get information from the network, we can configure a proper number for #layers. But, again, in graphs we normally don’t want to go too far. With 6–7 hops, we almost reach the entire graph, which makes the aggregation less meaningful.
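The “one hop per layer” claim can be verified by checking which entries of powers of Ã are nonzero (same hypothetical 5-node graph as assumed earlier):

```python
import numpy as np

A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)
A_tilde = A + np.eye(5)

hop1 = A_tilde > 0                   # who can influence whom after 1 layer
hop2 = (A_tilde @ A_tilde) > 0       # after 2 layers

# After one layer, node A (row 0) only sees itself and E; after two layers,
# E has relayed information from every other node, covering the whole graph.
```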

Example: the gathering-info process with 2 layers for target node i.

How many layers should we stack the GCN?

In the paper, the authors also conducted experiments with shallow and deep GCNs. From the figure below, we see that the best results are obtained with a 2- or 3-layer model. With a deep GCN (more than 7 layers), performance tends to drop (dashed blue line). One solution is to use residual connections between hidden layers (purple line).

Performance over #layers. (Picture from the paper [3])

Take home notes

  • GCNs are used for semi-supervised learning on graphs.
  • GCNs use both node features and the graph structure for training.
  • The main idea of GCN is to take the weighted average of all neighbors’ node features (including the node itself): lower-degree nodes get larger weights. Then we pass the resulting feature vectors through a neural network for training.
  • We can stack more layers to make GCNs deeper; consider residual connections for deep GCNs. Normally, we go for a 2- or 3-layer GCN.
  • Math note: when you see a diagonal matrix, think of matrix scaling.
  • There is a demo of GCN with the StellarGraph library [5]. The library also provides many other GNN algorithms.

Note from the authors of the paper: The framework is currently limited to undirected graphs (weighted or unweighted). However, it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph.

What’s next?

With GCNs, it seems we can make use of both the node features and the structure of the graph. However, what if the edges have different types? Should we treat each relationship differently? How do we aggregate neighbors in that case? What advanced approaches have appeared recently?

In the next post of the graph topic, we will look into some more sophisticated methods.

How do we deal with different relationship types on the edges (brother, friend, …)?


[1] Excellent slides on Graph Representation Learning by Jure Leskovec (Stanford):

[2] Video Graph Convolutional Networks (GCNs) made simple:

[3] Paper Semi-supervised Classification with Graph Convolutional Networks (2017):

[4] GCN source code:

[5] Demo with StellarGraph library:

This article was originally published on Medium and re-published to TOPBOTS with permission from the author.




Microsoft BOT Framework — Loops



Loops are one of the basic programming structures in any programming language. In this article, I will demonstrate loops within the Microsoft BOT framework.

To follow this article clearly, please have a quick read on the basics of the Microsoft BOT framework. I wrote a couple of articles some time back, and the links are below:

Let’s Get Started.

I will be using the example of a TaxiBot described in one of my previous articles. The BOT asks some general questions and books a taxi for the user. In this article, I will provide an option for the user to choose their preferred cars for the ride. The flow will look like this:

Create a new Dialog Class for Loops

We would need 2 Dialog classes to be able to achieve this task:

  1. SuperTaxiBotDialog.cs: This is the main dialog class. The waterfall contains all the steps as defined in the previous article.
  2. ChooseCarDialog.cs: A new dialog class that allows the user to pick preferred cars. The loop is defined in this class.

The waterfall steps for both classes can be visualized as:

The complete code base is present on the Github page.

Important Technical Aspects

  • Link between the Dialogs: In the constructor initialization of SuperTaxiBotDialog, add a dialog for ChooseCarDialog by adding the line:
AddDialog(new ChooseCarDialog());


  • Call ChooseCarDialog from SuperTaxiBotDialog: SuperTaxiBotDialog calls ChooseCarDialog from the step SetPreferredCars, hence the return statement of the step should be like:
await stepContext.BeginDialogAsync(nameof(ChooseCarDialog), null, cancellationToken);
  • Return the flow back from ChooseCarDialog to SuperTaxiBotDialog: Once the user has selected 2 cars, the flow has to be sent back to SuperTaxiBotDialog from the step LoopCarAsync. This is achieved by ending the ChooseCarDialog in the step LoopCarAsync.
return await stepContext.EndDialogAsync(carsSelected, cancellationToken);


Once the project is executed using BOT Framework Emulator, the output would look like:

Hopefully, this article will help the readers in implementing a loop with Microsoft BOT framework. For questions: Hit me.






The Bleeding Edge of Voice




Tapaan Chauhan

This fall, a little-known event is starting to make waves. As COVID dominates the headlines, an event called “Voice Launch” is pulling together an impressive roster of start-ups and voice tech companies intending to uncover the next big ideas and start-ups in voice.

While voice tech has been around for a while, as the accuracy of speech recognition improves, it moves into its prime. “As speech recognition moves from 85% to 95% accuracy, who will use a keyboard anymore?” says Voice Launch organizer Eric Sauve. “And that new, more natural way to interact with our devices will usher in a series of technological advances,” he added.

Voice technology has been dreamt of and worked on for decades all over the world. Why? The answer is straightforward: voice recognition allows consumers to multitask by merely speaking to their Google Home, Amazon Alexa, Siri, and so on. Digital voice recording works by capturing a sample of a person’s speech and quickly converting it into written text using machine learning and sophisticated algorithms. Voice input is simply the more efficient form of computing, says Mary Meeker in her Annual Internet Trends Report. In fact, according to ComScore, 50% of all searches will be done by voice by 2020, and according to Gartner, 30% of searches will be done without even a screen. As voice becomes part of the things we use every day, like our cars and phones, it will become the new “norm.”

The event includes a number of inspiration sessions meant to help start-ups and founders pick the best strategies. Companies presenting include industry leaders like Google and Amazon, lesser-known hyper-growth voice tech companies like Deepgram and Balto, and VCs like OMERS Ventures and Techstars.

But the focus of the event is the voice tech start-ups themselves, and this year’s event has some interesting participants. Start-ups will pitch their ideas, and the audience will vote to select the winners. The event is a cross between a standard pitchfest and Britain’s Got Talent.

