The Trolley Problem Undeniably Applies to AI Autonomous Cars 




Lance Eliot says the ethical dilemma posed by the Trolley Problem, a choice between two paths each leading to a dire outcome, will face AI self-driving cars. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

The famous, or perhaps notably infamous, Trolley Problem is considered one of the most controversial and outright fist-fighting topics in the field of AI autonomous self-driving cars. If you mention the Trolley Problem to any industry insider, you’ll likely get one of two reactions. One response comes from those that consider themselves in-the-know gurus and will immediately discount the Trolley Problem as entirely hypothetical and obtuse, looking at you askance as though you have naively fallen for some kind of scam or trickery. Others might concede reluctantly that it is an interesting topic for discussion, perhaps even worth seriously pondering, but otherwise not especially relevant to any day-to-day practical matters involving self-driving cars.   

I’d like to see if we can give the matter serious consideration and its proper due. 

Avid readers will realize that I originally covered this topic in 2017, but it seems worthwhile to give it some fresh air and revive it, doing so with additional insights and more vigorous commentary as a solid response to the widespread downplaying of the Trolley Problem’s relevancy. For my earlier piece, see the link here: 

To get us all on the same page, the place to start entails clarifying what the Trolley Problem consists of.   

Turns out that it is an ethically stimulating thought experiment that traces back to the early 1900s. As such, the topic has been around for quite a while and more recently has become generally associated with the advent of self-driving cars. In brief, imagine that a trolley is going down the tracks and there is a fork up ahead. If the trolley continues on its present course, alas, there is someone stuck on the tracks further along, and they will get run down and killed. You are standing next to a switch that will allow you to redirect the trolley onto the forking track and thus avoid killing that person. 

Presumably, obviously, you would invoke the switchover.   

But there is a hideous twist, namely that the forked track also has someone entangled on it, and by diverting the trolley you will kill that person instead. 

This is one of those no-win situations. 

Whichever choice you make, a person is going to be killed. 

You might be tempted to say that you do not have to make a choice and therefore you can readily sidestep the whole matter. Not really, since by doing nothing you are essentially “agreeing” to have the person on the straight-ahead path killed. You cannot seemingly avoid your culpability by shrugging your shoulders and opting to do nothing; instead, you are inextricably entwined in the situation.   

Given this preliminary setup of the Trolley Problem as a lose-lose with one person at stake in either option, it does not especially spark an ethical dilemma, since each outcome is, woefully, the same.   

The matter is usually altered in various ways to try and see how you might respond to a more ethically challenging circumstance. 

For example, suppose you can discern that the straight-ahead track has a child on it, while the forked track has an adult. 

What now? 

Well, you might attempt to justify using the switch to get the trolley to fork onto the track with the adult, doing so under the logic that the adult has already lived some substantive part of their life, while the child is only at the beginning of their life and perhaps ought to be given a chance for a longer existence. 

How does that seem to you? 

Some buy into it, some do not. 

Some might argue that every person has an equal “value” of living and it is untoward to prejudge that the child should live while the adult is to die.   

Some would argue that the adult should be the one that is kept alive since they have already shown that they can survive longer than the child.   

Here’s another variation.   

Both are adults, and the one on the forked path is Einstein. 

Does this change your viewpoint about which way to direct the trolley? 

Some would say that averting the trolley away from Einstein is the “right” choice, saving him and allowing him to live and inevitably offer the tremendous insights that he was destined to provide (we are assuming in this scenario a younger, adult-era Einstein).   

Not so fast, some might say, wondering whether the other adult, the one on the straight-ahead track, might be destined to be equally great or to make even more notable contributions to society (who’s to know?).   

Anyway, I think you can see how the ethical dilemmas can be readily postulated with the Trolley Problem template.   

Usually, the popular variants involve the number of people that are stuck on the tracks. For example, assume there are two people trapped on the straight-ahead path, while only one person is jammed on the forked path.   

Some would say this is an “easily” answered variant, since sparing two people is presumed preferable to saving just one. In that sense, you are willing to consider that lives are somewhat additive, and the more there are, the more ethically favorable that particular choice becomes.   

Not everyone would concur with that logic. 

In any case, we now have placed on the table herein the crux of the Trolley Problem. 

I realize that your initial reaction likely is that it is a mildly interesting and thought-provoking notion but seems overly abstract and does not offer any practical utility. 

Some object and point out that they do not envision themselves ever coming upon a trolley and perchance finding themselves in this kind of obtuse pickle. 

Shift gears.   

A firefighter has rushed up to a burning building. There is a man in the building that is poking out of a window, acrid smoke billowing around him, and yelling to be saved. What should the firefighter do? 

Well, of course, we would hope that the firefighter would seek to rescue the man. But, wait, there is the sound of a child, screaming uncontrollably, stuck in a bedroom inside the burning building. The firefighter has to choose which to try and rescue, and for which the firefighter will not have time to save both of them. If the firefighter chooses to save the child, the man will perish in the fire. If the firefighter chooses to save the man, the child will succumb to the fire.  

Does this seem familiar? 

The point is that there are potentially real-life related scenarios that exhibit the underlying parameters and the overarching premise of the Trolley Problem.   

Remove the trolley from the problem as stated and look at the structure or elements that underpin the circumstances (we can still refer to the matter as the Trolley Problem for sake of reference, yet remove the trolley and still retain the core essentials).   

We have this: 

  • There are dire circumstances of a life-or-death nature (more like death-or-death) 
  • All outcomes are horrific (even the do-nothing option) and lead to fatality 
  • Time is short, and urgency and immediacy are involved 
  • Options are extremely limited, and a forced choice is required 

You might try to argue that there is not a “forced choice” since there is the do-nothing option always available in these scenarios, but we are going to assume that the person faced with the predicament is aware of what is taking place and realizes they are making a choice even if they choose to do nothing.   

Obviously, if the person confronted with the choice is unaware of the ramifications of doing nothing, they perhaps could be said to have not been cognizant of the fact that they tacitly made a choice. Likewise, someone that miscomprehends the situation might falsely believe that they do not have to make a choice. 

Assume that the person involved is fully aware of the do-nothing option and must choose either to do nothing or to not do nothing (I emphasize this because people mulling over the Trolley Problem will sometimes attempt to weasel out of the setup by saying that do-nothing is the “right” choice, since they have then averted making any decision; the selection of do-nothing is in fact considered a decision in this setup). 

As an aside, in the case of the burning building, if the firefighter does nothing, presumably both the man and the child will die, so this is somewhat off-kilter from the Trolley Problem as presented; thus, it is perhaps more evident that the firefighter will almost certainly make a choice. It also differs from the classic Trolley Problem in that the firefighter can always point out, later on, that doing nothing was certainly worse than making a choice, no matter which choice was ultimately selected. 

One other point, this is not particularly a so-called Hobson’s choice scenario, which sometimes is misleadingly likened to the Trolley Problem. 

Hobson’s choice is based on the historic story of a horse owner who told those wanting a horse that they could choose either the horse closest to the barn door or no horse at all. As such, the upside is taking the horse as proffered, while the downside is that you end up without a horse. This is a take-it-or-leave-it style of decision-making scenario, and decidedly not the same as the Trolley Problem. 

With all of the background setting the stage, we can next consider how this seems to be an issue related to self-driving cars.   

The focus will be on AI-based true self-driving cars, which deserves clarity as to what that phrasing means. 

For my framework about AI autonomous cars, see the link here: 

Why this is a moonshot effort, see my explanation here:   

For more about the levels as a type of Richter scale, see my discussion here: 

For the argument about bifurcating the levels, see my explanation here:   

The Role of AI-Based Self-Driving Cars 

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5; we don’t even know whether this will be possible to achieve, nor how long it will take to get there. 

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).   

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. 

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: 

To be wary of fake news about self-driving cars, see my tips here: 

The ethical implications of AI driving systems are significant, see my indication here: 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms:   

Self-Driving Cars And The Trolley Problem   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. 

The AI is doing the driving.   

Here’s the vexing question: Will the AI of true self-driving cars have to make Trolley Problem decisions during the act of driving the self-driving vehicle? 

The reaction by some insiders is that this is a preposterous idea and utterly miscast, labeling the whole matter as a falsehood and something that has no bearing on self-driving cars. 


Start with the first premise that is usually given, which is that there is no such thing as a Trolley Problem in the act of driving a car.   

Anyone trying to use the “never happens” argument (for nearly anything) finds themselves on rather shaky and porous ground, since all it takes is a single showing of existence to prove that the “never” is incorrect.   

I can easily provide that existence proof.   

Peruse the news about car crashes and you’ll find, for example, this recent headline: “Driver who hit pedestrians on sidewalk was veering to avoid crash.” Here’s a link to the story:   

The real-world reporting indicated that a driver was confronted with a pick-up truck that unexpectedly pulled in front of him. He found himself having to choose whether to ram into the other vehicle or try to veer away from it, though he apparently also realized that there were nearby pedestrians and that veering would take him into the pedestrians. 

 Which to choose? 

I trust that you can see that this is very much like the Trolley Problem. 

If he opted to do nothing, he was presumably going to ram into the other vehicle. If he veered away, he was presumably going to potentially hit the pedestrians. Either choice is certainly terrible, yet a choice had to be made.   

Some of you might bellow that this is not a life-or-death choice, and indeed fortunately the pedestrians though injured were not actually killed (at least as stated in the reporting), but I think you are fighting a bit hard to try and reject the Trolley Problem. 

It can be readily argued that death was on the line. 

Anyone of an open mind would agree that there was a horrific choice to be made, involving dire circumstances, and with limited choices, involving a time urgency factor, and otherwise conformed with the Trolley Problem overall (minus the trolley).   

As such, for those in the “never happens” camp, this is one example, of many, for which the word never is blatantly wrong. 

It does happen.   

It is an interesting matter to try and gauge how often this kind of decision-making does take place while driving a car. In the United States alone, there are 3.2 trillion miles driven each year, doing so by about 225 million licensed drivers, and the result is approximately 40,000 deaths and 2.3 million injuries due to car crashes annually.   
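As a quick back-of-the-envelope check on those figures, here is a sketch that derives the implied rates; it uses only the numbers quoted above, and the results are approximate:

```python
# Rough annual US driving rates derived from the figures quoted above.
miles_driven = 3.2e12        # miles driven per year
licensed_drivers = 225e6     # licensed drivers
deaths = 40_000              # annual car-crash fatalities
injuries = 2.3e6             # annual car-crash injuries

miles_per_driver = miles_driven / licensed_drivers
deaths_per_100m_miles = deaths / miles_driven * 1e8
injuries_per_100m_miles = injuries / miles_driven * 1e8

print(round(miles_per_driver))            # roughly 14,222 miles per driver per year
print(round(deaths_per_100m_miles, 2))    # roughly 1.25 deaths per 100 million miles
print(round(injuries_per_100m_miles, 1))  # roughly 71.9 injuries per 100 million miles
```

In other words, a fatality is a roughly one-in-eighty-million-miles event, which is exactly why rare-but-dire scenarios are so hard to dismiss: the sheer volume of miles driven guarantees that even very unlikely predicaments will occur somewhere.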

We do not know how many of those crashes involved a Trolley Problem scenario, but we do know that reportedly it does occur (as evidenced by news reporting). 

On that aspect of reporting, it is quite interesting that, apparently, we should be cautious in interpreting stories and coverage of car crashes, due to a suggested bias in such reporting.   

A study discussed in the Columbia Journalism Review points out that oftentimes the driver is quoted by news reporters, rather than quoting the victims that are harmed by the driving act (this is logically explainable, since the victims are either hard to reach as they are at a hospital and possibly incapacitated, or, sadly, they are dead and thus unable to explain what happened). Here’s a link to the study:   

You might recognize this kind of selective attention as survivorship bias, a type of everyday bias in which we tend to focus on that which is more readily available and neglect or underplay that which is less available or apparent. 

For the driving of a car and the reporting of car crashes, we need to be mindful of this facet. 

It could be that there are instances involving the Trolley Problem that the surviving participants might not realize had occurred, or are reluctant to state as so, and so on. In that sense, it could be that the Trolley Problem in car crashes is underreported.   

Being fair, we can also question the veracity of those that make a claim that amounts to a Trolley Problem and be cautious in assuming that just because someone says it was, it might not have been. In that sense, we could be mindful of potential overreporting. 

All in all, though, we can reasonably reject the claim that the Trolley Problem does not exist in the act of driving a car. Stated more affirmatively, we can reasonably accept and acknowledge that the Trolley Problem does exist in the act of driving a car. 

There, I said it, and I’m sure some pundits are boiling mad.   


Self-Driving Cars And Dealing With The Trolley Problem

Anyway, with that under our belt, we hopefully might agree that human drivers can and do face the Trolley Problem.   

But is it only human drivers that experience this?   

One can assert that an AI-based driving system, which is supposed to drive a car and do so to the same or better capability than human drivers, could very well encounter Trolley Problem situations. 

Let’s tackle this carefully.   

First, notice that this does not suggest that only AI driving systems will encounter a Trolley Problem, which is a confusion that sometimes arises.   

Some claim the Trolley Problem will only happen to self-driving cars, but it hopefully is clear-cut that this is something that faces human drivers, and we are extending that known facet to what we assume self-driving cars will encounter too.   

Second, some argue that we will have only and exclusively AI-based true self-driving cars on our roadways, and as such, those vehicles will communicate and coordinate electronically via V2X, doing so in a fashion that will obviate any chance of a Trolley Problem arising. 

Maybe so, but that is a Utopian-like future that we do not know will happen. Meanwhile, there is inarguably going to be a mixture of both human-driven cars and AI-driven cars for a long time to come, likely decades, and we also do not know if people will ever give up their perceived “right” to drive (it’s a privilege). 

This is an important point that many never-Trolley proponents overlook.   

Here’s how they get themselves into a corner. 

The oft-heard refrain is that an AI-based self-driving car has “obviously” been poorly engineered, or that the AI developers did a lousy job, if the vehicle ever perchance finds itself amid a Trolley Problem. 

Usually, these same claims are also associated with the belief that we will have zero fatalities as a result of self-driving cars. 

As I have exhorted many times, zero fatalities is a zero chance. See my analysis at this link here: 

It is a lofty goal and a heartwarming aspiration, but nonetheless a misleading and outright false setting of expectations.   

The rub is that if a pedestrian darts into the street, and there was no forewarning of the action, and meanwhile a self-driving car is coming down the street at perhaps 35 miles per hour, the physics of stopping in-time cannot be overcome simply because the AI is driving the car.   

The usual retort is that the AI would have detected the pedestrian beforehand, but this is a falsehood that implies the sensors will always and perfectly detect such matters, and that detection will always occur sufficiently in advance for the self-driving car to avoid the pedestrian. 

I dare say that a child that runs out from between two parked cars is not going to offer such a chance.   

We are once again into the existence proof, meaning that there are going to be circumstances whereby no matter how good the AI is, and how good the sensors are, there will still be instances of the AI not being able to avoid a car crash.   

Likewise, one can argue in that same vein that the Trolley Problem will be indeed encountered by AI self-driving cars, ones that are on our public streets, and traveling amongst human drivers, and driving near to human pedestrians. 

The news report about the human driver that was cut-off by a pick-up truck could absolutely happen to a self-driving car.   

This seems undebatable.   

If you are now of the mind that the Trolley Problem can occur, including in the case of AI self-driving cars, the next aspect is what the AI will do. 

Suppose the AI jams on the brakes, and slams head-on into that pick-up truck.   

Did the AI consider other options? 

Was the AI even considering veering to the side of the road and up onto the sidewalk (and, into the pedestrians)? 

If you are a self-driving carmaker or automaker, you need to be very, very, very careful about what your answer is going to be.   

I’ll tell you why.   

You might say that the AI was only programmed to do whatever was the obvious thing to do, which was to apply the brakes and attempt to slow down. 

We can likely assume that the AI was proficient enough to calculate that despite the braking, it was going to ram into the pick-up truck. 

So, it “knew” that a car crash was imminent.   

But if you are also saying that the AI did not consider other options, including going up onto the sidewalk, this certainly seems to showcase that the AI was doing an inadequate job of driving the car, and we would have expected a human driver to try and assess alternatives to avoid the car crash.   

In that sense, the AI is presumably deficient and perhaps should not be on our public roadways.   

You are also opening wide your legal liability, which I have repeatedly stated will ultimately be a huge exposure for the automakers and self-driving carmakers. Once self-driving cars are prevalent, and once they get into car crashes, which they will, the lawsuits are going to come flying, and lawyers are already priming to go after those deep-pocketed, billion-dollar-funded makers of self-driving tech and self-driving cars. 

Meanwhile, some of you might say that the AI did consider other alternatives, defending the robustness of your AI system: it considered going up on the sidewalk, but then calculated that the pedestrians might be struck, and so opted to stay the course and rammed instead into the pick-up truck.   

Whoa, you have just admitted that the AI was entangled into a Trolley Problem scenario.   

Welcome to the fold.   


When a human driver confronts a Trolley Problem, they presumably take into account their potential death or injury, which thusly differs from the classic Trolley Problem since the person throwing the switch for the trolley tracks is not directly imperiled (they might suffer emotional consequences, or maybe even legal repercussions, but not bodily harm).   

We can reasonably assume that the AI of a self-driving car is not concerned about its own well-being (I don’t want to detract from the discussion herein and take us onto a tangent, but some argue we might someday ascribe human rights to AI).   

In any case, the self-driving car might have passengers in it, which introduces a third element of consideration for the Trolley Problem.   

This is akin to adding a third track and another fork.   

The complications, though, extend somewhat beyond the traditional Trolley Problem, since the AI must now take into account a joint probability or level of uncertainty: in the case of the pick-up truck, the possible death or injury to the pick-up driver and the self-driving car passengers, versus the possible death or injury to the pedestrians and the self-driving car passengers. 

Maybe that is the Trolley Problem on steroids. 
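To make that concrete, here is a minimal sketch of how such a forced choice might be framed computationally, as expected-harm minimization. Everything in it is hypothetical: the maneuver names, probabilities, and headcounts are invented for illustration and are not drawn from any real AI driving system.

```python
# Hypothetical expected-harm comparison between emergency maneuvers.
# All probabilities and headcounts are made up for illustration only.

def expected_harm(option):
    """Sum over parties of (probability of harm) x (people exposed)."""
    return sum(p * n for p, n in option["risks"])

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)

options = [
    {"name": "brake_straight",            # stay the course and brake hard
     "risks": [(0.9, 1),                  # pick-up truck driver
               (0.6, 2)]},                # two self-driving car passengers
    {"name": "veer_to_sidewalk",          # swerve away from the truck
     "risks": [(0.7, 3),                  # three nearby pedestrians
               (0.3, 2)]},                # two self-driving car passengers
]

best = choose_maneuver(options)
print(best["name"])  # "brake_straight" under these made-up numbers
```

Of course, reducing the dilemma to a single harm sum is itself an ethical stance (it bakes in the “lives are additive” assumption debated earlier), which is precisely why carmakers need an explicit strategy rather than an implicit one.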

Time for a wrap-up. 

For those flat earthers that deny the existence of the Trolley Problem in the case of AI-based true self-driving cars, your head-in-the-sand perspective is not only myopic but you are going to be the easiest of the legal targets for lawsuits. 

Why so?  

Because it was a well-known and oft-discussed matter that the Trolley Problem exists, yet you did nothing about it and hid behind the assertion that it does not exist. 

Good luck with that. 

For those of you that are the rare earthers, you acknowledge that the Trolley Problem exists for self-driving cars, but argue that it is a rarity, an edge problem, a corner case. 

Tell that to the people killed when your AI-based true self-driving car hits someone, doing so in that “rare” instance that will indisputably eventually arise.   

Again, it is not going to hold any legal water. 

Then there are the get-round-to-it earthers, who acknowledge the Trolley Problem but lament that they are so busy right now that it is low on the priority list, pledging that one day, when time permits, they will deal with it. 

There is little difference between the rare earthers and the get-round-to-it earthers, and either way, they are going to have quite some explaining to do to a jury and a judge when the time comes.   

Here’s what the automakers and self-driving tech firms should be doing: 

  • Develop a sensible and explicit strategy about the Trolley Problem 
  • Craft a viable plan that entails the development of AI to cope with the Trolley Problem 
  • Undertake appropriate testing of the AI to ascertain its Trolley Problem handling 
  • Roll out the AI capabilities when readied and monitor their usage 
  • Adjust and enhance the AI as feasible to increasingly improve Trolley Problem handling 

Hopefully, this discussion will awaken the flat earthers, and nudge forward the rare earthers and the get-round-to-it earthers, urging them to put proper and appropriate attention to the Trolley Problem and sufficiently preparing their AI driving systems to cope with these life-or-death matters.   

It is a real problem with real consequences.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends. 

[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]



How does it know?! Some beginner chatbot tech for newbies.



Wouter S. Sligter

Most people will know by now what a chatbot or conversational AI is. But how does one design and build an intelligent chatbot? Let’s investigate some essential concepts in bot design: intents, context, flows and pages.

I like using Google’s Dialogflow platform for my intelligent assistants. Dialogflow has a very accurate NLP engine at a cost structure that is extremely competitive. In Dialogflow there are roughly two ways to build the bot tech. One is through intents and context, the other is by means of flows and pages. Both of these design approaches have their own version of Dialogflow: “ES” and “CX”.

Dialogflow ES is the older version of the Dialogflow platform which works with intents, context and entities. Slot filling and fulfillment also help manage the conversation flow. Here are Google’s docs on these concepts:

Context is what distinguishes ES from CX. It’s a way to understand where the conversation is headed. Here’s a diagram that may help you understand how context works. Each phrase that you type triggers an intent in Dialogflow, and each response by the bot happens after your message has triggered the most likely intent. It’s Dialogflow’s NLP engine that decides which intent best matches your message.

(Diagram: Wouter Sligter, 2020)

What’s funny is that even though you typed ‘yes’ in exactly the same way twice, the bot gave you different answers. There are two intents that have been programmed to respond to ‘yes’, but only one of them is selected. This is how we control the flow of a conversation by using context in Dialogflow ES.

Unfortunately the way we program context into a bot on Dialogflow ES is not supported by any visual tools like the diagram above. Instead we need to type this context in each intent without seeing the connection to other intents. This makes the creation of complex bots quite tedious and that’s why we map out the design of our bots in other tools before we start building in ES.
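The idea can be illustrated with a toy re-implementation of context matching. To be clear, this is not the Dialogflow API; the intent names, contexts, and responses below are all invented purely to show the mechanism:

```python
# Toy illustration of Dialogflow ES-style context matching.
# NOT the real Dialogflow API; all names below are invented.

intents = [
    {
        "name": "confirm_address",
        "phrases": ["yes"],
        "input_context": "awaiting_address",   # context required to match
        "response": "Thanks, we'll ship to that address.",
        "output_context": "awaiting_order",    # context activated for next turn
    },
    {
        "name": "confirm_order",
        "phrases": ["yes"],
        "input_context": "awaiting_order",
        "response": "Great, your order is confirmed!",
        "output_context": None,
    },
]

def match_intent(message, active_context):
    """Return the first intent whose phrases AND input context both match."""
    for intent in intents:
        if message in intent["phrases"] and intent["input_context"] == active_context:
            return intent
    return None

# The same "yes" triggers different intents depending on the active context:
first = match_intent("yes", "awaiting_address")
second = match_intent("yes", first["output_context"])
print(first["response"])   # the address confirmation
print(second["response"])  # the order confirmation
```

This is the behavior described above: two intents are programmed to respond to “yes,” and the active context decides which one fires.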

The newer Dialogflow CX allows for a more advanced way of managing the conversation. By adding flows and pages as additional control tools we can now visualize and control conversations easily within the CX platform.


This entire diagram is a ‘flow’ and the blue blocks are ‘pages’. This visualization shows how we create bots in Dialogflow CX. It’s immediately clear how the different pages are related and how the user will move between parts of the conversation. Visuals like this are completely absent in Dialogflow ES.

It then makes sense to use different flows for different conversation paths. A possible distinction in flows might be “ordering” (as seen here), “FAQs” and “promotions”. Structuring bots through flows and pages is a great way to handle complex bots and the visual UI in CX makes it even better.

At the time of writing (October 2020) Dialogflow CX only supports English NLP and its pricing model is surprisingly steep compared to ES. But bots are becoming critical tech for an increasing number of companies and the cost reductions and quality of conversations are enormous. Building and managing bots is in many cases an ongoing task rather than a single, rounded-off project. For these reasons it makes total sense to invest in a tool that can handle increasing complexity in an easy-to-use UI such as Dialogflow CX.

This article aims to give insight into the tech behind bot creation and Dialogflow is used merely as an example. To understand how I can help you build or manage your conversational assistant on the platform of your choice, please contact me on LinkedIn.




Who is chatbot Eliza?

Between 1964 and 1966 Eliza was born, one of the very first conversational agents. Discover the whole story.



Frédéric Pierron

Between 1964 and 1966 Eliza was born, one of the very first conversational agents. Its creator, Joseph Weizenbaum, was a researcher at the famous Artificial Intelligence Laboratory of MIT (Massachusetts Institute of Technology). His goal was to enable a conversation between a computer and a human user. More precisely, the program simulates a conversation with a Rogerian psychoanalyst, whose method consists of reformulating the patient’s words to let the patient explore his own thoughts.

Joseph Weizenbaum (Professor emeritus of computer science at MIT). Location: Balcony of his apartment in Berlin, Germany. By Ulrich Hansen, Germany (Journalist) / Wikipedia.

The program was rather rudimentary at the time. It consisted of recognizing key words or expressions and displaying in return questions constructed from these key words. When the program did not have an answer available, it displayed an “I understand” that was quite effective, albeit laconic.

Weizenbaum explained that his primary intention was to show the superficiality of communication between a human and a machine. He was very surprised to find that many users got caught up in the game, completely forgetting that the program had no real intelligence and was devoid of any feelings or emotions. He even recounted that his own secretary would discreetly consult Eliza about her personal problems, which eventually led the researcher to unplug the program.

Conversing with a computer while believing it to be a human is one of the criteria of Turing's famous test: artificial intelligence is said to exist when a human cannot discern whether or not the interlocutor is human. In this sense, Eliza passed the test brilliantly, judging by its users.
Eliza thus opened the way (or lent its voice!) to what came to be called chatbots, an abbreviation of chatterbot, itself short for chatter robot, literally "talking robot".




FermiNet: Quantum Physics and Chemistry from First Principles

We've developed a new neural network architecture, the Fermionic Neural Network or FermiNet, which is well-suited to modeling the quantum state of large collections of electrons, the fundamental building blocks of chemical bonds.



Unfortunately, 0.5% error still isn’t enough to be useful to the working chemist. The energy in molecular bonds is just a tiny fraction of the total energy of a system, and correctly predicting whether a molecule is stable can often depend on just 0.001% of the total energy of a system, or about 0.2% of the remaining “correlation” energy. For instance, while the total energy of the electrons in a butadiene molecule is almost 100,000 kilocalories per mole, the difference in energy between different possible shapes of the molecule is just 1 kilocalorie per mole. That means that if you want to correctly predict butadiene’s natural shape, then the same level of precision is needed as measuring the width of a football field down to the millimeter.
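The back-of-the-envelope arithmetic behind that comparison is easy to check. Using the figures quoted above (and taking a football field to be roughly 100 m long), the required relative precision comes out the same in both cases:

```python
# Checking the precision comparison above with the values quoted
# in the text (a football-field length of ~100 m is an assumption).
total_energy = 100_000.0  # kcal/mol, total electronic energy of butadiene
shape_gap = 1.0           # kcal/mol, energy gap between molecular shapes

required_precision = shape_gap / total_energy
print(f"{required_precision:.0e}")  # -> 1e-05, i.e. 0.001%

field_m, resolution_m = 100.0, 0.001  # a ~100 m field, to the millimetre
print(resolution_m / field_m)         # -> 1e-05, the same ratio
```

One part in a hundred thousand, in both cases, which is why "chemical accuracy" is such a demanding target.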

With the advent of digital computing after World War II, scientists developed a whole menagerie of computational methods that went beyond this mean field description of electrons. While these methods come in a bewildering alphabet soup of abbreviations, they all generally fall somewhere on an axis that trades off accuracy with efficiency. At one extreme, there are methods that are essentially exact, but scale worse than exponentially with the number of electrons, making them impractical for all but the smallest molecules. At the other extreme are methods that scale linearly, but are not very accurate. These computational methods have had an enormous impact on the practice of chemistry – the 1998 Nobel Prize in chemistry was awarded to the originators of many of these algorithms.

Fermionic Neural Networks

Despite the breadth of existing computational quantum mechanical tools, we felt a new method was needed to address the problem of efficient representation. There’s a reason that the largest quantum chemical calculations only run into the tens of thousands of electrons for even the most approximate methods, while classical chemical calculation techniques like molecular dynamics can handle millions of atoms. The state of a classical system can be described easily – we just have to track the position and momentum of each particle. Representing the state of a quantum system is far more challenging. A probability has to be assigned to every possible configuration of electron positions. This is encoded in the wavefunction, which assigns a positive or negative number to every configuration of electrons, and the wavefunction squared gives the probability of finding the system in that configuration. The space of all possible configurations is enormous – if you tried to represent it as a grid with 100 points along each dimension, then the number of possible electron configurations for the silicon atom would be larger than the number of atoms in the universe!
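The grid claim above can be verified with a one-line computation: silicon has 14 electrons, each with 3 spatial coordinates, so a grid with 100 points along each of the 42 dimensions has 100^42 = 10^84 cells, comfortably more than the commonly cited ~10^80 atoms in the observable universe.

```python
# Rough check of the configuration-count claim above. The 10**80
# figure for atoms in the universe is the usual order-of-magnitude
# estimate, not a precise number.
electrons = 14          # silicon atom
grid_points_per_dim = 100
dimensions = 3 * electrons  # 3 spatial coordinates per electron

configurations = grid_points_per_dim ** dimensions  # 1 followed by 84 zeros
atoms_in_universe = 10 ** 80

print(configurations > atoms_in_universe)  # -> True
```

This exponential blow-up with particle count is precisely why a direct grid representation of the wavefunction is hopeless, and why a compact learned representation is attractive.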

This is exactly where we thought deep neural networks could help. In the last several years, there have been huge advances in representing complex, high-dimensional probability distributions with neural networks. We now know how to train these networks efficiently and scalably. We surmised that, given these networks have already proven their mettle at fitting high-dimensional functions in artificial intelligence problems, maybe they could be used to represent quantum wavefunctions as well. We were not the first people to think of this – researchers such as Giuseppe Carleo and Matthias Troyer and others have shown how modern deep learning could be used for solving idealised quantum problems. We wanted to use deep neural networks to tackle more realistic problems in chemistry and condensed matter physics, and that meant including electrons in our calculations.

There is just one wrinkle when dealing with electrons. Electrons must obey the Pauli exclusion principle, which means that they can’t be in the same space at the same time. This is because electrons are a type of particle known as fermions, which include the building blocks of most matter – protons, neutrons, quarks, neutrinos, etc. Their wavefunction must be antisymmetric – if you swap the position of two electrons, the wavefunction gets multiplied by -1. That means that if two electrons are on top of each other, the wavefunction (and the probability of that configuration) will be zero.

This meant we had to develop a new type of neural network that was antisymmetric with respect to its inputs, which we have dubbed the Fermionic Neural Network, or FermiNet. In most quantum chemistry methods, antisymmetry is introduced using a function called the determinant. The determinant of a matrix has the property that if you swap two rows, the output gets multiplied by -1, just like a wavefunction for fermions. So you can take a bunch of single-electron functions, evaluate them for every electron in your system, and pack all of the results into one matrix. The determinant of that matrix is then a properly antisymmetric wavefunction. The major limitation of this approach is that the resulting function – known as a Slater determinant – is not very general. Wavefunctions of real systems are usually far more complicated. The typical way to improve on this is to take a large linear combination of Slater determinants – sometimes millions or more – and add some simple corrections based on pairs of electrons. Even then, this may not be enough to accurately compute energies.
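The determinant construction described above can be demonstrated in a few lines for the smallest interesting case: two electrons in one dimension with two toy single-electron orbitals (the orbitals here are invented for illustration, not anything FermiNet actually uses). Swapping the two electrons swaps two rows of the matrix, so the determinant flips sign, and two electrons at the same position give a zero wavefunction, exactly as the Pauli exclusion principle demands.

```python
import math

# Toy 2-electron Slater determinant in 1D. The two "orbitals" are
# invented single-electron functions; any two distinct functions work.

def orbital_1(x):
    return math.exp(-x * x)            # even toy orbital

def orbital_2(x):
    return x * math.exp(-x * x)        # odd toy orbital

def slater_2x2(x1, x2):
    """Determinant of the 2x2 matrix [orbital_j(electron_i)]."""
    return orbital_1(x1) * orbital_2(x2) - orbital_2(x1) * orbital_1(x2)

psi = slater_2x2(0.3, 1.1)
psi_swapped = slater_2x2(1.1, 0.3)
print(psi == -psi_swapped)      # -> True: swapping electrons flips the sign
print(slater_2x2(0.7, 0.7))     # -> 0.0: same position, zero probability
```

A single determinant like this is the Slater determinant the text calls "not very general"; methods that stack millions of them, or FermiNet's richer per-electron functions, exist precisely to recover the correlations this simple form misses.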

