
Executive Interview: Steve Bennett, Director Global Government Practice, SAS 

Steve Bennett of SAS seeks to use AI and analytics to help drive government decision-making, resulting in better outcomes for citizens.   

Using AI and analytics to optimize delivery of government service to citizens  

Steve Bennett is Director of the Global Government Practice at SAS and the former director of the US National Biosurveillance Integration Center (NBIC) in the Department of Homeland Security, where he worked for 12 years. The mission of the NBIC was to provide early warning and situational awareness of health threats to the nation. He led a team of over 30 scientists, epidemiologists, public health specialists, and analytics experts. With a PhD in computational biochemistry from Stanford University and an undergraduate degree in chemistry and biology from Caltech, Bennett has a strong passion for using analytics in government to help make better public decisions. He recently spent a few minutes with AI Trends Editor John P. Desmond to provide an update on his work.

AI Trends: How does AI help facilitate the role of analytics in government?

Steve Bennett, Director of Global Government Practice, SAS

Steve Bennett: Well, artificial intelligence is something we’ve been hearing a lot about everywhere, even in government, which can often be a bit slower to adopt or implement new technologies. Yet even in government, AI is a pretty big deal. We talk about analytics and government use of data to drive better government decision-making, better outcomes for citizens. That’s been true for a long time.   

A lot of government data exists in forms that are not easily analyzed using traditional statistical methods or traditional analytics. So AI presents the opportunity to get the sorts of insights from government data that may not be possible using other methods. Many folks in the community are excited about the promise of AI being able to help government unlock the value of government data for its missions.  

Are there any examples that exemplify the work?

AI is well-suited to certain sorts of problems, like finding anomalies or things that stick out in data, needles in a haystack, if you will. AI can be very good at that. AI can be good at finding patterns in very complex datasets. It can be hard for a human to sift through that data on their own, to spot the things that might require action. AI can help detect those automatically.  

For example, we’ve been partnering with the US Food and Drug Administration to support efforts to keep the food supply safe in the United States. One of the challenges for the FDA, as the supply chain has become increasingly global, is detecting contamination of food. The FDA often has to be reactive. They have to wait for something to happen or wait for something to get pretty far down the line before they can identify it and take action. We worked with the FDA to help them implement AI and apply it to that process, so they can more effectively predict where they might see an increased likelihood of contamination in the supply chain and act proactively instead of reactively. So that’s an example of how AI can be used to help support safer food for Americans.

In another example, AI is helping with predictive maintenance for government fleets and vehicles. We work quite closely with Lockheed Martin to support predictive maintenance with AI for some of the most advanced airframes in the world, like the C-130 [transport] and the F-35 [combat aircraft]. AI helps to identify problems in very complex machines before those problems cause catastrophic failure. The ability for a machine to tell you before it breaks is something AI can do.   

Another example was around unemployment. We have worked with several cities globally to help them figure out how to best put unemployed people back to work. That is something top of mind now as we see increased unemployment because of Covid. For one city in Europe, we have a goal of getting people back to work in 13 weeks or less. They compiled racial and demographic data on the unemployed such as education, previous work experience, whether they have children, where they live—lots of data.

They matched that to data about government programs, such as job training requested by specific employers, reskilling, and other programs. We built an AI system using machine learning to optimally match people, based on what we knew, to the best mix of government programs that would get them back to work the fastest. We are using the technology to optimize government benefits. The results were good at the outset: they did a pilot prior to the Covid outbreak and saw promising results.

Another example is around juvenile justice. We worked with a particular US state to help them figure out the best way to combat recidivism among juvenile offenders. They had data on 19,000 cases over many years, all about young people who came into juvenile corrections, served their time there, got out and then came back. They wanted to know how they could lower the recidivism rate. We found we could use machine learning to look at aspects of each of these kids, and figure out which of them might benefit from certain special programs after they leave juvenile corrections, to get skills that reduce the likelihood we would see them back in the system again.  

To be clear, this was not profiling, putting a stigma or mark on these kids. It was trying to figure out how to match limited government programs to the kids who would best benefit from those.   

What are key AI technologies that are being employed in your work today? 

Much of what we talk about having a near-term impact falls into the family of what we call machine learning. Machine learning has this great property of being able to take a lot of training data and being able to learn which parts of that data are important for making predictions or identifying patterns. Based on what we learn from that training data, we can apply that to new data coming in.  
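As an illustration of that train-then-predict pattern, here is a minimal sketch on synthetic data — not SAS's tooling, and all names and numbers are invented for the example:

```python
# Minimal sketch of the machine-learning pattern described above:
# learn from labeled training data, then apply the model to new records.
# The data here is synthetic and purely illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for historical records with known outcomes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)          # learn which parts of the data matter

predictions = model.predict(X_new)   # apply what was learned to new data coming in
print(f"Accuracy on held-out records: {model.score(X_new, y_new):.2f}")
```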

A specialized form of machine learning is deep learning, which is good at automatically detecting things in video streams, such as a car or a person. We have worked in healthcare to help radiologists do a better job detecting cancer from health scans. Police and defense applications in many cases rely on real-time video, and the ability to make sense of that video very quickly is greatly enhanced by machine learning and deep learning.

Another area to mention is real-time interaction systems: AI chatbots. We’re seeing governments increasingly turn to chatbots to help them connect with citizens. If a benefits agency or a tax agency is able to build a system that can automatically interact with citizens, it makes government more responsive to citizens. It’s better than waiting on the phone on hold.

How far along would you say the government sector is in its use of AI and how does it compare to two years ago? 

The government is certainly further along than it was two years ago. In the data we have looked at, 70% of government managers have expressed interest in using AI to enhance their mission. That signal is stronger than what we saw two years ago. But I would say that we don’t see a lot of enterprise-wide applications of AI in the government. Often AI is used for particular projects or specific applications within an agency to help fulfill its mission. So as AI continues to mature, we would expect it to have more of an enterprise-wide use for large scale agency missions.  

What would you say are the challenges using AI to deliver on analytics in government?  

We see a range of challenges in several categories. One is around data quality and execution. One of the first things an agency needs to figure out is whether they have a problem that is well-suited for AI. Would it show patterns or signals in the data? If so, would the project deliver value for the government?  

A big challenge is data quality. Machine learning needs a lot of examples, a lot of data, to work well. It’s a very data-hungry sort of technology. If you don’t have that data, or you don’t have access to it, you’re not going to be able to use AI, even if you’ve got a problem that would otherwise be very well-suited for it.

Another problem that we see quite often in governments is that the data exists, but it’s not very well organized. It might exist in spreadsheets on a bunch of individual computers all over the agency. It’s not in a place where it can all be brought together and analyzed with AI. So the ability to bring the data to bear is really important.

Another important one: even if you have all of your data in the right place, and you have a problem very well-suited for AI, it could be that culturally, the agency just isn’t ready to make use of the recommendations coming from an AI system in its day-to-day mission. This might be called a cultural challenge. The people in the agency might not have a lot of trust in AI systems and what they can do. Or it might be an operational mission where there always needs to be a human in the loop. Either way, sometimes culturally there might be limitations in what an agency is ready to use. And we would advise not to bother with AI if you haven’t thought about whether you can actually use it for something when you’re done. That’s how you get a lot of science projects in government.

We always advise people to think about what they will get at the end of the AI project, and make sure they are ready to drive the results into the decision-making process. Otherwise, we don’t want to waste time and government resources. You might do something different that you are comfortable using in your decision processes. That’s really important to us.  As an example of what not to do, when I worked in government, I made the mistake of spending two years building an outstanding analytics project, using high-performance modeling and simulation, working in Homeland Security. But we didn’t do a good job working on the cultural side, getting those key stakeholders and senior leaders ready to use it. And so we delivered a great technical solution, but we had a bunch of senior leaders that weren’t ready to use it. We learned the hard way that the cultural piece really does matter. 

We also have challenges around data privacy. Government, more than many industries, touches very sensitive data. And as I mentioned, these methods are very data-hungry, so we often need a lot of data. Government has to make doubly sure that it’s following its own privacy protection laws and regulations, and making sure that we are very careful with citizen data and following all the privacy laws in place in the US. And most countries have privacy regulations in place to protect personal data.  

The second component is a challenge around what government is trying to get the systems to do. AI in retail is used to make recommendations, based on what you have been looking at and what you have bought. An AI algorithm is running in the background. The shopper might not like the recommendation, but the negative consequences of that are pretty mild.   

But in government, you might be using AI or analytics to make decisions with bigger impacts—determining whether somebody gets a tax refund, or whether a benefits claim is approved or denied. The outcomes of these decisions have potentially serious impacts. The stakes are much higher when the algorithms get things wrong. Our advice to government is that for key decisions, there always should be that human-in-the-loop. We would never recommend that a system automatically drives some of these key decisions, particularly if they have potential adverse actions for citizens.   

Finally, the last challenge that comes to mind is the challenge of where the research is going. This idea of “could you” versus “should you.” Artificial intelligence unlocks a whole set of areas that you can use such as facial recognition. Maybe in a Western society with liberal, democratic values, we might decide we shouldn’t use it, even though we could. Places like China in many cities are tracking people in real time using advanced facial recognition. In the US, that’s not in keeping with our values, so we choose not to do that.   

That means any government agency thinking about doing an AI project needs to think about values up front. You want to make sure that those values are explicitly encoded in how the AI project is set up. That way we don’t get results on the other end that are not in keeping with our values or where we want to go.  

You mentioned data bias. Are you doing anything in particular to try to protect against bias in the data? 

Good question. Bias is a real area of concern in any kind of AI or machine learning work. An AI or machine learning system is going to perform in concert with the way it was trained on the training data. So developers need to be careful in the selection of training data, and the team needs systems in place to review the training data so that it’s not biased. We’ve all heard and read the stories in the news about a facial recognition company in China—they make a great facial recognition system, but they train it only on Asian faces. And so guess what? It’s good at detecting Asian faces, but it’s terrible at detecting faces that are darker or lighter in color, or that have different facial features.

We have heard many stories like that. You want to make sure you don’t have racial bias, gender bias, or any other kind of bias we want to avoid in the data training set. Encode those explicitly up front when you’re planning your project; that can go a long way towards helping to limit bias. But even if you’ve done that, you want to make sure you’re checking for bias in a system’s performance. We have many great technologies built into our machine learning tools to help you automatically look for those biases and detect if they are present. You also want to be checking for bias after the system has been deployed, to make sure if something pops up, you see it and can take care of it.  
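As a rough illustration of that kind of post-deployment check — a generic sketch, not SAS's bias-detection tooling, with invented data and group labels:

```python
# Illustrative post-deployment bias check: compare a deployed model's
# accuracy across demographic groups. Generic sketch; data is invented.
import pandas as pd

# Hypothetical log of model decisions alongside ground-truth outcomes.
log = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 1],
    "actual":    [1, 0, 0, 1, 0, 1, 1, 1],
})
log["correct"] = log["predicted"] == log["actual"]

# If accuracy diverges sharply between groups, that is a signal to
# re-examine the training data and the system's performance.
print(log.groupby("group")["correct"].mean())
```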

From your background in bioscience, how well would you say the federal government has done in responding to the COVID-19 virus? 

There really are two industries that bore the brunt, at least initially, of the COVID-19 spread: government and health care. In most places in the world, health care is part of government. So it has been a big public sector effort to try to deal with COVID. It’s been hit and miss, with many challenges. No other entity can marshal financial resources like the government, so getting economic support out to those that need it is really important. Analytics plays a role in that.

So one of the things that we did in supporting government, using what we’re good at—data, analytics, and AI—was to look at how we could use the data to do a better job responding to COVID. We did a lot of work on the simple side of taking what government data existed and putting it into a simple dashboard that displayed where resources were. That way they could quickly identify if they had to move a supply, such as masks, to a different location. We worked on a more complex AI system to optimize the use of intensive care beds for a government in Europe that wanted to plan the use of its medical resources.

Contact tracing, the ability to very quickly identify people that are exposed and then identify who they’ve been around so that we can isolate those people, is something that can be greatly supported and enhanced by analytics. And we’ve done a lot of work around how to take contact tracing that’s been used for centuries and make it fit for supporting COVID-19 work. The government can do a lot with its data, with analytics and with AI in the fight against COVID-19. 

Do you have any advice for young people, either in school now or early in their careers, for what they should study if they are interested in pursuing work in AI, and especially if they’re interested in working in the government? 

If you are interested in getting into AI, I would suggest two things to focus on. One would be the technical side. If you have a solid understanding of how to implement and use AI, and you’ve built experience doing it as part of your coursework or part of your research work in school, you are highly valuable to government. Many people know a little about AI; they may have taken some business courses on it. But if you have the technical chops to be able to implement it, and you have a passion for doing that inside of government, you will be highly valuable. There would not be a lot of people like you. 

Just as important as the AI side and the data science technical piece, I would highly advise students to work on storytelling. AI can be highly technical when you get into the details. If you’re going to talk to a government or agency leader or an elected official, you will lose them if you can’t quickly tie the value of artificial intelligence to their mission. We call them ‘unicorns’ at SAS: people who have high technical ability, a detailed understanding of how these models can help government, and the ability to tell good stories and draw that line to the “so what?” How can a senior agency official in government use it? How is it helpful to them?

Working on good presentation skills and practicing them is just as important as the technical side. You will find yourself very influential and able to make a difference if you’ve got a good balance of those skills. That’s my view.

I would also say, in terms of where you specialize technically, proficiency in SAS has recently been ranked as one of the most highly valued job skills. Those specific technical skills can be very marketable to you, inside and outside of government.

Learn more about Steve Bennett on the SAS Blog. 

Source: https://www.aitrends.com/executive-interview/executive-interview-steve-bennett-director-global-government-practice-sas/


How does it know?! Some beginner chatbot tech for newbies.


Wouter S. Sligter

Most people will know by now what a chatbot or conversational AI is. But how does one design and build an intelligent chatbot? Let’s investigate some essential concepts in bot design: intents, context, flows and pages.

I like using Google’s Dialogflow platform for my intelligent assistants. Dialogflow has a very accurate NLP engine at a cost structure that is extremely competitive. In Dialogflow there are roughly two ways to build the bot tech. One is through intents and context, the other is by means of flows and pages. Both of these design approaches have their own version of Dialogflow: “ES” and “CX”.

Dialogflow ES is the older version of the Dialogflow platform which works with intents, context and entities. Slot filling and fulfillment also help manage the conversation flow. Here are Google’s docs on these concepts: https://cloud.google.com/dialogflow/es/docs/concepts
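As a minimal sketch of how an ES agent is queried in practice, here is the official google-cloud-dialogflow Python client sending one message and reporting which intent matched. The project ID and session ID below are placeholders, and the agent itself must already exist:

```python
# Minimal sketch: send one user message to a Dialogflow ES agent and see
# which intent the NLP engine matched. Requires the google-cloud-dialogflow
# package and valid credentials; the project ID is a placeholder.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str) -> None:
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code="en")
    query_input = dialogflow.QueryInput(text=text_input)

    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    print(f"Matched intent: {result.intent.display_name} "
          f"(confidence {result.intent_detection_confidence:.2f})")
    print(f"Bot response:   {result.fulfillment_text}")

detect_intent("my-gcp-project", "test-session-1", "I want to order a pizza")
```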

Context is what distinguishes ES from CX. It’s a way to understand where the conversation is headed. Here’s a diagram that may help you understand how context works. Each phrase that you type triggers an intent in Dialogflow. Each response by the bot happens after your message has triggered the most likely intent. It’s Dialogflow’s NLP engine that decides which intent best matches your message.

[Diagram: controlling conversation flow with context — Wouter Sligter, 2020]

What’s funny is that even though you typed ‘yes’ in exactly the same way twice, the bot gave you different answers. There are two intents that have been programmed to respond to ‘yes’, but only one of them is selected. This is how we control the flow of a conversation by using context in Dialogflow ES.
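Conceptually, the mechanism looks something like the sketch below — plain Python, not Dialogflow's actual data format: each intent declares an input context, and only intents whose input context is currently active are candidates for matching.

```python
# Conceptual sketch (not Dialogflow's real export format) of how context
# disambiguates two intents that are both trained on the phrase "yes".
intents = [
    {"name": "confirm_order", "training": ["yes"],
     "input_context": "awaiting_order_confirmation",
     "response": "Great, your order is on its way!"},
    {"name": "confirm_delete", "training": ["yes"],
     "input_context": "awaiting_delete_confirmation",
     "response": "Okay, your account has been deleted."},
]

def match(message: str, active_context: str) -> str:
    # Only intents whose input context is currently active can match.
    for intent in intents:
        if message in intent["training"] and intent["input_context"] == active_context:
            return intent["response"]
    return "Sorry, I didn't get that."

print(match("yes", "awaiting_order_confirmation"))   # order response
print(match("yes", "awaiting_delete_confirmation"))  # delete response
```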

Unfortunately the way we program context into a bot on Dialogflow ES is not supported by any visual tools like the diagram above. Instead we need to type this context in each intent without seeing the connection to other intents. This makes the creation of complex bots quite tedious and that’s why we map out the design of our bots in other tools before we start building in ES.

The newer Dialogflow CX allows for a more advanced way of managing the conversation. By adding flows and pages as additional control tools we can now visualize and control conversations easily within the CX platform.

[Diagram: flows and pages — source: https://cloud.google.com/dialogflow/cx/docs/basics]

This entire diagram is a ‘flow’ and the blue blocks are ‘pages’. This visualization shows how we create bots in Dialogflow CX. It’s immediately clear how the different pages are related and how the user will move between parts of the conversation. Visuals like this are completely absent in Dialogflow ES.

It then makes sense to use different flows for different conversation paths. A possible distinction in flows might be “ordering” (as seen here), “FAQs” and “promotions”. Structuring bots through flows and pages is a great way to handle complex bots and the visual UI in CX makes it even better.

At the time of writing (October 2020), Dialogflow CX only supports English NLP, and its pricing model is surprisingly steep compared to ES. But bots are becoming critical tech for an increasing number of companies, and the potential cost reductions and gains in conversation quality are enormous. Building and managing bots is in many cases an ongoing task rather than a single, rounded-off project. For these reasons it makes total sense to invest in a tool that can handle increasing complexity in an easy-to-use UI, such as Dialogflow CX.

This article aims to give insight into the tech behind bot creation and Dialogflow is used merely as an example. To understand how I can help you build or manage your conversational assistant on the platform of your choice, please contact me on LinkedIn.

Source: https://chatbotslife.com/how-does-it-know-some-beginner-chatbot-tech-for-newbies-fa75ff59651f?source=rss—-a49517e4c30b—4


Who is chatbot Eliza?




Frédéric Pierron

Between 1964 and 1966, Eliza was born: one of the very first conversational agents. Its creator, Joseph Weizenbaum, was a researcher at the famous Artificial Intelligence Laboratory at MIT (Massachusetts Institute of Technology). His goal was to enable a conversation between a computer and a human user. More precisely, the program simulates a conversation with a Rogerian psychotherapist, whose method consists of reflecting the patient’s words back to them so that they can explore their own thoughts.

[Photo: Joseph Weizenbaum (professor emeritus of computer science at MIT), on the balcony of his apartment in Berlin, Germany. By Ulrich Hansen, Germany (journalist) / Wikipedia.]

The program was rather rudimentary for its time. It worked by recognizing keywords or expressions and displaying, in return, questions constructed from those keywords. When the program did not have an answer available, it displayed an “I understand” that was quite effective, albeit laconic.
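In modern terms, the mechanism looks roughly like this minimal Python sketch — an illustration of the keyword-and-template idea, not Weizenbaum's original program:

```python
# Minimal sketch of Eliza's mechanism: recognize a keyword, reflect the
# user's own words back as a question, and fall back to a stock reply.
# An illustration only, not Weizenbaum's original code.
import re

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "I understand."  # the laconic fallback mentioned above

print(eliza_reply("I am worried about my exams"))  # matches the first rule
print(eliza_reply("The weather is nice"))          # falls back
```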

Weizenbaum explains that his primary intention was to show the superficiality of communication between a human and a machine. He was very surprised when he realized that many users were getting caught up in the game, completely forgetting that the program had no real intelligence and was devoid of any feelings and emotions. He even said that his secretary would discreetly consult Eliza to work through her personal problems, forcing the researcher to unplug the program.

Conversing with a computer thinking it is a human being is one of the criteria of Turing’s famous test. Artificial intelligence is said to exist when a human cannot discern whether or not the interlocutor is human. Eliza, in this sense, passes the test brilliantly according to its users.
Eliza thus opened the way (or the voice!) to what has been called chatbots, an abbreviation of chatterbot, itself an abbreviation of chatter robot, literally “talking robot”.

Source: https://chatbotslife.com/who-is-chatbot-eliza-bfeef79df804?source=rss—-a49517e4c30b—4


FermiNet: Quantum Physics and Chemistry from First Principles

We’ve developed a new neural network architecture, the Fermionic Neural Network or FermiNet, which is well-suited to modeling the quantum state of large collections of electrons, the fundamental building blocks of chemical bonds.


Unfortunately, 0.5% error still isn’t enough to be useful to the working chemist. The energy in molecular bonds is just a tiny fraction of the total energy of a system, and correctly predicting whether a molecule is stable can often depend on just 0.001% of the total energy of a system, or about 0.2% of the remaining “correlation” energy. For instance, while the total energy of the electrons in a butadiene molecule is almost 100,000 kilocalories per mole, the difference in energy between different possible shapes of the molecule is just 1 kilocalorie per mole. That means that if you want to correctly predict butadiene’s natural shape, then the same level of precision is needed as measuring the width of a football field down to the millimeter.

With the advent of digital computing after World War II, scientists developed a whole menagerie of computational methods that went beyond this mean field description of electrons. While these methods come in a bewildering alphabet soup of abbreviations, they all generally fall somewhere on an axis that trades off accuracy with efficiency. At one extreme, there are methods that are essentially exact, but scale worse than exponentially with the number of electrons, making them impractical for all but the smallest molecules. At the other extreme are methods that scale linearly, but are not very accurate. These computational methods have had an enormous impact on the practice of chemistry – the 1998 Nobel Prize in chemistry was awarded to the originators of many of these algorithms.

Fermionic Neural Networks

Despite the breadth of existing computational quantum mechanical tools, we felt a new method was needed to address the problem of efficient representation. There’s a reason that the largest quantum chemical calculations only run into the tens of thousands of electrons for even the most approximate methods, while classical chemical calculation techniques like molecular dynamics can handle millions of atoms. The state of a classical system can be described easily – we just have to track the position and momentum of each particle. Representing the state of a quantum system is far more challenging. A probability has to be assigned to every possible configuration of electron positions. This is encoded in the wavefunction, which assigns a positive or negative number to every configuration of electrons, and the wavefunction squared gives the probability of finding the system in that configuration. The space of all possible configurations is enormous – if you tried to represent it as a grid with 100 points along each dimension, then the number of possible electron configurations for the silicon atom would be larger than the number of atoms in the universe!
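To make that scale concrete, here is the arithmetic behind the claim as a quick Python check: silicon has 14 electrons, each with three spatial coordinates, and the figure of roughly 10^80 atoms in the observable universe is a common order-of-magnitude estimate.

```python
# Worked check of the claim above: a grid with 100 points per coordinate,
# 14 electrons (silicon), and 3 spatial coordinates per electron.
grid_points_per_axis = 100
electrons = 14                      # silicon's electron count
dimensions = 3 * electrons          # x, y, z for every electron

configurations = grid_points_per_axis ** dimensions   # 100**42 = 1e84
atoms_in_universe = 10 ** 80                          # common rough estimate

print(f"Grid configurations: 1e{len(str(configurations)) - 1}")  # 1e84
print(configurations > atoms_in_universe)                        # True
```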

This is exactly where we thought deep neural networks could help. In the last several years, there have been huge advances in representing complex, high-dimensional probability distributions with neural networks. We now know how to train these networks efficiently and scalably. We surmised that, given these networks have already proven their mettle at fitting high-dimensional functions in artificial intelligence problems, maybe they could be used to represent quantum wavefunctions as well. We were not the first people to think of this – researchers such as Giuseppe Carleo and Matthias Troyer and others have shown how modern deep learning could be used for solving idealised quantum problems. We wanted to use deep neural networks to tackle more realistic problems in chemistry and condensed matter physics, and that meant including electrons in our calculations.

There is just one wrinkle when dealing with electrons. Electrons must obey the Pauli exclusion principle, which means that they can’t be in the same space at the same time. This is because electrons are a type of particle known as fermions, which include the building blocks of most matter – protons, neutrons, quarks, neutrinos, etc. Their wavefunction must be antisymmetric – if you swap the position of two electrons, the wavefunction gets multiplied by -1. That means that if two electrons are on top of each other, the wavefunction (and the probability of that configuration) will be zero.

This meant we had to develop a new type of neural network that was antisymmetric with respect to its inputs, which we have dubbed the Fermionic Neural Network, or FermiNet. In most quantum chemistry methods, antisymmetry is introduced using a function called the determinant. The determinant of a matrix has the property that if you swap two rows, the output gets multiplied by -1, just like a wavefunction for fermions. So you can take a bunch of single-electron functions, evaluate them for every electron in your system, and pack all of the results into one matrix. The determinant of that matrix is then a properly antisymmetric wavefunction. The major limitation of this approach is that the resulting function – known as a Slater determinant – is not very general. Wavefunctions of real systems are usually far more complicated. The typical way to improve on this is to take a large linear combination of Slater determinants – sometimes millions or more – and add some simple corrections based on pairs of electrons. Even then, this may not be enough to accurately compute energies.
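Here is a minimal numerical sketch of that construction, using hypothetical toy Gaussian orbitals in one dimension in place of real single-electron functions, to show that the determinant does flip sign when two electrons are swapped:

```python
# Minimal sketch of a Slater determinant: evaluate single-electron
# functions at every electron position, pack the results into a matrix,
# and take its determinant. Swapping two electrons swaps two columns,
# so the determinant (the wavefunction) flips sign. Toy 1D Gaussian
# orbitals stand in for real single-electron wavefunctions.
import numpy as np

def orbital(k: int, r: float) -> float:
    # Hypothetical Gaussian orbitals centered at different points.
    return np.exp(-(r - k) ** 2)

def slater_wavefunction(positions: np.ndarray) -> float:
    n = len(positions)
    # Matrix element [i, j] = orbital i evaluated at electron j's position.
    matrix = np.array([[orbital(i, positions[j]) for j in range(n)]
                       for i in range(n)])
    return np.linalg.det(matrix)

electrons = np.array([0.1, 0.9, 2.3])
psi = slater_wavefunction(electrons)
psi_swapped = slater_wavefunction(electrons[[1, 0, 2]])  # swap electrons 0 and 1

print(np.isclose(psi, -psi_swapped))  # True: antisymmetric under exchange
```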

Source: https://deepmind.com/blog/article/FermiNet
