Legal chatbots assist legal professionals in their daily firm chores, activities, and support…

KLoBot — the Best AI-Chatbot builder platform

The COVID-19 crisis has had a profound impact across the legal sector, especially on traditional law firms that rely on an office-based model. The pandemic has shown both conventional and modern legal firms the importance of legal technology and the need for such solutions to maintain business continuity.

The year 2020 has seen a considerable shift in the legal industry in terms of technology adoption and ongoing transformation, a shift now amplified by the global pandemic. This focus on technology is driving change in the industry and helping legal firms survive it.

Benefits of Tech-enabled solutions

The benefits of technology for small, midsized, and larger law firms vary substantially, but several benefits are common to firms of all sizes.

Quick adaptation to change is key to success!

The COVID-19 crisis has put a strain on legal professionals in terms of performance, productivity, and value-driven client delivery while working remotely. Additionally, pressure to improve ROI and retain talent is driving legal organizations to boost their automation capabilities.

Technologies like AI and ML play an integral role in automating legal work, handling specific tasks, and reducing legal spending. One study estimated that around 23% of an attorney’s job could be automated with technology, including chatbots.

Automation gives law firms and in-house legal departments the opportunity to handle massive data sets efficiently, reduce costs, and attract new clients. It also prepares law firms to perform at a high level and thrive in a changing legal landscape. Hence, legal organizations are leveraging automation to adapt to a demanding legal market.


To run and manage remote staff during the COVID-19 crisis, a law firm requires a broad range of tools. From task management to client management, and from scheduling to invoicing and billing, a firm needs a few key systems in place to operate remotely. A host of technologies, including chatbots, virtual assistants, legal bots, the cloud, and document automation, are enabling legal firms to automate routine tasks and enhance their attorneys’ productivity.

Essential tools to operate a remote legal staff

Several companies have already invested in legal technology, and as a next step they have started hiring candidates with the needed technical skills, because dedicated technology talent is a crucial resource for remote-working law firms.

By 2023, around 33% of corporate legal departments are expected to have a dedicated legal technology expert to support the automation of in-house workflows.

Incorporating chatbots into a business eliminates manual processes and helps lawyers and attorneys work more promptly and efficiently while working from home. Monotonous administrative tasks, including document creation, setting up meetings, gathering client information, and other basic activities, are now being performed by chatbots. These chatbots can interact directly with clients to update them on case status or other information, which saves the legal staff’s time and allows them to focus on the issues that require their expertise.

Integrating chatbots with internal systems allows lawyers and paralegals to quickly retrieve case-related information, which in turn enables them to handle a larger number of clients and maintain their billable hours. The legal industry is among the industries investing heavily in chatbots and benefiting from them.

A chatbot is a solution that can bring clear value to law firms by assisting legal professionals with their daily firm chores and support queries while they work remotely.

In addition, a legal chatbot can enhance accessibility and provide strategic value to law firms in this time of crisis. Chatbots will continue to disrupt the legal industry by helping law firms and paralegals remain available to clients 24/7.

KLoBot is a chatbot builder platform that simplifies human-machine interactions via intuitive conversational user interfaces.

Chatbots can be designed with KLoBot within minutes and can facilitate both internal and external law firm communications. These chatbots empower attorneys to automate tedious, time-consuming tasks and streamline business processes.

Chatbots built on the KLoBot platform help law firms better showcase their value proposition and meet both internal and external pressures during the current COVID-19 crisis.

KLoBot enables law firms to meet current client demands and competitive challenges. KLoBot also presents an opportunity for legal firms to save time on tedious activities, diversify revenue, and stay ahead of the competition. To know more about the usefulness of chatbots in the legal ecosystem, get in touch with us.

Source: https://chatbotslife.com/legal-chatbots-assist-legal-professionals-in-their-daily-firm-chores-activities-and-support-d48215340fd4?source=rss—-a49517e4c30b—4


How to Improve Your Supply Chain With Deep Reinforcement Learning


What has set Amazon apart from the competition in online retail? Their supply chain. In fact, this has long been one of the greatest strengths of one of their chief competitors, Walmart.

Supply chains are highly complex systems consisting of hundreds if not thousands of manufacturers and logistics carriers around the world who combine resources to create the products we use and consume every day. To track all of the inputs to a single, simple product would be staggering. Yet supply chain organizations inside vertically integrated corporations are tasked with managing inputs from raw materials to manufacturing, warehousing, and distribution to customers. The companies that do this best cut down on waste from excess storage, unneeded transportation costs, and time lost getting products and materials to later stages in the system. Optimizing these systems is a key component in businesses as dissimilar as Apple and Saudi Aramco.

A lot of time and effort has been put into building effective supply chain optimization models, but due to their size and complexity, they can be difficult to build and manage. With advances in machine learning, particularly reinforcement learning, we can train a machine learning model to make these decisions for us, and in many cases, do so better than traditional approaches!

TL;DR

We train a deep reinforcement learning model using Ray and or-gym to optimize a multi-echelon inventory management model and benchmark it against a derivative free optimization model using Powell’s Method.

Multi-Echelon Supply Chain

In our example, we’re going to work with a multi-echelon supply chain model with lead times. This means that we have different stages of our supply chain that we need to make decisions for, and each decision that we make at one level is going to affect decisions downstream. In our case, we have M stages running from the producer of our raw materials all the way to our customers. Each stage along the way has a different lead time, the time it takes for the output of one stage to arrive and become the input for the next stage in the chain. This may be 5 days, 10 days, or whatever. The longer these lead times become, the earlier you need to anticipate customer orders and demand to ensure you don’t stock out or lose sales!

If this in-depth educational content is useful for you, you can subscribe to our AI research mailing list to be alerted when we release new material.

Inventory Management with OR-Gym

The OR-Gym library has a few multi-echelon supply chain models ready to go that simulate this structure. For this, we’ll use the InvManagement-v1 environment, which has the structure shown above but results in lost sales if you don’t have sufficient inventory to meet customer demand.

If you haven’t already, go ahead and install the package with:

pip install or-gym

Once installed, we can set up our environment with:

import or_gym

env = or_gym.make('InvManagement-v1')

This is a four-echelon supply chain by default. The actions determine how much material to order from the echelon above at each time step. The order quantities are limited by the capacity of the supplier and their current inventory. So, if you order 150 widgets from a supplier that has a shipment capacity of 100 widgets and only has 90 widgets on hand, you’re only going to get 90 sent.
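To make that capping logic concrete, here is a minimal sketch of the widget example above; the clipping rule below is an assumption that mirrors the description, the environment handles this internally.

# Requested order vs. what the upstream echelon can actually send
order = 150      # widgets requested from the supplier
capacity = 100   # supplier's shipment capacity
on_hand = 90     # supplier's current inventory

shipped = min(order, capacity, on_hand)
print(shipped)   # 90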

Each echelon has its own cost structure, pricing, and lead times. The last echelon (Stage 3 in this case) provides raw materials, and we don’t have any inventory constraints on this stage, assuming that the mine, oil well, forest, or whatever produces your raw material inputs, is large enough that this isn’t a constraint we need to concern ourselves with.

Default parameter values for the InvManagement-v1 environment.

As with all or-gym environments, if these settings don’t suit you, simply pass an environment configuration dictionary to the make function to customize your supply chain accordingly (an example is given here).
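For example, a customized environment might look something like the snippet below. The configuration key shown is a placeholder, since the exact parameter names depend on the or-gym version, so check the library’s documentation before relying on it.

import or_gym

# Hypothetical configuration override -- the key name is illustrative,
# not taken from the or-gym docs.
env_config = {'periods': 50}  # e.g., shorten the simulation horizon
env = or_gym.make('InvManagement-v1', env_config=env_config)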

Training with Ray

To train your environment, we’re going to leverage the Ray library to speed up our training, so go ahead and import your packages.

import or_gym
from or_gym.utils import create_env
import ray
from ray.rllib import agents
from ray import tune

To get started, we’re going to need a brief registration function to ensure that Ray knows about the environment we want to run. We can register that with the register_env function shown below.

def register_env(env_name, env_config={}):
    env = create_env(env_name)
    tune.register_env(env_name,
        lambda env_name: env(env_name, env_config=env_config))

From here, we can set up our RL configuration and everything we need to train the model.

# Environment and RL Configuration Settings
env_name = 'InvManagement-v1'
env_config = {}  # Change environment parameters here
rl_config = dict(
    env=env_name,
    num_workers=2,
    env_config=env_config,
    model=dict(
        vf_share_layers=False,
        fcnet_activation='elu',
        fcnet_hiddens=[256, 256]
    ),
    lr=1e-5
)

# Register environment
register_env(env_name, env_config)

The rl_config dictionary is where you can set all of the relevant hyperparameters or set your system to run on a GPU. Here, we’re just going to use two workers for parallelization and train a two-layer network with an ELU activation function. Additionally, if you’re going to use tune for hyperparameter tuning, you can use tools like tune.grid_search() to systematically update learning rates, change the network, or whatever you like.
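As a rough sketch of what such a sweep could look like (the exact tune.run arguments vary between Ray versions, so treat this as illustrative rather than the definitive setup):

from ray import tune

# Sweep the learning rate while keeping the rest of rl_config fixed.
tune_config = dict(rl_config)
tune_config['lr'] = tune.grid_search([1e-4, 1e-5, 1e-6])

analysis = tune.run(
    'PPO',                             # same algorithm used below
    config=tune_config,
    stop={'training_iteration': 100},  # arbitrary budget for the sweep
)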

Once you’re happy with that, go ahead and choose your algorithm and get to training! Below, I just use the PPO algorithm because I find it trains well on most environments.

# Initialize Ray and Build Agent
ray.init(ignore_reinit_error=True)
agent = agents.ppo.PPOTrainer(env=env_name, config=rl_config)

results = []
for i in range(500):
    res = agent.train()
    results.append(res)
    if (i+1) % 5 == 0:
        print('\rIter: {}\tReward: {:.2f}'.format(
            i+1, res['episode_reward_mean']), end='')
ray.shutdown()

The code above will initialize ray, then build the agent according to the configuration you specified previously. If you’re happy with that, then let it run for a bit and see how it does!

One thing to note with this environment: if the learning rate is too high, the policy function will begin to diverge such that the loss becomes astronomically large. At that point, you’ll wind up getting an error, typically stemming from Ray’s default pre-processor with state showing bizarre values because the actions being given by the network are all nan. This is easy to fix by bringing the learning rate down a bit and trying again.
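In practice, that fix is just a one-line change before rebuilding the agent, something like:

# If training diverges, drop the learning rate and rebuild the agent.
rl_config['lr'] = 1e-6   # down from 1e-5
agent = agents.ppo.PPOTrainer(env=env_name, config=rl_config)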

Let’s take a look at the performance.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec

# Unpack values from each iteration
rewards = np.hstack([i['hist_stats']['episode_reward']
    for i in results])
pol_loss = [
    i['info']['learner']['default_policy']['policy_loss']
    for i in results]
vf_loss = [
    i['info']['learner']['default_policy']['vf_loss']
    for i in results]

p = 100
mean_rewards = np.array([np.mean(rewards[i-p:i+1])
    if i >= p else np.mean(rewards[:i+1])
    for i, _ in enumerate(rewards)])
std_rewards = np.array([np.std(rewards[i-p:i+1])
    if i >= p else np.std(rewards[:i+1])
    for i, _ in enumerate(rewards)])

fig = plt.figure(constrained_layout=True, figsize=(20, 10))
gs = fig.add_gridspec(2, 4)
ax0 = fig.add_subplot(gs[:, :-2])
ax0.fill_between(np.arange(len(mean_rewards)),
    mean_rewards - std_rewards,
    mean_rewards + std_rewards,
    label='Standard Deviation', alpha=0.3)
ax0.plot(mean_rewards, label='Mean Rewards')
ax0.set_ylabel('Rewards')
ax0.set_xlabel('Episode')
ax0.set_title('Training Rewards')
ax0.legend()

ax1 = fig.add_subplot(gs[0, 2:])
ax1.plot(pol_loss)
ax1.set_ylabel('Loss')
ax1.set_xlabel('Iteration')
ax1.set_title('Policy Loss')

ax2 = fig.add_subplot(gs[1, 2:])
ax2.plot(vf_loss)
ax2.set_ylabel('Loss')
ax2.set_xlabel('Iteration')
ax2.set_title('Value Function Loss')

plt.show()
Training rewards, policy loss, and value function loss. Image by author.

It looks like our agent learned a decent policy!

One of the difficulties of deep reinforcement learning for these classic, operations research problems is the lack of optimality guarantees. In other words, we can look at that training curve above and see that it is learning a better and better policy — and it seems to be converging on a policy — but we don’t know how good that policy is. Could we do better? Should we invest more time (and money) into hyperparameter tuning? To answer this, we need to turn to some different methods and develop a benchmark.

Derivative Free Optimization

A good way to benchmark an RL model is with derivative free optimization (DFO). Like RL, DFO treats the system as a black-box model providing inputs and getting some feedback in return to try again as it seeks the optimal value.

Unlike RL, DFO has no concept of a state. This means that we will try to find a fixed re-order policy that brings inventory up to a certain level to balance holding costs and profit from sales. For example, if the policy at stage 0 is to re-order up to 10 widgets and we currently have 4 widgets, then the policy says we re-order 6. In the RL case, the agent would take into account the current pipeline and all of the other information that we provide in the state. So RL is more adaptive and ought to outperform a straightforward DFO implementation. If it doesn’t, then we know we need to go back to the drawing board.
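The core of that fixed rule fits in a line or two; a minimal sketch of the stage-0 example above:

# Re-order up to a fixed target level (stage-0 example from above).
target_level = 10
current_inventory = 4
reorder_qty = max(target_level - current_inventory, 0)
print(reorder_qty)  # 6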

While it may sound simplistic, this fixed re-order policy isn’t unusual in industrial applications, partly because real supply chains consist of many more variables and interrelated decisions than we’re modeling here. So a fixed policy is tractable and something that supply chain professionals can easily work with.

Implementing DFO

There are a lot of different algorithms and solvers out there for DFO. For our purposes, we’re going to leverage Scipy’s optimize library to implement Powell’s Method. We won’t get into the details here, but this is a way to quickly find minima on functions and can be used for discrete optimization – like we have here.

from scipy.optimize import minimize

Because we’re going to be working with a fixed re-order policy, we need a quick function to translate inventory levels into actions to evaluate.

def base_stock_policy(policy, env):
    '''
    Implements a re-order up-to policy. This means that for
    each node in the network, if the inventory at that node
    falls below the level denoted by the policy, we will
    re-order inventory to bring it to the policy level.

    For example, policy at a node is 10, current inventory
    is 5: the action is to order 5 units.
    '''
    assert len(policy) == len(env.init_inv), (
        'Policy should match number of nodes in network' +
        '({}, {}).'.format(
            len(policy), len(env.init_inv)))

    # Get echelon inventory levels
    if env.period == 0:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period])
    else:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period] - env.B[env.period-1, :-1])

    # Get unconstrained actions
    unc_actions = policy - inv_ech
    unc_actions = np.where(unc_actions > 0, unc_actions, 0)

    # Ensure that actions can be fulfilled by checking
    # constraints
    inv_const = np.hstack([env.I[env.period, 1:], np.Inf])
    actions = np.minimum(env.c,
        np.minimum(unc_actions, inv_const))
    return actions

The base_stock_policy function takes the policy levels we supply and calculates the difference between the level and the inventory as described above. One thing to note, when we calculate the inventory level, we include all of the inventory in transit to the stage as well (given in env.T). For example, if the current inventory on hand for stage 0 is 100, and there is a lead time of 5 days between stage 0 and stage 1, then we take all of those orders for the past 5 days into account as well. So, if stage 0 ordered 10 units each day, then the inventory at this echelon would be 150. This makes policy levels greater than capacity meaningful because we’re looking at more than just the inventory in our warehouse today, but looking at everything in transit too.
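A quick numeric check of that echelon-inventory example (the specific values are just the ones assumed in the paragraph above):

# On-hand inventory plus everything still in transit to the stage.
on_hand = 100
lead_time_days = 5
daily_order = 10

in_transit = lead_time_days * daily_order   # 50 units in the pipeline
echelon_inventory = on_hand + in_transit    # 150
print(echelon_inventory)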

Our DFO method needs to make function evaluation calls to see how the selected variables perform. In our case, we have an environment to evaluate, so we need a function that will run an episode of our environment and return the appropriate results.

def dfo_func(policy, env, *args):
    '''
    Runs an episode based on current base-stock model
    settings. This allows us to use our environment for the
    DFO optimizer.
    '''
    env.reset()  # Ensure env is fresh
    rewards = []
    done = False
    while not done:
        action = base_stock_policy(policy, env)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
        if done:
            break

    rewards = np.array(rewards)
    prob = env.demand_dist.pmf(env.D, **env.dist_param)

    # Return negative of expected profit
    return -1 / env.num_periods * np.sum(prob * rewards)

Rather than returning the sum of the rewards, we’re returning the negative expectation of our rewards. The reason for the negative is that the SciPy function we’re using seeks to minimize, whereas our environment is designed to maximize the reward, so we invert the value to ensure everything is pointing in the right direction. We calculate the expected rewards by multiplying by the probability of our demand based on the distribution. We could take more samples to estimate the distribution and calculate our expectation that way (and for many real-world applications, this would be required), but here we have access to the true distribution, so we can use it to reduce our computational burden.

Finally, we’re ready to optimize.

The following function will build an environment based on your configuration settings, take our dfo_func to evaluate, and apply Powell’s Method to the problem. It will return our policy and ensure that our answer contains only positive integers (e.g. we can’t order half a widget or a negative number of widgets).

def optimize_inventory_policy(env_name, fun,
    init_policy=None, env_config={}, method='Powell'):

    env = or_gym.make(env_name, env_config=env_config)

    if init_policy is None:
        init_policy = np.ones(env.num_stages-1)

    # Optimize policy
    out = minimize(fun=fun, x0=init_policy, args=env,
        method=method)
    policy = out.x.copy()

    # Policy must be positive integer
    policy = np.round(np.maximum(policy, 0), 0).astype(int)

    return policy, out

Now it’s time to put it all together.

policy, out = optimize_inventory_policy('InvManagement-v1', dfo_func)
print("Re-order levels: {}".format(policy))
print("DFO Info:\n{}".format(out))

Re-order levels: [540 216 81]
DFO Info:
   direc: array([[  0.        ,   0.        ,   1.        ],
       [  0.        ,   1.        ,   0.        ],
       [206.39353826,  81.74560612,  28.78995703]])
     fun: -0.9450780368543933
 message: 'Optimization terminated successfully.'
    nfev: 212
     nit: 5
  status: 0
 success: True
       x: array([539.7995151 , 216.38046861,  80.66902905])

Our DFO model found a fixed-stock policy with re-order levels at 540 for stage 0, 216 for stage 1, and 81 for stage 2. It did this with only 212 function evaluations, i.e. it simulated 212 episodes to find the optimal value.

We can then feed this policy into our environment, say 1,000 times, to generate some statistics and compare it to our RL solution.

env = or_gym.make(env_name, env_config=env_config)
eps = 1000
rewards = []
for i in range(eps):
    env.reset()
    reward = 0
    while True:
        action = base_stock_policy(policy, env)
        s, r, done, _ = env.step(action)
        reward += r
        if done:
            rewards.append(reward)
            break
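From those 1,000 episodes you can then pull simple summary statistics for the comparison in the next section; a minimal sketch:

# Summarize the DFO policy's performance over the evaluation episodes.
rewards = np.array(rewards)
print('DFO mean episode reward: {:.1f} +/- {:.1f}'.format(
    rewards.mean(), rewards.std()))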

Comparing Performance

Before we get into the reward comparisons, note that these are not perfect, 1:1 comparisons. As mentioned before, DFO yields a fixed policy whereas RL has a more flexible, dynamic policy that changes based on state information. Our DFO approach was also given some information, namely the demand probabilities used to calculate the expectation, that RL had to infer from additional sampling. So while RL learned from ~65k episodes and DFO only had to make 212 function calls, they aren’t exactly comparable. Considering that enumerating every meaningful fixed policy even once would require ~200 million episodes, RL doesn’t look so sample inefficient given its task.

So, how do these stack up?

Reward comparison between the RL and DFO policies. Image by author.

What we can see above is that RL does indeed outperform our DFO policy by 11% on average (460 vs. 414). The RL model overtook the DFO policy after ~15k episodes and improved steadily after that. There is somewhat higher variance with the RL policy, however, with a few terrible episodes thrown into the mix. All things considered, we did get stronger results overall from the RL approach, as expected.

In this case, neither method was very difficult to implement nor computationally intensive. I forgot to change my rl_config settings to run on my GPU and it still only took about 25 minutes to train on my laptop while the DFO model took ~2 seconds to run. More complex models may not be so friendly in either case.

Another thing to note: both methods can be very sensitive to initial conditions, and neither is guaranteed to find the optimal policy in every case. If you have a problem you’d like to apply RL to, maybe use a simple DFO solver first, try a few initial conditions to get a feel for the problem, then spin up the full RL model. You may find that the DFO policy is sufficient for your task.
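Probing those initial conditions can be as simple as looping over a few starting guesses with the optimizer defined earlier (a sketch, reusing the functions from above):

# Try a few initial policies and compare where Powell's Method lands.
for x0 in [np.ones(3), np.full(3, 50), np.full(3, 200)]:
    policy, out = optimize_inventory_policy(
        'InvManagement-v1', dfo_func, init_policy=x0)
    print('start={} -> policy={} obj={:.3f}'.format(x0, policy, out.fun))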

Hopefully this gave a good overview of how to use these methods and the or-gym library. Leave feedback or questions if you have any!

This article was originally published on DataHubbs and re-published to TOPBOTS with permission from the author.

Enjoy this article? Sign up for more applied AI updates.

We’ll let you know when we release more technical education.


Video streaming and deep learning: Using Amazon Kinesis Video Streams with Deep Java Library


Amazon Kinesis Video Streams allows you to easily ingest video data from connected devices for processing. One of the most effective ways to process this video data is using the power of deep learning. You can create an efficient service infrastructure to run these computations with a Java server, but Java support for deep learning has traditionally been difficult to come by.

Deep Java Library (DJL) is a new open-source deep learning framework for Java built by AWS. It sits on top of native engines, so you can train entirely in DJL while using different engines on the backend, such as PyTorch and Apache MXNet. It can also import and run models built using Tensorflow, Keras, and PyTorch. DJL can bridge the ease of Kinesis Video Streams with the power of deep learning for your own video analytics application.

In this tutorial, we walk through running an object detection model against a Kinesis video stream. In object detection, the computer finds different types of objects in an image and draws a bounding box, describing their locations inside the image. For example, you can use detection to recognize objects like dogs or people to avoid false alarms in a home security camera.

The full project and instructions to run it are available in the DJL demo repository.

Setting up

To begin, create a new Java project with the following dependencies, shown here in gradle format:

dependencies {
    implementation platform("ai.djl:bom:0.8.0")
    implementation "ai.djl:api"
    runtimeOnly "ai.djl.mxnet:mxnet-model-zoo"
    runtimeOnly "ai.djl.mxnet:mxnet-native-auto"
    implementation "software.amazon.awssdk:kinesisvideo:2.10.75"
    implementation "software.amazon.kinesis:amazon-kinesis-client:2.2.9"
    implementation "com.amazonaws:amazon-kinesis-video-streams-parser-library:1.0.13"
}

The DJL ImageVisitor

Because the model works on images, you can create a DJL FrameVisitor that visits and runs your model on each frame in the video. In real applications, it might help to only run your model on a fraction of the frames in the video. See the following code:

FrameVisitor frameVisitor = FrameVisitor.create(new DjlImageVisitor());

The DjlImageVisitor class extends the H264FrameDecoder to provide the capability to convert the frame into a standard Java BufferedImage. Because DJL natively supports this class, you can run it directly from the BufferedImage.

In DJL, the Predictor is used to run the trained model against live data. This is often referred to as inference or prediction. It fully encapsulates the inference experience by taking your input through preprocessing to prepare it into the model’s data structure, running the model itself, and postprocessing the data into an easy-to-use output class. In the following code block, the Predictor converts an Image to the set of outputs, DetectedObjects. An ImageFactory converts a standard Java BufferedImage into the DJL Image class:

public class DjlImageVisitor extends H264FrameDecoder {

    Predictor<Image, DetectedObjects> predictor;
    ImageFactory factory = ImageFactory.getInstance();

    ...
}

DJL also provides a model zoo where you can find many models trained on different tasks, datasets, and engines. For now, create a Predictor using the basic SSD object detection model. You can also use the default preprocessing and postprocessing defined within the model zoo to directly create a Predictor. For your own applications, you can define custom processing in a Translator and pass it in when creating a new Predictor:

Criteria<Image, DetectedObjects> criteria = Criteria.builder()
        .setTypes(Image.class, DetectedObjects.class)
        .optArtifactId("ai.djl.mxnet:ssd")
        .build();
predictor = ModelZoo.loadModel(criteria).newPredictor();

Then, you just need to define the FrameVisitor’s process method, which is called to handle the various frames, as follows. You convert the Frame into a BufferedImage using the decodeH264Frame method defined within the H264FrameDecoder. You wrap that in an Image using the ImageFactory you created earlier. Then, you use your Predictor to run prediction with the SSD model. See the following code:

@Override
public void process(
        Frame frame,
        MkvTrackMetadata trackMetadata,
        Optional<FragmentMetadata> fragmentMetadata)
        throws FrameProcessException {

    Image image = factory.fromImage(decodeH264Frame(frame, trackMetadata));
    DetectedObjects prediction = predictor.predict(image);
}

Using the prediction

At this point, you have the detected objects and can use them for whatever your application requires. For a simple application, you could just print out all the class names that you detected to standard out as follows:

String classStr = prediction
        .items()
        .stream()
        .map(Classification::getClassName)
        .collect(Collectors.joining(", "));
System.out.println("Found objects: " + classStr);

You could also find out if there is a high probability that a person was in the image using the following code:

boolean hasPerson = prediction
        .items()
        .stream()
        .anyMatch(
                c -> "person".equals(c.getClassName())
                        && c.getProbability() > 0.5);

Another option is to use the image visualization methods in the Image class to draw the bounding boxes on top of the original image. Then, you can get a visual representation of the detected objects. See the following code:

image.drawBoundingBoxes(prediction);
Path outputFile = Paths.get("out/annotatedImage.png");
try (OutputStream os = Files.newOutputStream(outputFile)) {
    image.save(os, "png");
}

Running the stream

You’re now ready to set up your video stream. For instructions, see Create a Kinesis Video Stream. Make sure to record the REGION and STREAM_NAME that you used so you can pass it into your application.

Then, create a new thread pool to run your application. You also need to build a GetMediaWorker with all the data for your video stream and run it on the thread pool. For your GetMediaWorker, you need to pass in the data you pulled from the Kinesis Video Streams console describing your video stream. You also need to provide the AWS credentials for accessing the stream. Use the SystemPropertiesCredentialsProvider, which finds the credentials in the JVM system properties. You can find more details about providing these credentials in the demo repository. Lastly, pass in StartSelectorType.NOW to start using the stream immediately. See the following code:

ExecutorService executorService = Executors.newFixedThreadPool(1);

AmazonKinesisVideoClientBuilder amazonKinesisVideoBuilder = AmazonKinesisVideoClientBuilder.standard();
amazonKinesisVideoBuilder.setRegion(REGION.getName());
amazonKinesisVideoBuilder.setCredentials(new SystemPropertiesCredentialsProvider());
AmazonKinesisVideo amazonKinesisVideo = amazonKinesisVideoBuilder.build();

GetMediaWorker getMediaWorker = GetMediaWorker.create(
        REGION,
        new SystemPropertiesCredentialsProvider(),
        STREAM_NAME,
        new StartSelector().withStartSelectorType(StartSelectorType.NOW),
        amazonKinesisVideo,
        frameVisitor);
executorService.submit(getMediaWorker);

Conclusion

That’s it! You’re ready to begin sending data to your stream and detecting the objects in the video. You can find more information about the Kinesis Video Streams API in the Amazon Kinesis Video Streams Producer SDK Java GitHub repo. The full Kinesis Video Streams DJL demo is available with the rest of the DJL demo applications and integrations with many other AWS and Java tools in the demo repository.

Now that you have integrated Kinesis Video Streams and DJL, you can improve your application in many different ways. You can choose additional object detection and image-based models from the more than 70 pre-trained and ready-to-use models in our model zoo from GluonCV, TorchHub, and Keras. You can run these or custom models across any of the engines supported by DJL, including Tensorflow, PyTorch, MXNet, and ONNX Runtime. DJL even has full training support so you can build your own model to add to your video streaming application instead of relying on a pre-trained one.

Don’t forget to follow our GitHub repo, demo repository, Slack channel, and Twitter for more documentation and examples of DJL!


About the Authors

Zach Kimberg is a Software Engineer with AWS Deep Learning working mainly on Apache MXNet for Java and Scala. Outside of work he enjoys reading, especially Fantasy.

Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.

Source: https://aws.amazon.com/blogs/machine-learning/video-streaming-and-deep-learning-using-amazon-kinesis-video-streams-with-deep-java-library/
