RENGA Inc. automates code reviews with Amazon CodeGuru

This guest post was authored by Kazuma Ohara, Director of RENGA Inc., and edited by Yumiko Kanasugi, Solutions Architect at AWS Japan.

RENGA Inc. operates Mansion Note, one of Japan’s most popular condominium review and rating websites, which gets over a million unique visitors per month. Mansion Note provides a service where people can check reviews and rankings of condominiums and apartments all over Japan. People of all positions, such as current residents, former residents, neighbors, experts, real estate agents, and property owners, clarify their positions first and then post and share their candid opinions and reviews about the condominiums. This “wisdom of crowds” is expected to help potential buyers and tenants better imagine their new home before moving in, and can help eliminate regrets such as, “This is different from what I expected,” or, “I should have chosen a different condo.”

The company has a total of six engineers, including myself. Code reviews have been an essential part of our development process, because RENGA takes code quality very seriously. However, code review work grew in proportion to the amount of development, which placed an increasing burden on reviewers. Also, no matter how many times we reviewed code, some bugs went unnoticed, so we needed a mechanism to conduct code reviews more exhaustively.

We saw the announcement of Amazon CodeGuru at re:Invent 2019. The moment we learned that CodeGuru is a machine learning (ML)-based code review service, we knew it was exactly the tool we were looking for. At the time of the announcement, we were making significant improvements to our source code, so we thought that CodeGuru Reviewer might help us with that. We decided to adopt the tool, and it pointed out issues that neither our team members nor other static analysis tools had ever detected. In this post, I talk about why RENGA decided to adopt CodeGuru, as well as its adoption process.

Maintaining code quality

RENGA was founded in 2012; 2020 marks its 8th anniversary. Although our product is maturing, we still invest a lot of resources in development so we can extend features quickly. We not only accelerate the development cycle, but also give the same level of priority to maintaining code quality. When extending features, poor-quality code adds complexity to the system and can become technical debt. On the other hand, as long as consistent code quality is maintained, the code itself stays simple, so scaling the system doesn’t prevent developers from extending features. Keeping the balance between agility and quality is important, especially for startups that continuously release new features.

RENGA has a two-step code review process. When a developer commits a fix to our remote repository on GitHub and makes a pull request, two senior members review it first. Then I do the final check and merge it into the primary branch (unless I find an issue). We also check the minimum coding rules using Checkstyle during the build phase.

In the past, we had a challenge with the cost and quality of code reviews. As the amount of code increased at RENGA, the burden on reviewers also increased. We spend about 5% of our development time on code reviews, and reviewers spend an additional hour a day on average on them. Code reviews can, however, become a bottleneck when we want to quickly release new features and promptly deliver that value to our users. Also, the more code we need to review, the harder it becomes to identify issues accurately, and some issues may even be overlooked. One solution is to add more reviewers, but that isn’t easy, because code reviews require not only extensive business and technical knowledge, but also an understanding of the top modules. So we needed an automated tool that could offload the reviewers’ workload.

Adopting CodeGuru Reviewer

CodeGuru Reviewer is an automated tool that you can seamlessly integrate into your development pipeline, so we found it very easy to adopt. We had high expectations that the tool would not only reduce review costs, but also, because it is based on ML, give us insights from a perspective different from a human reviewer’s. The tool was still in preview, but because a free tier was offered, we decided to try it out.

The following architecture illustrates RENGA’s development pipeline.

Setting up CodeGuru Reviewer is very easy. All you need to do is select a repository on GitHub and associate it with the tool. AWS recommends that you create a new GitHub user for CodeGuru Reviewer before associating a GitHub repository with the tool. CodeGuru Reviewer is enabled after the association is complete.

The following screenshot shows the Associate repository page on the CodeGuru console.

CodeGuru Reviewer is triggered by a pull request. The tool provides recommendations by adding comments to the pull request, usually within 15 minutes after the request is made.

The following text is a recommendation that CodeGuru Reviewer generated:

It is more efficient to directly use Stream::min or Stream::max than Stream::sorted and Stream::findFirst. The former is O(n) in terms of time while the latter is not. Also the former is O(1) and the latter is O(n) in terms of memory.

The part of the code the recommendation pointed out was actually not a bottleneck, but it did help us improve performance. Learning this method has helped our developers code better with more confidence.

The following text is another example of a recommendation from CodeGuru Reviewer:

Consider closing the resource returned by the following method call: newInputStream. Currently, there are execution paths that do not contain closure statements, e.g., when exception is thrown by SampleData.read. Either a) close the object returned by newInputStream() in a try-finally block or b) close the resource by declaring the object returned by newInputStream() in a try-with-resources block.

Although there was no actual resource leakage, we modified the part that was pointed out to clarify that no leakage will occur, which improved the code readability.

After adopting CodeGuru Reviewer, we feel that the product generates fewer recommendations than other existing static analysis tools, but its recommendations are highly accurate, with fewer false positives. Too many recommendations require extra time for triage, so accurate recommendations are a big bonus for busy developers.

Summary

Although the code review process is important, it shouldn’t increase the workload for reviewers and become a bottleneck in development. By adopting CodeGuru Reviewer, we successfully automated code reviews and reduced reviewers’ workloads. Furthermore, learning the best practices of coding—which we weren’t aware of—has helped us develop with more confidence. Going forward, we plan on measuring metrics such as cyclomatic complexity so we can provide higher-quality services to our customers promptly. We also expect that CodeGuru Reviewer will further expand its recommendation items.


About the Authors

Kazuma Ohara is the Director of RENGA Inc., an internet services company headquartered in Japan.

Yumiko Kanasugi is a Solutions Architect with Amazon Web Services Japan, helping digital native business customers utilize AWS.

Source: https://aws.amazon.com/blogs/machine-learning/renga-inc-automates-code-reviews-with-amazon-codeguru/

How to Improve Your Supply Chain With Deep Reinforcement Learning

What has set Amazon apart from the competition in online retail? Their supply chain. In fact, this has long been one of the greatest strengths of one of their chief competitors, Walmart.

Supply chains are highly complex systems consisting of hundreds if not thousands of manufacturers and logistics carriers around the world who combine resources to create the products we use and consume every day. Tracking all of the inputs to even a single, simple product would be a staggering task. Yet supply chain organizations inside vertically integrated corporations are tasked with managing inputs all the way from raw materials to manufacturing, warehousing, and distribution to customers. The companies that do this best cut down on waste from excess storage, unneeded transportation costs, and time lost getting products and materials to later stages in the system. Optimizing these systems is a key component in businesses as dissimilar as Apple and Saudi Aramco.

A lot of time and effort has been put into building effective supply chain optimization models, but due to their size and complexity, they can be difficult to build and manage. With advances in machine learning, particularly reinforcement learning, we can train a machine learning model to make these decisions for us, and in many cases, do so better than traditional approaches!

TL;DR

We train a deep reinforcement learning model using Ray and or-gym to optimize a multi-echelon inventory management model and benchmark it against a derivative-free optimization model using Powell’s Method.

Multi-Echelon Supply Chain

In our example, we’re going to work with a multi-echelon supply chain model with lead times. This means we have different stages of our supply chain to make decisions for, and each decision we make at one level affects decisions downstream. In our case, we have M stages, going all the way back from our customers to the producer of our raw materials. Each stage along the way has a different lead time, the time it takes for the output of one stage to arrive and become the input for the next stage in the chain. This may be 5 days, 10 days, whatever. The longer these lead times become, the earlier you need to anticipate customer orders and demand to ensure you don’t stock out or lose sales!

Inventory Management with OR-Gym

The OR-Gym library has a few multi-echelon supply chain models ready to go to simulate this structure. For this, we’ll use the InvManagement-v1 environment, which has the structure described above, but results in lost sales if you don’t have sufficient inventory to meet customer demand.

If you haven’t already, go ahead and install the package with:

pip install or-gym

Once installed, we can set up our environment with:

env = or_gym.make('InvManagement-v1')

This is a four-echelon supply chain by default. The actions determine how much material to order from the echelon above at each time step. The order quantities are limited by the capacity of the supplier and its current inventory. So, if you order 150 widgets from a supplier that has a shipment capacity of 100 widgets and only 90 widgets on hand, only 90 will be sent.
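To make the capping concrete, here is a minimal sketch of the arithmetic described above (hypothetical numbers, not the or-gym source):

import numpy as np

# We request 150 widgets, but the supplier can ship at most 100
# and only has 90 on hand, so only 90 are sent.
requested = np.array([150])
ship_capacity = np.array([100])
on_hand = np.array([90])

shipped = np.minimum(requested, np.minimum(ship_capacity, on_hand))
print(shipped)  # [90]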

Each echelon has its own cost structure, pricing, and lead times. The last echelon (Stage 3 in this case) provides raw materials, and we don’t place any inventory constraints on this stage, assuming that the mine, oil well, forest, or whatever produces your raw material inputs, is large enough that this isn’t a constraint we need to concern ourselves with.

Default parameter values for the InvManagement-v1 environment.

As with all or-gym environments, if these settings don’t suit you, simply pass an environment configuration dictionary to the make function to customize your supply chain accordingly, as in the brief sketch below.
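Note that 'periods' below is an assumed key used purely to illustrate the mechanism; check the or-gym documentation for the parameters InvManagement-v1 actually accepts.

import or_gym

# Assumed configuration key, shown only to illustrate the override mechanism;
# consult the or-gym docs for the real parameter names.
env_config = {'periods': 50}
env = or_gym.make('InvManagement-v1', env_config=env_config)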

Training with Ray

To train an agent on this environment, we’re going to leverage the Ray library to speed up training, so go ahead and import your packages.

import or_gym
from or_gym.utils import create_env
import ray
from ray.rllib import agents
from ray import tune

To get started, we’re going to need a brief registration function to ensure that Ray knows about the environment we want to run. We can register that with the register_env function shown below.

def register_env(env_name, env_config={}):
    env = create_env(env_name)
    tune.register_env(env_name,
        lambda env_name: env(env_name, env_config=env_config))

From here, we can set up our RL configuration and everything we need to train the model.

# Environment and RL Configuration Settings
env_name = 'InvManagement-v1'
env_config = {}  # Change environment parameters here
rl_config = dict(
    env=env_name,
    num_workers=2,
    env_config=env_config,
    model=dict(
        vf_share_layers=False,
        fcnet_activation='elu',
        fcnet_hiddens=[256, 256]
    ),
    lr=1e-5
)

# Register environment
register_env(env_name, env_config)

The rl_config dictionary is where you can set all of the relevant hyperparameters or configure your system to run on a GPU. Here, we’re just going to use two workers for parallelization and train a two-layer network with an ELU activation function. Additionally, if you’re going to use tune for hyperparameter tuning, you can use tools like tune.grid_search() to systematically sweep learning rates, change the network, or whatever you like.
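As a rough illustration, a learning-rate grid search with tune might look like the following sketch; the values and stopping criterion are placeholders, not recommendations.

from ray import tune

# Copy the config and sweep the learning rate over a few candidate values.
tune_config = dict(rl_config)
tune_config['lr'] = tune.grid_search([1e-6, 1e-5, 1e-4])

# tune.run launches one PPO training run per grid point.
analysis = tune.run('PPO', config=tune_config,
    stop={'training_iteration': 50})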

Once you’re happy with that, go ahead and choose your algorithm and get to training! Below, I just use the PPO algorithm because I find it trains well on most environments.

# Initialize Ray and Build Agent
ray.init(ignore_reinit_error=True)
agent = agents.ppo.PPOTrainer(env=env_name, config=rl_config)

results = []
for i in range(500):
    res = agent.train()
    results.append(res)
    if (i + 1) % 5 == 0:
        print('\rIter: {}\tReward: {:.2f}'.format(
            i + 1, res['episode_reward_mean']), end='')

ray.shutdown()

The code above will initialize ray, then build the agent according to the configuration you specified previously. If you’re happy with that, then let it run for a bit and see how it does!

One thing to note with this environment: if the learning rate is too high, the policy function will begin to diverge such that the loss becomes astronomically large. At that point, you’ll wind up getting an error, typically stemming from Ray’s default pre-processor with state showing bizarre values because the actions being given by the network are all nan. This is easy to fix by bringing the learning rate down a bit and trying again.

Let’s take a look at the performance.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec

# Unpack values from each iteration
rewards = np.hstack([i['hist_stats']['episode_reward']
    for i in results])
pol_loss = [
    i['info']['learner']['default_policy']['policy_loss']
    for i in results]
vf_loss = [
    i['info']['learner']['default_policy']['vf_loss']
    for i in results]

p = 100
mean_rewards = np.array([np.mean(rewards[i-p:i+1])
    if i >= p else np.mean(rewards[:i+1])
    for i, _ in enumerate(rewards)])
std_rewards = np.array([np.std(rewards[i-p:i+1])
    if i >= p else np.std(rewards[:i+1])
    for i, _ in enumerate(rewards)])

fig = plt.figure(constrained_layout=True, figsize=(20, 10))
gs = fig.add_gridspec(2, 4)

ax0 = fig.add_subplot(gs[:, :-2])
ax0.fill_between(np.arange(len(mean_rewards)),
    mean_rewards - std_rewards,
    mean_rewards + std_rewards,
    label='Standard Deviation', alpha=0.3)
ax0.plot(mean_rewards, label='Mean Rewards')
ax0.set_ylabel('Rewards')
ax0.set_xlabel('Episode')
ax0.set_title('Training Rewards')
ax0.legend()

ax1 = fig.add_subplot(gs[0, 2:])
ax1.plot(pol_loss)
ax1.set_ylabel('Loss')
ax1.set_xlabel('Iteration')
ax1.set_title('Policy Loss')

ax2 = fig.add_subplot(gs[1, 2:])
ax2.plot(vf_loss)
ax2.set_ylabel('Loss')
ax2.set_xlabel('Iteration')
ax2.set_title('Value Function Loss')

plt.show()
Image by author.

It looks like our agent learned a decent policy!

One of the difficulties of deep reinforcement learning for these classic, operations research problems is the lack of optimality guarantees. In other words, we can look at that training curve above and see that it is learning a better and better policy — and it seems to be converging on a policy — but we don’t know how good that policy is. Could we do better? Should we invest more time (and money) into hyperparameter tuning? To answer this, we need to turn to some different methods and develop a benchmark.

Derivative Free Optimization

A good way to benchmark an RL model is with derivative free optimization (DFO). Like RL, DFO treats the system as a black-box model providing inputs and getting some feedback in return to try again as it seeks the optimal value.

Unlike RL, DFO has no concept of a state. This means that we will try to find a fixed re-order policy that brings inventory up to a certain level to balance holding costs against profit from sales. For example, if the policy at stage 0 is to re-order up to 10 widgets and we currently have 4 widgets, then the policy says to re-order 6. In the RL case, the agent would take into account the current pipeline and all of the other information that we provide in the state. So RL is more adaptive and ought to outperform a straightforward DFO implementation. If it doesn’t, then we know we need to go back to the drawing board.
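In code, the fixed policy boils down to a one-liner. The following toy sketch uses hypothetical numbers and is separate from the base_stock_policy function defined later, which also accounts for pipeline inventory and capacity.

import numpy as np

# Order-up-to targets and current inventory for three stages.
policy = np.array([10, 25, 40])
inventory = np.array([4, 30, 15])

# Re-order whatever is needed to get back up to the target, never a negative amount.
orders = np.maximum(policy - inventory, 0)
print(orders)  # [ 6  0 25]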

While it may sound simplistic, this fixed re-order policy isn’t unusual in industrial applications, partly because real supply chains consist of many more variables and interrelated decisions than we’re modeling here. So a fixed policy is tractable and something that supply chain professionals can easily work with.

Implementing DFO

There are a lot of different algorithms and solvers out there for DFO. For our purposes, we’re going to leverage Scipy’s optimize library to implement Powell’s Method. We won’t get into the details here, but this is a way to quickly find minima on functions and can be used for discrete optimization – like we have here.

from scipy.optimize import minimize

Because we’re going to be working with a fixed re-order policy, we need a quick function to translate inventory levels into actions to evaluate.

def base_stock_policy(policy, env):
    '''
    Implements a re-order up-to policy. This means that for
    each node in the network, if the inventory at that node
    falls below the level denoted by the policy, we will
    re-order inventory to bring it to the policy level.

    For example, policy at a node is 10, current inventory
    is 5: the action is to order 5 units.
    '''
    assert len(policy) == len(env.init_inv), (
        'Policy should match number of nodes in network' +
        '({}, {}).'.format(
            len(policy), len(env.init_inv)))

    # Get echelon inventory levels
    if env.period == 0:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period])
    else:
        inv_ech = np.cumsum(env.I[env.period] +
            env.T[env.period] - env.B[env.period-1, :-1])

    # Get unconstrained actions
    unc_actions = policy - inv_ech
    unc_actions = np.where(unc_actions > 0, unc_actions, 0)

    # Ensure that actions can be fulfilled by checking
    # constraints
    inv_const = np.hstack([env.I[env.period, 1:], np.Inf])
    actions = np.minimum(env.c,
        np.minimum(unc_actions, inv_const))
    return actions

The base_stock_policy function takes the policy levels we supply and calculates the difference between those levels and the inventory, as described above. One thing to note: when we calculate the inventory level, we include all of the inventory in transit to the stage as well (given in env.T). For example, if the current inventory on hand for stage 0 is 100 and there is a lead time of 5 days between stage 0 and stage 1, then we take all of the orders from the past 5 days into account as well. So, if stage 0 ordered 10 units each day, the inventory at this echelon would be 150. This makes policy levels greater than capacity meaningful, because we’re looking at more than just the inventory in our warehouse today; we’re counting everything in transit too.
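Here is that example worked out as a quick sketch (hypothetical numbers matching the description above):

import numpy as np

# 100 units on hand at stage 0, plus 5 days of in-transit orders of
# 10 units each (lead time of 5 days between stage 0 and stage 1).
on_hand = 100
in_transit = np.full(5, 10)

inventory_position = on_hand + in_transit.sum()
print(inventory_position)  # 150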

Our DFO method needs to make function evaluation calls to see how the selected variables perform. In our case, we have an environment to evaluate, so we need a function that will run an episode of our environment and return the appropriate results.

def dfo_func(policy, env, *args):
    '''
    Runs an episode based on current base-stock model
    settings. This allows us to use our environment for
    the DFO optimizer.
    '''
    env.reset()  # Ensure env is fresh
    rewards = []
    done = False
    while not done:
        action = base_stock_policy(policy, env)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
        if done:
            break

    rewards = np.array(rewards)
    prob = env.demand_dist.pmf(env.D, **env.dist_param)

    # Return negative of expected profit
    return -1 / env.num_periods * np.sum(prob * rewards)

Rather than returning the sum of the rewards, we return the negative expectation of our rewards. The reason for the negative is that the SciPy function we’re using seeks to minimize, whereas our environment is designed to maximize the reward, so we invert the sign to ensure everything points in the right direction. We calculate the expected rewards by weighting them by the probability of the demand values under the demand distribution. We could take more samples to estimate the distribution and calculate our expectation that way (and for many real-world applications, this would be required), but here we have access to the true distribution, so we can use it to reduce our computational burden.
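As a small numeric sketch of that objective (hypothetical rewards and demand probabilities, mirroring the sign convention in dfo_func):

import numpy as np

# Hypothetical per-period rewards and the probability of the demand
# realization that produced each of them.
rewards = np.array([120.0, 90.0, 150.0])
prob = np.array([0.2, 0.5, 0.3])
num_periods = len(rewards)

# Negative expected profit per period, since scipy.optimize.minimize minimizes.
objective = -1 / num_periods * np.sum(prob * rewards)
print(objective)  # -38.0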

Finally, we’re ready to optimize.

The following function builds an environment based on your configuration settings, takes our dfo_func to evaluate, and applies Powell’s Method to the problem. It returns our policy and ensures that the answer contains only non-negative integers (e.g., we can’t order half a widget or a negative number of widgets).

def optimize_inventory_policy(env_name, fun,
    init_policy=None, env_config={}, method='Powell'):

    env = or_gym.make(env_name, env_config=env_config)

    if init_policy is None:
        init_policy = np.ones(env.num_stages-1)

    # Optimize policy
    out = minimize(fun=fun, x0=init_policy, args=env,
        method=method)
    policy = out.x.copy()

    # Policy must be positive integer
    policy = np.round(np.maximum(policy, 0), 0).astype(int)

    return policy, out

Now it’s time to put it all together.

policy, out = optimize_inventory_policy('InvManagement-v1', dfo_func)
print("Re-order levels: {}".format(policy))
print("DFO Info:\n{}".format(out))Re-order levels: [540 216 81]
DFO Info: direc: array([[ 0. , 0. , 1. ], [ 0. , 1. , 0. ], [206.39353826, 81.74560612, 28.78995703]]) fun: -0.9450780368543933 message: 'Optimization terminated successfully.' nfev: 212 nit: 5 status: 0 success: True x: array([539.7995151 , 216.38046861, 80.66902905])

Our DFO model found a fixed re-order policy with levels of 540 for stage 0, 216 for stage 1, and 81 for stage 2. It did this with only 212 function evaluations; that is, it simulated 212 episodes to find the optimal value.

We can then feed this policy into our environment, say 1,000 times, to generate some statistics and compare it to our RL solution.

env = or_gym.make(env_name, env_config=env_config)
eps = 1000
rewards = []
for i in range(eps):
    env.reset()
    reward = 0
    while True:
        action = base_stock_policy(policy, env)
        s, r, done, _ = env.step(action)
        reward += r
        if done:
            rewards.append(reward)
            break

Comparing Performance

Before we get into the reward comparisons, note that these are not perfect, 1:1 comparisons. As mentioned before, DFO yields a fixed policy, whereas RL has a more flexible, dynamic policy that changes based on state information. Our DFO approach was also given some information in the form of demand probabilities to calculate the expectation with; RL had to infer that from additional sampling. So while RL learned from ~65k episodes and DFO only had to make 212 function calls, they aren’t exactly comparable. Considering that enumerating every meaningful fixed policy just once would require ~200 million episodes, RL doesn’t look so sample-inefficient for its task.

So, how do these stack up?

Image by author.

What we can see above is that RL does indeed outperform our DFO policy by 11% on average (460 vs. 414). The RL model overtook the DFO policy after ~15k episodes and improved steadily after that. There is somewhat higher variance with the RL policy, however, with a few terrible episodes thrown into the mix. All things considered, we did get stronger results overall from the RL approach, as expected.

In this case, neither method was particularly difficult to implement or computationally intensive. I forgot to change my rl_config settings to run on my GPU, and it still only took about 25 minutes to train on my laptop, while the DFO model took ~2 seconds to run. More complex models may not be so friendly in either case.

Another thing to note: both methods can be very sensitive to initial conditions, and neither is guaranteed to find the optimal policy in every case. If you have a problem you’d like to apply RL to, maybe use a simple DFO solver first, try a few initial conditions to get a feel for the problem, then spin up the full RL model. You may find that the DFO policy is sufficient for your task.

Hopefully this gave a good overview of how to use these methods and the or-gym library. Leave feedback or questions if you have any!

This article was originally published on DataHubbs and re-published to TOPBOTS with permission from the author.

Video streaming and deep learning: Using Amazon Kinesis Video Streams with Deep Java Library

Amazon Kinesis Video Streams allows you to easily ingest video data from connected devices for processing. One of the most effective ways to process this video data is using the power of deep learning. You can create an efficient service infrastructure to run these computations with a Java server, but Java support for deep learning has traditionally been difficult to come by.

Deep Java Library (DJL) is a new open-source deep learning framework for Java built by AWS. It sits on top of native engines, so you can train entirely in DJL while using different engines on the backend, such as PyTorch and Apache MXNet. It can also import and run models built using Tensorflow, Keras, and PyTorch. DJL can bridge the ease of Kinesis Video Streams with the power of deep learning for your own video analytics application.

In this tutorial, we walk through running an object detection model against a Kinesis video stream. In object detection, the computer finds different types of objects in an image and draws a bounding box, describing their locations inside the image. For example, you can use detection to recognize objects like dogs or people to avoid false alarms in a home security camera.

The full project and instructions to run it are available in the DJL demo repository.

Setting up

To begin, create a new Java project with the following dependencies, shown here in gradle format:

dependencies {
    implementation platform("ai.djl:bom:0.8.0")
    implementation "ai.djl:api"
    runtimeOnly "ai.djl.mxnet:mxnet-model-zoo"
    runtimeOnly "ai.djl.mxnet:mxnet-native-auto"
    implementation "software.amazon.awssdk:kinesisvideo:2.10.75"
    implementation "software.amazon.kinesis:amazon-kinesis-client:2.2.9"
    implementation "com.amazonaws:amazon-kinesis-video-streams-parser-library:1.0.13"
}

The DJL ImageVisitor

Because the model works on images, you can create a DJL FrameVisitor that visits and runs your model on each frame in the video. In real applications, it might help to only run your model on a fraction of the frames in the video. See the following code:

FrameVisitor frameVisitor = FrameVisitor.create(new DjlImageVisitor());

The DjlImageVisitor class extends the H264FrameDecoder to provide the capability to convert the frame into a standard Java BufferedImage. Because DJL natively supports this class, you can run it directly from the BufferedImage.

In DJL, the Predictor is used to run the trained model against live data. This is often referred to as inference or prediction. It fully encapsulates the inference experience by taking your input through preprocessing to prepare it into the model’s data structure, running the model itself, and postprocessing the data into an easy-to-use output class. In the following code block, the Predictor converts an Image to the set of outputs, DetectedObjects. An ImageFactory converts a standard Java BufferedImage into the DJL Image class:

public class DjlImageVisitor extends H264FrameDecoder {

    Predictor<Image, DetectedObjects> predictor;
    ImageFactory factory = ImageFactory.getInstance();

    ...
}

DJL also provides a model zoo where you can find many models trained on different tasks, datasets, and engines. For now, create a Predictor using the basic SSD object detection model. You can also use the default preprocessing and postprocessing defined within the model zoo to directly create a Predictor. For your own applications, you can define custom processing in a Translator and pass it in when creating a new Predictor:

Criteria<Image, DetectedObjects> criteria =
    Criteria.builder()
        .setTypes(Image.class, DetectedObjects.class)
        .optArtifactId("ai.djl.mxnet:ssd")
        .build();
predictor = ModelZoo.loadModel(criteria).newPredictor();

Then, you just need to define the FrameVisitor’s process method, which is called to handle each frame, as follows. You convert the Frame into a BufferedImage using the decodeH264Frame method defined within the H264FrameDecoder. You wrap that into an Image using the ImageFactory you created earlier. Then, you use your Predictor to run prediction using the SSD model. See the following code:

@Override
public void process(
        Frame frame,
        MkvTrackMetadata trackMetadata,
        Optional<FragmentMetadata> fragmentMetadata)
        throws FrameProcessException {
    Image image = factory.fromImage(decodeH264Frame(frame, trackMetadata));
    DetectedObjects prediction = predictor.predict(image);
}

Using the prediction

At this point, you have the detected objects and can use them for whatever your application requires. For a simple application, you could just print out all the class names that you detected to standard out as follows:

String classStr = prediction
    .items()
    .stream()
    .map(Classification::getClassName)
    .collect(Collectors.joining(", "));
System.out.println("Found objects: " + classStr);

You could also find out if there is a high probability that a person was in the image using the following code:

boolean hasPerson = prediction
    .items()
    .stream()
    .anyMatch(c -> "person".equals(c.getClassName())
        && c.getProbability() > 0.5);

Another option is to use the image visualization methods in the Image class to draw the bounding boxes on top of the original image. Then, you can get a visual representation of the detected objects. See the following code:

image.drawBoundingBoxes(prediction);
Path outputFile = Paths.get("out/annotatedImage.png");
try (OutputStream os = Files.newOutputStream(outputFile)) {
    image.save(os, "png");
}

Running the stream

You’re now ready to set up your video stream. For instructions, see Create a Kinesis Video Stream. Make sure to record the REGION and STREAM_NAME that you used so you can pass them into your application.

Then, create a new thread pool to run your application. You also need to build a GetMediaWorker with all the data for your video stream and run it on the thread pool. For your GetMediaWorker, you need to pass in the data you pulled from the Kinesis Video Streams console describing your video stream. You also need to provide the AWS credentials for accessing the stream. Use the SystemPropertiesCredentialsProvider, which finds the credentials in the JVM system properties. You can find more details about providing these credentials in the demo repository. Lastly, you need to pass in StartSelectorType.NOW to start using the stream immediately. See the following code:

ExecutorService executorService = Executors.newFixedThreadPool(1);

AmazonKinesisVideoClientBuilder amazonKinesisVideoBuilder =
    AmazonKinesisVideoClientBuilder.standard();
amazonKinesisVideoBuilder.setRegion(REGION.getName());
amazonKinesisVideoBuilder.setCredentials(new SystemPropertiesCredentialsProvider());
AmazonKinesisVideo amazonKinesisVideo = amazonKinesisVideoBuilder.build();

GetMediaWorker getMediaWorker = GetMediaWorker.create(
    REGION,
    new SystemPropertiesCredentialsProvider(),
    STREAM_NAME,
    new StartSelector().withStartSelectorType(StartSelectorType.NOW),
    amazonKinesisVideo,
    frameVisitor);

executorService.submit(getMediaWorker);

Conclusion

That’s it! You’re ready to begin sending data to your stream and detecting the objects in the video. You can find more information about the Kinesis Video Streams API in the Amazon Kinesis Video Streams Producer SDK Java GitHub repo. The full Kinesis Video Streams DJL demo is available with the rest of the DJL demo applications and integrations with many other AWS and Java tools in the demo repository.

Now that you have integrated Kinesis Video Streams and DJL, you can improve your application in many different ways. You can choose additional object detection and image-based models from the more than 70 pre-trained and ready-to-use models in our model zoo from GluonCV, TorchHub, and Keras. You can run these or custom models across any of the engines supported by DJL, including Tensorflow, PyTorch, MXNet, and ONNX Runtime. DJL even has full training support so you can build your own model to add to your video streaming application instead of relying on a pre-trained one.

Don’t forget to follow our GitHub repo, demo repository, Slack channel, and Twitter for more documentation and examples of DJL!


About the Authors

Zach Kimberg is a Software Engineer with AWS Deep Learning working mainly on Apache MXNet for Java and Scala. Outside of work he enjoys reading, especially Fantasy.

Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.

Source: https://aws.amazon.com/blogs/machine-learning/video-streaming-and-deep-learning-using-amazon-kinesis-video-streams-with-deep-java-library/
