Table of contents
Understanding Racing Types and Enabling Sensors Supported by AWS DeepRacer
Choose Sensors for AWS DeepRacer Racing Types
Front-facing Camera
Front-facing stereo camera
LiDAR sensor
Configure Agent for Training AWS DeepRacer Models
Tailor AWS DeepRacer Training for Time Trials
Tailor AWS DeepRacer Training for Head-to-Head Races
Frequently asked questions
What is DeepRacer AWS?
What is DeepRacer's learning rate?
How does DeepRacer work?
What is an AWS DeepRacer student?
Which AWS services work behind the scenes to build AWS DeepRacer?
Last Updated: Mar 27, 2024

AWS DeepRacer Part-2

Ashwin Goyal
Product Manager @


AWS DeepRacer is an integrated learning system that allows users of all levels to learn and experiment with reinforcement learning and construct autonomous driving apps. It is made up of the following elements:

AWS DeepRacer Console is a machine learning tool that allows you to train and test reinforcement learning models in a simulated autonomous driving environment.

The AWS DeepRacer League is the world's first global autonomous racing league. Races offer prizes, glory, and a chance to advance to the Championship Cup.

Understanding Racing Types and Enabling Sensors Supported by AWS DeepRacer

You can compete in the following types of racing events in the AWS DeepRacer League:

  • Time trial: race against the clock on an unobstructed track to complete a lap in the fastest time.
  • Object avoidance: race against the clock on a track with stationary objects to achieve the fastest lap time.
  • Head-to-head racing: race against one or more other vehicles on the same track to cross the finish line first.

Currently, only time trials are supported in AWS DeepRacer community races.

To give your AWS DeepRacer car the capabilities it needs to detect its surroundings for a specific race type, you should experiment with different sensors. The following sections describe the AWS DeepRacer-supported sensors that enable the supported types of autonomous racing events.


Choose Sensors for AWS DeepRacer Racing Types

The default sensor on your AWS DeepRacer car is a front-facing monocular camera. You can add a second front-facing monocular camera to form a front-facing stereo camera, or augment either the monocular or stereo setup with a LiDAR unit.

The functional capabilities of AWS DeepRacer-supported sensors are summarised here, along with brief cost-benefit analyses:

Front-facing Camera

A single-lens front-facing camera can capture images of the environment in front of the host vehicle, including track boundaries and shapes. It is the least expensive sensor and is suitable for simpler autonomous driving tasks, such as obstacle-free time trials on well-marked tracks. With adequate training, it can avoid stationary obstacles at fixed locations on the track. However, because the obstacle-location information is baked into the trained model, the model is unlikely to converge when stationary objects are placed at random locations or when other moving vehicles are on the track.

In the real world, the default sensor on the AWS DeepRacer car is a single-lens front-facing camera. The camera has a 120-degree wide-angle lens and captures RGB images at 15 frames per second (fps), which are then converted into greyscale images of 160 x 120 pixels. These sensor properties are preserved in the simulator to increase the likelihood that the trained model transfers well from simulation to the real world.
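As a rough illustration of the greyscale conversion step described above, here is a minimal sketch. It assumes frames already at the camera's 160 x 120 output resolution and uses the common Rec. 601 luminosity weights, which are an assumption, not necessarily what the DeepRacer software uses:

```python
def to_greyscale(rgb_frame):
    """Convert an RGB frame (rows of (r, g, b) tuples) to greyscale
    using the standard Rec. 601 luminosity weights (an assumption;
    the actual DeepRacer conversion is not documented here)."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_frame]

# A tiny 1x2 frame: one black pixel, one pure-red pixel.
print(to_greyscale([[(0, 0, 0), (255, 0, 0)]]))  # [[0, 76]]
```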

Front-facing stereo camera

A stereo camera uses two or more lenses to capture images at the same resolution and frequency. The depth of observed objects is determined from the images captured by both lenses. The depth information provided by a stereo camera is critical for the host car to avoid colliding with obstacles or other vehicles ahead, especially in more dynamic environments. However, the added depth information causes training to converge more slowly.

The double-lens stereo camera on the AWS DeepRacer physical car is built by adding a second single-lens camera and mounting one camera on each of the left and right sides of the vehicle. The AWS DeepRacer software synchronizes the image captures from both cameras. The captured images are converted to greyscale, stacked, and then fed into a neural network for inference. The same process is reproduced in the simulator so that the trained model generalizes well to real-world situations.
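The stacking of the two synchronized greyscale captures can be sketched as follows. This is an illustration of the idea, not the actual AWS DeepRacer pipeline:

```python
def stack_stereo(left, right):
    """Stack two same-size greyscale frames into a single two-channel
    input: each pixel becomes a (left, right) pair, mirroring how the
    synchronized captures are layered before inference (illustrative
    sketch, not DeepRacer's actual code)."""
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    return [[(lpx, rpx) for lpx, rpx in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

stacked = stack_stereo([[10, 20]], [[30, 40]])
print(stacked)  # [[(10, 30), (20, 40)]]
```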

LiDAR sensor

A LiDAR sensor uses spinning lasers to send out pulses of light beyond the visible spectrum and measures the time each pulse takes to return. The direction and distance to whatever a given pulse hits are recorded as a point on a large 3D map centered on the LiDAR unit.

LiDAR, for example, helps detect blind spots around the host vehicle to avoid crashes when the car changes lanes. By combining LiDAR with mono or stereo cameras, you give the host vehicle enough information to take appropriate actions. However, a LiDAR sensor is more expensive than a camera, and the neural network must learn to interpret the LiDAR data, so training takes longer.

On the AWS DeepRacer physical car, a LiDAR sensor is mounted on the back and tilted down by 6 degrees. It has a range of 15 cm to 2 m and rotates at 10 revolutions per second. It can detect objects behind and beside the host vehicle, as well as tall objects not obscured by the front of the vehicle. The unit's angle and range are chosen to make it less vulnerable to environmental noise.
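The way LiDAR returns become points on a map centered on the unit can be sketched like this. It is a simplified 2D illustration using the stated 15 cm to 2 m range, not the DeepRacer software's actual processing:

```python
import math

MIN_RANGE_M, MAX_RANGE_M = 0.15, 2.0  # the stated 15 cm to 2 m range

def lidar_to_points(samples):
    """Convert LiDAR returns, given as (angle_deg, distance_m) pairs,
    into (x, y) points centered on the unit. Returns outside the usable
    range are dropped, as if the pulse produced no usable echo.
    Simplified 2D sketch for illustration only."""
    points = []
    for angle_deg, distance in samples:
        if MIN_RANGE_M <= distance <= MAX_RANGE_M:
            rad = math.radians(angle_deg)
            points.append((distance * math.cos(rad),
                           distance * math.sin(rad)))
    return points

# One in-range return straight ahead, one out-of-range return dropped:
points = lidar_to_points([(0.0, 1.0), (90.0, 5.0)])
print(len(points))  # 1
```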

The following combination of approved sensors can be used to design your AWS DeepRacer vehicle:

  • Front-facing single-lens camera only: ideal for time trials and for obstacle avoidance with objects at fixed locations.
  • Front-facing stereo camera only: helps avoid obstacles at fixed or random locations.
  • Front-facing single-lens camera with LiDAR: suited to obstacle avoidance and head-to-head racing.
  • Front-facing stereo camera with LiDAR: excellent for obstacle avoidance and head-to-head racing, but not the most cost-effective choice for time trials.

Adding more sensors to your AWS DeepRacer vehicle captures more data about the surroundings to feed into the underlying neural network during training, allowing it to progress from time trials to obstacle avoidance to head-to-head racing. This also makes training harder, because the model must cope with the additional complexity. In short, the training task becomes more involved.

To learn more quickly, begin with training for time trials, then move on to object avoidance, and finally to head-to-head racing. More detailed recommendations follow in the next sections.

Configure Agent for Training AWS DeepRacer Models

You must equip the agent with the necessary sensors to train a reinforcement learning model for an AWS DeepRacer car to race in obstacle avoidance or head-to-head racing. For simple time trials, you can use the default agent with a single-lens camera. When configuring the agent, you can modify the action space and choose a neural network architecture so that the agent works better with the selected sensors and meets the required driving criteria. You can also customize the agent's appearance to aid visual identification during training.

When you configure the agent, the configuration is saved as part of the model's metadata for training and evaluation. For evaluation, the recorded configuration is retrieved automatically, and the agent uses the specified sensors, action space, and neural network.

The methods to configure an agent in the AWS DeepRacer console are outlined in this section.

To configure an AWS DeepRacer agent in the AWS DeepRacer console:

  1. Log in to the DeepRacer console on AWS.
  2. Choose Garage from the main navigation pane.
  3. The WELCOME TO THE GARAGE dialog appears when you open Garage for the first time. Choose > to step through an introduction to the sensors supported by the AWS DeepRacer vehicle, or choose X to close the dialog. You can find this introductory information later in Garage's help panel.
  4. Choose Build a new car from the Garage page.
  5. On the Mod your own car page, under Mod requirements, choose one or more sensors to find the optimal combination for your desired racing types.
  6. Choose Camera to prepare your AWS DeepRacer car for time trials. For obstacle avoidance or head-to-head racing, use the other sensor types. To use a stereo camera, make sure you have a second single-lens camera; AWS DeepRacer builds the stereo camera from the two single-lens cameras. With a single-lens camera alone, overfitting is probable, and the model may not generalize to different obstacle positions. In either case, you can add a LiDAR sensor to the agent if you want the trained model to detect and avoid blind spots in obstacle avoidance or head-to-head racing.
  7. Choose a supported network topology on the Garage page under Neural network topologies.
  8. A single-lens camera, a pair of single-lens cameras acting as a stereo camera, or either combined with LiDAR can feed a shallower or deeper network. A deeper neural network (with more layers) is needed for more challenging tracks with sharp curves and twists, or for racing against stationary objects and other moving vehicles, but it is more expensive to train. A shallower network (with fewer layers) costs less and trains faster, and can handle simpler track conditions or driving criteria, such as time trials on an obstacle-free track without opponents.
  9. AWS DeepRacer, in particular, supports 3-layer CNN and 5-layer CNN.
  10. Choose Next on the Garage page to begin configuring the agent's action space.
  11. For your initial training, leave the default settings on the Action space page. In later experiments, try different values for the steering angle, top speed, and their granularities. Then choose Next.
  12. Enter a name in Name your DeepRacer on the Color your vehicle to stand out in the crowd tab, and then select a color for the agent from the Vehicle color selection. Then choose Submit.
  13. Examine the new agent's options on the Garage page. Choose Mod vehicle and go back to Step 4 of the previous stages to make further changes.

Now, your agent is ready for training.
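The action space configured in step 11 is a discrete set of (steering angle, speed) pairs. Here is a sketch of how granularity settings expand into individual actions; the default values below are illustrative, not the console's exact numbers:

```python
from itertools import product

def build_action_space(max_steering_deg=30.0, steering_levels=5,
                       max_speed=1.0, speed_levels=2):
    """Enumerate a discrete action space as the cross product of
    steering angles (spread symmetrically around 0) and speeds, the
    way the console's steering and speed granularity settings do.
    Default values are illustrative, not the console's exact numbers."""
    if steering_levels > 1:
        step = 2 * max_steering_deg / (steering_levels - 1)
        steering = [-max_steering_deg + i * step
                    for i in range(steering_levels)]
    else:
        steering = [0.0]
    speeds = [max_speed * (i + 1) / speed_levels
              for i in range(speed_levels)]
    return [{"steering_angle": s, "speed": v}
            for s, v in product(steering, speeds)]

actions = build_action_space()
print(len(actions))  # 5 steering levels x 2 speed levels = 10 actions
```

Raising the granularity grows the action list multiplicatively, which is why finer-grained action spaces give the agent more options but take longer to train.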

Tailor AWS DeepRacer Training for Time Trials

If this is your first time using AWS DeepRacer, begin with a basic time trial to familiarise yourself with how to train AWS DeepRacer models to drive your car. You'll get a gentler introduction to core concepts such as the reward function, the agent, and the environment. Your objective is to teach a model to keep the car on the track and complete a lap quickly. The trained model can then be deployed to your AWS DeepRacer vehicle for physical testing without any extra sensors.
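To make the reward-function concept concrete, here is a minimal centerline-style reward in the shape the console expects. The parameter keys used (all_wheels_on_track, track_width, distance_from_center, speed) are documented DeepRacer inputs, but the band widths and speed weight are illustrative choices, not official values:

```python
def reward_function(params):
    """A minimal AWS DeepRacer reward sketch: favor staying near the
    centerline, with a small bonus for speed. Band widths and weights
    are illustrative choices, not official values."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward once the car leaves the track

    track_width = params["track_width"]
    distance = params["distance_from_center"]

    # Reward three bands around the centerline progressively less.
    if distance <= 0.1 * track_width:
        reward = 1.0
    elif distance <= 0.25 * track_width:
        reward = 0.5
    elif distance <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3

    # Small incentive to carry speed (m/s) around the lap.
    return float(reward + 0.05 * params["speed"])
```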

To train a model for this scenario, pick the default agent from Garage in the AWS DeepRacer console. The default agent comes set up with a single front-facing camera, a default action space, and a default neural network architecture. Starting with the default agent is a good idea before moving on to more advanced AWS DeepRacer models.

Follow the steps below to train your model using the default agent.

  1. Begin by training your model on a simple track with smoother curves and fewer abrupt bends. Use the default reward function, and train the model for 30 minutes. When the training job is done, evaluate your model on the same track to see whether the agent can complete a lap.
  2. Learn about the reward function parameters. Continue training with different reward functions that encourage the agent to move faster. Increase the training time for the next model to 1 to 2 hours. Compare the reward graphs from the first and second training jobs. Keep experimenting until the reward graph no longer improves.
  3. Learn more about the action space. For the third iteration, increase the model's maximum speed (for example, to 1 m/s). To change the action space, you must build a new agent in Garage. When raising your agent's top speed, bear in mind that a higher maximum speed lets the agent complete the evaluation track faster and lets your physical AWS DeepRacer car achieve faster laps. However, a higher top speed often means longer training before convergence, because the agent is more likely to overshoot a curve and veer off track.
  4. Adjust the granularity of the action space to give the agent more freedom to accelerate or decelerate, and tweak the reward function in various ways to help training converge faster. Evaluate the third model once training converges to see whether the lap time improves. Continue experimenting until there is no further progress.
  5. Repeat Steps 1 through 3 on a more challenging track. Evaluate your model on a track different from the one you trained on to check whether it generalizes to other virtual tracks and to real-world settings.
  6. (Optional) Experiment with alternative hyperparameter values to optimize the training process, then repeat Steps 1–3.
  7. (Optional) Examine and analyze the AWS DeepRacer logs.

Tailor AWS DeepRacer Training for Object Avoidance Races

After mastering time trials and training a few convergent models, move on to the next, more difficult challenge: obstacle avoidance. Your objective is to train a model that can complete a lap as quickly as possible while avoiding collisions with the objects placed on the track. This is a harder problem for the agent to learn, and training takes longer.

In the AWS DeepRacer console, obstacles can be placed at fixed or random locations along the track, giving two forms of obstacle avoidance training. With fixed locations, the obstacles stay in the same spots throughout the training job. With random locations, the obstacles move to different spots from episode to episode.

Training converges more easily for fixed-location obstacle avoidance because the system has fewer degrees of freedom. However, models can overfit when the location information is baked in, and it then takes longer for the model to generalize. Training for randomly placed obstacles is harder to converge, because the agent must keep learning to avoid crashing into obstacles in places it has not seen before; on the other hand, models trained with this option tend to generalize and transfer well to real-world races. Start with obstacles at fixed locations, familiarise yourself with the agent's behavior, and then move on to random locations.
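The difference between the two placement options can be sketched as follows. This is a hypothetical helper for illustration, not the simulator's actual logic:

```python
import random

def place_obstacles(n, track_length, randomize, seed=None):
    """Return n obstacle positions as distances along the track.
    Fixed placement spaces them evenly and is identical every episode;
    random placement re-samples positions each episode. Illustrative
    sketch of the two console options, not simulator code."""
    if randomize:
        rng = random.Random(seed)
        return sorted(rng.uniform(0, track_length) for _ in range(n))
    return [track_length * (i + 1) / (n + 1) for i in range(n)]

# Fixed placement is deterministic and evenly spaced:
print(place_obstacles(3, 20.0, randomize=False))  # [5.0, 10.0, 15.0]
```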

Obstacles in the AWS DeepRacer simulator are cuboid boxes with the same dimensions as the AWS DeepRacer vehicle's packaging box (9.5" (L) x 15.25" (W) x 10.5" (H)). Placing the packaging box as an obstacle on the physical track makes things easier for the trained models, although the models may still overfit and fail to generalize.

Follow the recommended practices described in the steps below to experiment with obstacle avoidance:

  1. Use the default agent, customize an existing agent, or build a new one to experiment with new sensors and action spaces. Keep the top speed below 0.8 m/s and the speed granularity to one or two levels.
  2. Begin by training a model for about three hours with two objects at fixed locations. Using the example reward function, train the model on the track you'll be racing on, or on a track that closely resembles it. The 2019 Championship Cup track has a simple layout that is well suited to this training. Evaluate the model on the same track with the same number of obstacles, and watch whether the total expected reward converges.
  3. Learn about the reward function parameters and experiment with different reward functions. Increase the number of obstacles to four and train the agent to see whether training converges in the same amount of time. If it doesn't, adjust your reward function, lower the top speed, or reduce the number of obstacles, then retrain the agent. Keep experimenting until you see no further substantial improvement.
  4. Now move on to avoiding obstacles at random locations. You must configure the agent with additional sensors, available in Garage in the AWS DeepRacer console. You can use a stereo camera, or combine a LiDAR unit with a single-lens or stereo camera, although training will take longer. Keep the top speed of the action space low (for example, 2 m/s) to help training converge faster. For the network architecture, a shallower network (such as the 3-layer CNN) has been found sufficient for obstacle avoidance.
  5. Begin training the new agent for obstacle avoidance with four randomly placed objects on a simple track for four hours. Then evaluate your model on the same track to determine whether it can complete laps with randomly placed obstacles. If not, you may want to change your reward function, experiment with alternative sensors, or extend the training duration. You can also clone an existing model to continue training while leveraging previously acquired knowledge.
  6. (Optional) Increase the top speed of the action space or add more randomly placed obstacles to the track. Experiment with different sensor combinations, reward functions, and hyperparameter settings, and try the 5-layer CNN architecture. Then retrain the model to see how these changes affect training convergence.
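The reward-function experiments in step 3 above might look like the following sketch. The parameter keys (x, y, objects_location, track_width, distance_from_center) are documented DeepRacer inputs; the distance thresholds and penalty factors are illustrative assumptions:

```python
import math

def reward_function(params):
    """Obstacle-avoidance reward sketch: start from a simple on-track
    reward and penalize proximity to the nearest object. Thresholds
    and penalty factors are illustrative, not tuned values."""
    on_line = params["distance_from_center"] <= 0.25 * params["track_width"]
    reward = 1.0 if on_line else 0.1

    car_x, car_y = params["x"], params["y"]
    distances = [math.hypot(car_x - ox, car_y - oy)
                 for ox, oy in params["objects_location"]]
    if distances:
        nearest = min(distances)
        if nearest < 0.3:      # dangerously close: heavy penalty
            reward *= 0.1
        elif nearest < 0.8:    # approaching: mild penalty
            reward *= 0.5
    return float(reward)
```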

Tailor AWS DeepRacer Training for Head-to-Head Races

After completing obstacle avoidance training, you're ready for the next level of difficulty: training models for head-to-head races. Unlike obstacle avoidance events, head-to-head racing takes place in a dynamic environment with moving vehicles. Your aim is to train models for your AWS DeepRacer car so that it can race against other moving vehicles and reach the finish line first without going off track or colliding with other cars. In the AWS DeepRacer console, you can train a head-to-head racing model by having your agent compete against 1 to 4 bot vehicles. Generally speaking, a longer track accommodates more bot vehicles.

Each bot vehicle travels at a constant speed along a predetermined path. You can allow it to change lanes or keep it in its starting lane. As with obstacle avoidance training, you can have the bot vehicles distributed evenly across both lanes of the track. The console limits you to four bot vehicles on the track at a time. With more competing vehicles on the track, the learning agent has more opportunities to interact with other vehicles in varied situations, so it learns more in a single training job and trains faster. However, each training run is likely to take longer to converge.

To train an agent against bot vehicles, set the agent's top speed in the action space higher than the bot vehicles' (constant) speed, so that the agent has more passing opportunities during training. A decent starting point is an agent top speed of 0.8 m/s and a bot vehicle speed of 0.4 m/s. When you allow the bots to change lanes, training becomes harder, because the agent must learn to avoid colliding with a moving vehicle in its own lane and, after changing lanes, with a moving vehicle in the other lane. You can have the bots change lanes at random intervals.

Each interval's duration is drawn at random from a time range (for example, 1s to 5s) that you specify before starting the training job. This lane-changing behavior is closer to real-world head-to-head racing, so the trained agent should generalize better. However, the model takes longer to converge.
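The randomized lane-change intervals described above can be sketched as follows (an illustrative sketch, not the simulator's code):

```python
import random

def lane_change_schedule(duration_s, lower_s=1.0, upper_s=5.0, seed=None):
    """Generate the times (in seconds) at which a bot vehicle changes
    lanes: each interval is drawn uniformly from [lower_s, upper_s],
    mirroring the time range you set before starting the training job."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.uniform(lower_s, upper_s)
        if t >= duration_s:
            return times
        times.append(t)

# Lane-change times for a 60-second episode:
schedule = lane_change_schedule(60.0, seed=42)
```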

To refine your training for head-to-head racing, follow these steps:

  1. In the AWS DeepRacer console's Garage, build a new training agent with stereo cameras and a LiDAR unit. It is feasible to train a decent model against bot vehicles using only a stereo camera; LiDAR helps reduce blind spots when the agent changes lanes. Set the top speed to a reasonable level; 1 m/s is a decent starting point.
  2. To prepare for head-to-head racing, start with two bot vehicles. Set the bots' speed lower than your agent's top speed (for example, 0.5 m/s if the agent's top speed is 1 m/s), disable the lane-changing option, choose the newly created training agent, and train for three hours. Use the track you'll be racing on, or one that closely resembles it; the 2019 Championship Cup track has a simple layout that is well suited to this training. Make minimal changes to one of the example reward functions, or use one as-is.
  3. When training completes, evaluate the trained model on the same track. Then clone the trained model to build a second head-to-head model for progressively harder tasks: add more bot vehicles or enable the lane-changing option. Begin with slow lane changes at random intervals of more than 2 seconds. You might also try custom reward functions; if you don't need to balance overtaking other vehicles against staying on track, your reward function logic can be similar to the one used for obstacle avoidance. Depending on how good your previous model was, you may need to train for another 3 to 6 hours. Evaluate your models to see how well they perform.

Frequently asked questions

What is DeepRacer AWS?

The AWS DeepRacer is a self-driving 1/18th-scale race car used to test RL models on a physical track. Using cameras to view the track and a reinforcement learning model to control throttle and steering, the car demonstrates how a model trained in a simulated environment can transfer to the real world.

What is DeepRacer's learning rate?

Of all the hyperparameters, the learning rate is among the most important. The AWS DeepRacer console's default learning rate of 0.0003 is adequate for most cases, but you can increase or decrease it according to your needs.
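To make the effect concrete, here is a toy gradient-descent step showing how the learning rate scales each weight update. This is a generic illustration, not DeepRacer's internal optimizer:

```python
def sgd_step(weight, gradient, learning_rate):
    """One gradient-descent update: the learning rate scales how far
    each update moves the weight. Too large a rate overshoots the
    minimum; too small a rate makes convergence slow."""
    return weight - learning_rate * gradient

# Minimizing f(w) = w**2 (gradient 2w) starting from w = 1.0:
small = sgd_step(1.0, 2.0, 0.0003)  # a small, cautious step toward 0
large = sgd_step(1.0, 2.0, 0.3)     # a large, aggressive step toward 0
```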

How does DeepRacer work?

AWS DeepRacer uses reinforcement learning to provide autonomous driving for the AWS DeepRacer vehicle. You use a virtual environment with a simulated track to train and evaluate a reinforcement learning model. After training completes, you deploy the trained model artifacts to your AWS DeepRacer car.

What is an AWS DeepRacer student?

AWS DeepRacer Student is a free version of AWS DeepRacer aimed at students. It provides learning materials on machine learning, model training in the console, and the AWS DeepRacer Student League, where students can compete for prizes and scholarships.

Which AWS services work behind the scenes to build AWS DeepRacer?

AWS DeepRacer uses Amazon SageMaker and AWS RoboMaker behind the scenes. The integration handles reinforcement-learning-specific details, making training more accessible to novices.


So that's the end of the article AWS DeepRacer Part-2


Do upvote our blogs if you find them helpful and engaging!

Happy Learning!
