Introduction
The Deep Learning AMI (DLAMI) is a one-stop shop for deep learning in the cloud. This customized machine image is available in many Amazon EC2 Regions for a variety of instance types, from a CPU-only instance up to the latest high-powered multi-GPU instances.
An Amazon Machine Image (AMI) provides the information required to create a virtual server in the cloud, known as an instance. You specify an AMI when you launch an instance, and you can run as many instances as you need from that AMI. You can also launch instances from as many different AMIs as you need.
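To make the AMI-to-instance relationship concrete, here is a minimal sketch of the parameters you would pass to the EC2 `run_instances` API (for example via boto3). The AMI ID and instance type below are placeholders, not real values:

```python
# Sketch: build the keyword arguments for EC2's run_instances call, which
# launches one or more instances from a single AMI. The AMI ID and instance
# type are illustrative placeholders.

def build_launch_request(image_id, instance_type, count=1):
    """Build the kwargs for ec2_client.run_instances()."""
    return {
        "ImageId": image_id,      # the AMI to launch from
        "InstanceType": instance_type,
        "MinCount": count,        # launch exactly `count` instances
        "MaxCount": count,
    }

# With boto3 installed and AWS credentials configured, this request would be
# submitted as: boto3.client("ec2").run_instances(**request)
request = build_launch_request("ami-0123456789abcdef0", "p3.2xlarge", count=2)
print(request["ImageId"], request["MaxCount"])
```

The same AMI can back any number of such requests, and separate requests can each name a different AMI.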
Why should you use DLAMI?
You can train neural networks in two ways: on the CPU or on the GPU. All modern frameworks support GPU training because it typically delivers much better cost efficiency than training on the CPU.
Some criteria need to be met in order to leverage these GPU advantages:
- Access to the GPU is required.
- The GPU drivers must be set up correctly.
- Training neural networks requires libraries that can exploit the full power of the GPU. These libraries must be compatible with the hardware and drivers from the previous two items.
- To use a neural network, you need a framework that has been compiled against those libraries.
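On a running instance, the first two criteria can be checked with NVIDIA's `nvidia-smi` tool. A minimal sketch, assuming the standard `--query-gpu`/`--format` flags (the sample output below is illustrative, not captured from a real instance):

```python
# Sketch: verify GPU access and a working driver by querying nvidia-smi.
# `nvidia-smi --query-gpu=name --format=csv,noheader` prints one GPU name
# per line; an error or empty output means no driver or no visible GPU.
import subprocess

def parse_gpu_names(output):
    """Turn nvidia-smi's CSV output into a list of GPU names."""
    return [line.strip() for line in output.splitlines() if line.strip()]

def list_gpus():
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return []  # driver missing or no GPU attached
    return parse_gpu_names(out)

# Illustrative sample output from a multi-GPU instance:
sample = "Tesla V100-SXM2-16GB\nTesla V100-SXM2-16GB\n"
print(parse_gpu_names(sample))
```

An empty result from `list_gpus()` indicates the first two criteria are not met; checking the libraries and framework build (the last two criteria) is framework-specific.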
To use a GPU, you either download the framework's source code along with the libraries and build everything yourself, or download a pre-built version of the framework with GPU support and then install the required libraries. Either way there is one significant drawback: both approaches demand technical knowledge from users before they can get started. This is one reason GPU-enabled neural network frameworks are not more widely used.
DLAMI is the first solution that includes everything that is needed out of the box:
- Drivers for the latest NVIDIA GPUs.
- The latest libraries (CUDA and cuDNN).
- Frameworks pre-built with GPU support.
Here is a list of the frameworks that are already integrated with the DLAMI and are ready to use: MXNet, Caffe, Caffe2, TensorFlow, Theano, CNTK, Torch, Keras.
What is Jupyter Notebook?
Jupyter Notebook is an open-source, interactive web application that allows users to create and share documents containing code, interactive calculations, images, and more. Data, code, and visualizations can all be gathered in a single notebook, where users can build interactive stories that can be edited and shared.
Jupyter notebooks are widely used, well documented, and offer an easy-to-use interface for creating, editing, and running them. The notebook runs as a web application, the "dashboard," where users can open files and run code snippets, with output displayed neatly in the browser. The other component of the notebook is the kernel, which acts as the "computational engine" during notebook execution, much like a back-end application or web server. Python code is executed in a Jupyter notebook by the IPython kernel (Jupyter was previously called IPython Notebook); other kernels are available for other languages.
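To make the dashboard/kernel split concrete, here is a toy sketch of what a kernel does at its core: it receives code as text, executes it, and returns the output. The real IPython kernel communicates over a ZeroMQ-based messaging protocol and keeps state per session; this sketch omits all of that and is purely illustrative:

```python
# Toy "kernel": receive a code string, execute it, and capture what it
# printed -- loosely mirroring the execute-request/execute-reply exchange
# a real Jupyter kernel performs. No protocol, no isolation; illustration only.
import io
import contextlib

def execute_request(code, namespace):
    """Run `code` in `namespace` and return whatever it printed."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)  # a real kernel sandboxes and messages instead
    return buffer.getvalue()

# The shared namespace is what makes variables persist between cells:
ns = {}
execute_request("x = 2 + 2", ns)
print(execute_request("print(x)", ns), end="")
```

The shared `namespace` dictionary is the analogue of the kernel's session state: a variable defined in one "cell" is visible in the next.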
Set up a Jupyter Notebook Server
A Jupyter notebook server lets you create and run Jupyter notebooks directly from a DLAMI instance. With Jupyter notebooks, you can use the AWS infrastructure and the AWS packages within the DLAMI to conduct ML experiments for training and inference.
To set up a Jupyter notebook server, you must:
- Configure the Jupyter notebook server on your Amazon EC2 DLAMI instance.
- Configure your client so that you can connect to the Jupyter notebook server. We provide configuration instructions for Windows, macOS, and Linux clients.
- Test the setup by logging in to the Jupyter notebook server.
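A common way to perform the client-configuration step is an SSH tunnel that forwards a local port to the notebook server's port on the instance, so the browser can open `http://localhost:8888`. A sketch of building that command (the key path, user, and host are placeholders; `-i`, `-N`, and `-L` are standard OpenSSH options):

```python
# Sketch: build the OpenSSH command that forwards local port 8888 to the
# Jupyter server running on the instance. Key path, user, and host below
# are placeholders for your own instance's details.
def ssh_tunnel_command(key_path, user, host, local_port=8888, remote_port=8888):
    return [
        "ssh",
        "-i", key_path,  # the key pair used to launch the instance
        "-N",            # no remote command; tunnel only
        "-L", f"{local_port}:localhost:{remote_port}",  # port forwarding
        f"{user}@{host}",
    ]

cmd = ssh_tunnel_command(
    "~/mykey.pem", "ubuntu", "ec2-203-0-113-25.compute-1.amazonaws.com"
)
print(" ".join(cmd))
```

With the tunnel running, the test step is simply opening `http://localhost:8888` in the browser and logging in to the Jupyter server.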
As soon as you have the Jupyter server running, you can run the tutorials in your web browser. If you use the Deep Learning AMI with Conda or have set up Python environments, you can change the Python kernel within Jupyter notebooks. Choose the relevant kernel before trying to run framework-specific tutorials.