Last updated: Mar 14, 2022

Natural Language Processing

Natural Language Processing (NLP) is a broad subfield of artificial intelligence that specialises in understanding and generating human language. NLP applications allow you to process text, analyse it, and produce results that can be used for tasks like translation and comprehension. NLP is not limited to English; it is applicable to a wide range of languages.

Text Preprocessing

Any dataset or text source must be preprocessed before it can be worked on: the raw text is cleaned and prepared so that it can be further analysed and predictions can be made from it. This section discusses operations such as spell correction, smoothing, tokenization, stemming, and so on.
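To make the idea concrete, here is a minimal sketch of two of the preprocessing steps mentioned above, tokenization and stemming, written in plain Python. The regex tokenizer and the suffix-stripping `stem` function are toy stand-ins for real tools (such as NLTK's tokenizers and the Porter stemmer), kept dependency-free for illustration.

```python
import re

def tokenize(text):
    # Toy tokenizer: lowercase the text and pull out runs of
    # alphanumeric characters, discarding punctuation.
    return re.findall(r"[a-z0-9]+", text.lower())

def stem(token):
    # Crude stemmer: strip a few common English suffixes, keeping
    # at least a short base form. A real system would use a proper
    # algorithm such as Porter stemming.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = tokenize("The cats were chasing mice, and the mice were running.")
stems = [stem(t) for t in tokens]
print(tokens)  # ['the', 'cats', 'were', 'chasing', ...]
print(stems)   # ['the', 'cat', 'were', 'chas', ...]
```

Note that a crude suffix-stripper produces non-words like "chas"; that is expected, since stemming only aims at a consistent base form, not a dictionary word.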
N-Gram Modelling
This blog will focus on one of the simplest machine learning models called the N-Gram model and its implementation. Let's begin.
Naive Bayes and Laplace Smoothing EASY
This article explains how Naive Bayes and Laplace Smoothing can be integrated to build a better text classifier and how it will help to tackle the zero probability problem.
Computational Morphology
This blog will discuss the morphological information that helps us capture the data and the different linguistic terms involved.
Tokenization in NLP EASY
In this article, you will understand what Tokenisation in NLP is.
Stemming in NLP EASY
In this article, you will understand what Stemming in NLP is.
Chunking in NLP (Natural Language Processing) EASY
Learn about chunking in NLP (Natural Language Processing): the need for chunking, the types of chunking, and its implementation in Python.
Lemmatization with TextBlob EASY
This article discusses the theoretical knowledge about Lemmatization with TextBlob.
Bag of Words in NLP EASY
This article gives a detailed overview of the Bag of Words in NLP.

Text Classification

Text classification is the process of categorising text, primarily words and sentences, based on the category and sub-category to which they belong. This is useful for analysing a collection of content that falls into a specific category, for example, analysing tweets to understand their sentiment. Learn more about the different classification methods below.
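As a small illustration of the classification idea, here is a sketch of a Naive Bayes sentiment classifier with Laplace (add-one) smoothing, the combination covered in the article above. The four-sentence training corpus is made up for the example; a real classifier would be trained on a proper labelled dataset.

```python
import math
from collections import Counter, defaultdict

# Tiny made-up labelled corpus for illustration only.
train = [
    ("i love this movie", "pos"),
    ("a great and fun film", "pos"),
    ("i hate this movie", "neg"),
    ("a boring and dull film", "neg"),
]

word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()             # how many documents per class
vocab = set()

for text, label in train:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def predict(text):
    # Score each class with log P(c) + sum over words of log P(w|c).
    # Add-one (Laplace) smoothing keeps unseen words from driving a
    # class probability to zero -- the "zero probability problem".
    scores = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("a fun movie"))   # -> pos on this toy data
print(predict("a dull movie"))  # -> neg on this toy data
```

Without smoothing, a single word absent from a class's training data would make that class's probability exactly zero, which is why Laplace smoothing matters even in a sketch this small.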
Text Classification in NLP EASY
In this article, you will learn about NLP and its applications; later, we will see the steps involved in Text Classification along with its implementation.
Maximum Entropy Model
In this article, we will look into the Maximum Entropy (MaxEnt) model and its approach. Later, we will study the Maximum Entropy Markov Model (MEMM).
Conditional Random Fields HARD
In this blog, we will learn about conditional random fields in machine learning and their implementation.
Conditional Random Fields EASY
In this article, we will discuss the concept of Conditional Random Fields (CRF). We will also see the maths behind it, along with its use cases.
Named Entity Recognition
In this article, we will talk about NER and its various applications that can be used to detect named entities in texts.
NER with Keras and TensorFlow EASY
In this article, we will see the implementation of a named entity recognition model using Keras and TensorFlow.

Word Embeddings

Word embedding is a technique that represents words as vectors such that words with similar meanings or contexts lie close together in the vector space. This is very useful for grouping similar words and for further analysing and processing them. Word embeddings can be produced by a variety of models, including Word2Vec, GloVe, and BERT.
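The "words with similar meanings lie close together" idea is usually measured with cosine similarity between vectors. The three-dimensional vectors below are hand-picked for illustration; real embeddings from Word2Vec or GloVe typically have hundreds of dimensions and are learned from large corpora.

```python
import math

# Hand-made toy vectors (not learned embeddings): "king" and "queen"
# are given similar directions, "apple" a different one.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: the cosine of the angle between two vectors,
    # which ignores their lengths and compares only their directions.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(vectors["king"], vectors["queen"]))  # high: similar meaning
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated words
```

The same `cosine` function works unchanged on vectors loaded from a trained Word2Vec or GloVe model, which is what makes embeddings such a convenient representation for downstream grouping and analysis.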
Word Embeddings in NLP EASY
In this article, we will learn how we can use text data as input in machine learning algorithms, what Word Embeddings are, and their use.
Embedding Layers in Keras
The following article will introduce you to embedding layers in Keras and why they play an essential role in NLP.