Table of contents
1. Introduction
2. Maximum Likelihood
3. Log-Linear Model
4. Approach
5. Maximum Entropy Likelihood
6. Applications
7. Maximum Entropy Markov Model
8. Shortcomings Of MEMM
9. Frequently Asked Questions
10. Key Takeaways
Last Updated: Mar 27, 2024

Maximum Entropy Model

Author: Mayank Goyal

Introduction

We can view many problems in natural language processing as linguistic classification problems, in which linguistic contexts are used to predict linguistic classes. Maximum entropy models offer a clean way to combine various pieces of contextual evidence to estimate the probability of a particular linguistic class occurring with a specific linguistic context.

Maximum entropy classification is a method that generalizes logistic regression to multiclass problems. The Maximum Entropy model is a type of log-linear model.

If we are given some data and asked to make a decision, we can think of attributes of the data, i.e., features. Some of these features might be more important than others.

We apply a weight to each feature found in the data and compute the weighted sum of all the features. Finally, the weighted sum is normalized to give a fraction between 0 and 1. We can use this fraction as a score of how confident we are in making the decision.
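As a rough illustration, here is a minimal sketch of that computation. The feature values, weights, and function name are made up for this example; the logistic (sigmoid) function is one common way to normalize the weighted sum into a fraction between 0 and 1:

```python
import math

def decision_score(features, weights):
    """Weighted sum of feature values, squashed to a value in (0, 1)."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) normalization

# Hypothetical feature values for one data point and their learned weights.
features = [1.0, 0.0, 1.0]
weights = [0.8, -1.2, 0.5]
print(decision_score(features, weights))  # ~0.79: fairly confident "yes"
```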

Maximum Likelihood

The principle of Maximum Likelihood states that we have to find the parameter values w that model the input data x with the maximum probability. The aim is to find the weight parameters that maximize the likelihood of the training data.

Assume we have a random sample with a training set of n examples. We assume the input values are independent, so the likelihood f(x, w) is the product of the probabilities of the individual inputs: f(x, w) = f(x1, w) · f(x2, w) · … · f(xn, w).

As in maximum likelihood, the conditional-likelihood view says we choose the parameter estimate w_hat that maximizes the product of f(yi | xi, w) over all training examples: w_hat = argmax_w ∏i f(yi | xi, w).

We assume the yi are conditionally independent given the xi. To find w_hat, we can use gradient-based methods such as gradient descent. Gradient descent involves:

  • Starting with random weights w.
  • Looping through the training dataset.
  • Calculating the gradient.

At each iteration, we update the weights w by moving some distance in the direction of the gradient. We repeat this until the process converges or reaches the iteration limit.
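A minimal sketch of this loop for binary logistic regression (a two-class maximum entropy model) might look like the following. The function name fit_logistic, the learning rate, and the iteration count are illustrative choices, not part of the original article; since we maximize the log-likelihood, the weights move in the direction of (ascend) the gradient:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=1000):
    """Gradient ascent on the log-likelihood of logistic regression.

    X: (n, d) array of inputs; y: (n,) array of 0/1 labels.
    Returns the weight vector w_hat that (locally) maximizes the likelihood.
    """
    n, d = X.shape
    w = np.random.randn(d) * 0.01          # start with small random weights
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # P(y=1 | x, w) for each example
        grad = X.T @ (y - p)               # gradient of the log-likelihood
        w += lr * grad / n                 # move in the gradient's direction
    return w
```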

Log-Linear Model

The log-linear model uses a linear combination of features and weights to find the predicted label with maximum log-likelihood. The log-likelihood is the logarithm of the likelihood function. Since the logarithm is a monotonically increasing function, maximizing the log-likelihood also maximizes the likelihood.

We describe the probability as:

P(y | x; w) = exp(Σj wj fj(x, y)) / Σy′ exp(Σj wj fj(x, y′))

Each feature function fj(x, y) accounts for relations between the data and the labels: it expresses some characteristic of the data point and returns 1 or 0 depending on the presence or absence of that characteristic. Each weight wj captures how closely a given feature is related to a given label. In the training process, the wj are initialized randomly; the training process then learns the weights through gradient descent with some optimization method.
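A sketch of this probability computation, assuming binary feature functions passed in as plain Python callables. The names p_label and feats, and the toy "great"/POS feature, are illustrative, not from the original article:

```python
import math

def p_label(x, y, labels, feats, w):
    """P(y | x; w) under a log-linear model with binary feature functions.

    feats: list of functions f_j(x, y) -> 0 or 1; w: matching weights.
    """
    def score(label):
        return sum(wj * fj(x, label) for wj, fj in zip(w, feats))
    z = sum(math.exp(score(yp)) for yp in labels)  # normalizer over all labels
    return math.exp(score(y)) / z

# Toy feature: fires when the word "great" appears and the label is POS.
feats = [lambda x, y: 1 if "great" in x and y == "POS" else 0]
w = [1.5]
print(p_label("a great movie", "POS", ["POS", "NEG"], feats, w))  # ~0.82
```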

Approach

In the training phase, we have to find weight w. Let us start with the log-likelihood function:

This function L(w) measures how well w explains the labeled data. The higher value of P(y|x; w) greater is the value of L(w). The maximum-likelihood function uses the argmax function to find the best values for the parameter w:

The process involves iterating through the training data for many iterations.

  1. Initialize w to some random values.
  2. Iterate through each input. During each iteration, update the weights by finding the derivative of L(w) with respect to wj.
  3. Update the weight vector as wj ← wj + α · ∂L(w)/∂wj (where α is the learning rate) and repeat until convergence.
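For the log-linear model above, the derivative of L(w) with respect to wj works out to the observed feature count minus the model's expected feature count. A sketch of that gradient, reusing the p_label function from the earlier snippet (the name grad_Lw and the data layout are illustrative):

```python
def grad_Lw(data, labels, feats, w):
    """∂L(w)/∂w_j for the log-linear model: observed minus expected counts."""
    grad = [0.0] * len(w)
    for x, y in data:  # (input, gold label) pairs
        probs = {yp: p_label(x, yp, labels, feats, w) for yp in labels}
        for j, fj in enumerate(feats):
            observed = fj(x, y)
            expected = sum(probs[yp] * fj(x, yp) for yp in labels)
            grad[j] += observed - expected
    return grad
```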

Maximum Entropy Likelihood

The maximum entropy model is a log-linear model. MaxEnt handles multinomial distributions. The maximum entropy principle states that we should model the given data with the distribution of highest entropy among all those that satisfy the constraints imposed by our prior knowledge.

To find the probability of each class, Maximum Entropy is defined as:

P(y | x; w) = exp(Σj wj fj(x, y)) / Σy′ exp(Σj wj fj(x, y′))
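In practice, this is what scikit-learn's LogisticRegression computes in its multinomial setting. A small sketch with a made-up toy dataset (depending on your scikit-learn version, the multinomial behavior may already be the default and the flag unnecessary):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set for illustration only.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = ["pos", "neg", "pos", "neg"]

# multi_class="multinomial" makes LogisticRegression a MaxEnt classifier.
clf = make_pipeline(CountVectorizer(),
                    LogisticRegression(multi_class="multinomial"))
clf.fit(texts, labels)
print(clf.predict_proba(["great plot"]))  # per-class probabilities
```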

Applications

MaxEnt classification is a classical machine learning task and solves problems beyond natural language processing. Here are a few:

  • Sentiment analysis (e.g., given a product review, determine what the reviewer likes and dislikes about the product).
  • Preferences (e.g., given a person's demographics, who will that person vote for? Would they prefer Superman, Batman, or the Teenage Mutant Ninja Turtles?).
  • Diagnosis (e.g., given the characteristics of several medical images and the patient's history, what medical condition is the person at risk of having?).

Maximum Entropy Markov Model

There are many systems where there is a time or state dependency. These systems evolve through a sequence of states, and past states influence the current state. For example, stock prices, DNA sequencing, human speech, or words in a sentence.

The Maximum Entropy Markov Model makes use of state-time dependencies, i.e., it uses predictions from the past along with the current observation to make the current prediction.

In image analysis, for example, we're required to classify an object into one of many classes, so we estimate a probability for each class. Rather than making a hard decision on one of the outcomes, it's better to output probabilities, which benefits downstream tasks.

Multinomial logistic regression is also called softmax regression or the Maximum Entropy (MaxEnt) classifier. Entropy is related to disorder: the higher the disorder, the less predictable the outcomes, and hence the more information. For example, an unbiased coin has more information (and entropy) than one that mostly lands heads. MaxEnt is about picking the probability distribution that maximizes the entropy.
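A quick worked example of the coin comparison, computing Shannon entropy in bits:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(p) = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: unbiased coin, least predictable
print(entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin carries less information
```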

Then there's the Markov chain. It models a system as a set of states with probabilities assigned to state transitions. While MaxEnt computes probabilities for each input independently, a Markov chain recognizes a dependency from one state to the next. Thus, MEMM combines maximizing entropy with the use of state dependencies (the Markov model).

The MEMM models dependencies between each state and the full observation sequence explicitly. MEMM has only one transition probability matrix: it maps the pairs of previous state y(i−1) and current observation x(i) seen in the training data to the current state y(i). Our goal is to find P(y1, y2, …, yn | x1, x2, …, xn). By the chain rule, this is given by:

P(y1, …, yn | x1, …, xn) = ∏i P(yi | y1, …, yi−1, x1, …, xn)

As in an HMM, we assume each state depends only on the previous state, so we can limit the conditioning of y(i) to y(i−1). This is the Markov independence assumption.

So MEMM defines each factor using the log-linear model:

P(yi | yi−1, xi) = exp(Σj wj fj(yi−1, xi, yi)) / Σy′ exp(Σj wj fj(yi−1, xi, y′))
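Putting the pieces together, here is a sketch of MEMM prediction with greedy decoding. Real systems typically use Viterbi decoding instead; the names memm_step and greedy_decode and the "<s>" start symbol are illustrative:

```python
import math

def memm_step(prev_y, x, labels, feats, w):
    """P(y_i | y_{i-1}, x_i): a log-linear distribution over the next state."""
    def score(y):
        return sum(wj * fj(prev_y, x, y) for wj, fj in zip(w, feats))
    z = sum(math.exp(score(y)) for y in labels)  # per-state normalizer
    return {y: math.exp(score(y)) / z for y in labels}

def greedy_decode(xs, labels, feats, w, start="<s>"):
    """Tag a sequence by picking the most probable state at each step."""
    ys, prev = [], start
    for x in xs:
        probs = memm_step(prev, x, labels, feats, w)
        prev = max(probs, key=probs.get)  # greedy choice; Viterbi is exact
        ys.append(prev)
    return ys
```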

Shortcomings Of MEMM

MEMM suffers from what's called the label bias problem. Once we're in a state (label), the next observation selects one of the transitions leaving that state; the model as a whole, however, has many more transitions. If a state has only one outgoing transition, the observation has no influence at all. Simply put, transition scores are normalized on a per-state basis, so states with few outgoing transitions are unfairly favored.

Frequently Asked Questions

Q1. What is the condition for maximum entropy?

Ans. The principle of maximum entropy states that the probability distribution that best represents the current state of knowledge about a system is the one with the largest entropy, in the context of precisely stated prior data.

Q2. Is maximum entropy possible?

Ans. The maximum entropy principle (MaxEnt) states that the most appropriate distribution to model a given set of data is the one with the highest entropy among all those that satisfy our prior knowledge's constraints.

Q3. Which distribution has maximum entropy?

Ans. Among all distributions with a known mean and variance, the normal distribution is the one with maximum entropy.

Key Takeaways

Let us briefly summarize the article.

Firstly, we saw the meaning of the maximum entropy model. Next, we covered its building blocks: maximum likelihood and the log-linear model. We then looked at the approach used to train the maximum entropy model and at its applications. Lastly, we saw how the idea extends to sequences in the maximum entropy Markov model, and some of its shortcomings. That's the end of the article.

I hope you all like it.

Happy Learning, Ninjas!
