Table of contents
1. Introduction
2. Ways to Remove Duplicates from the List
   2.1. Using set() Method
   2.2. Using List Comprehension
   2.3. Using List Comprehension with enumerate()
   2.4. Using collections.OrderedDict.fromkeys()
   2.5. Using in, not in Operators
   2.6. Using List Comprehension and list.index() Method
   2.7. Using Counter() Method
   2.8. Using Numpy unique Method
   2.9. Using a Pandas DataFrame
3. When to Use Each Method
4. Frequently Asked Questions
   4.1. What's the fastest way to remove duplicates in Python?
   4.2. How do you remove duplicates in Python without changing order?
   4.3. Which data structure removes duplicates in Python?
   4.4. Are duplicates allowed in set Python?
5. Conclusion
Last Updated: Aug 28, 2025

Removing duplicates in lists

Introduction

Python offers several ways to remove duplicates from a list; the best choice depends on the element types, the size of the list, whether the original order must be preserved, and how efficient the operation needs to be.


This article will explore various methods to remove duplicates from a list in Python, providing clear examples and explanations to ensure you can apply these techniques effectively.

Ways to Remove Duplicates from the List

Using set() Method

Using the built-in set() constructor is the most straightforward way to remove duplicates. Sets are unordered collections of unique elements in Python, so converting a list to a set automatically removes all duplicate items; note that the result's order is arbitrary.

Example:

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = list(set(my_list))
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

The list is converted to a set, which removes duplicates, and then converted back to a list.
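Two caveats are worth checking before reaching for set(): the order of the result is arbitrary, and every element must be hashable. A small sketch of both (the sample values are ours, not from the article):

```python
# Caveat 1: set() discards order, so we sort here only for a stable display.
words = ["banana", "apple", "banana", "cherry"]
unique = sorted(set(words))
print(unique)  # ['apple', 'banana', 'cherry']

# Caveat 2: unhashable elements, such as inner lists, raise a TypeError.
try:
    set([[1, 2], [1, 2]])
except TypeError as exc:
    print("TypeError:", exc)
```

If elements are unhashable or order matters, one of the later methods is a better fit.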

Using List Comprehension

A list comprehension can be pressed into service here, appending each element to a result list only if it is not already present. (Using a comprehension purely for its side effects is generally considered unidiomatic; the plain loop in the "Using in, not in Operators" section expresses the same logic more directly.)

Example:

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = []
[unique_items.append(x) for x in my_list if x not in unique_items]
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

We iterate over my_list and append each item to unique_items only if it's not already included.

Using List Comprehension with enumerate()

This method is a variation of the list comprehension method that also keeps track of the index, which can be useful for more complex operations.

Example:

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = [item for idx, item in enumerate(my_list) if item not in my_list[:idx]]
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

The enumerate() function pairs each element with its index, and my_list[:idx] slices the list up to the current position, so an item is kept only if it has not appeared earlier. Because a new slice is built and scanned on every iteration, this approach is O(n²) and best suited to small lists.

Using collections.OrderedDict.fromkeys()

The OrderedDict from the collections module maintains the order of elements as they were inserted. When used with fromkeys(), it can remove duplicates while preserving the original order.

Example:

from collections import OrderedDict

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = list(OrderedDict.fromkeys(my_list))
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

OrderedDict.fromkeys(my_list) creates an ordered dictionary without duplicates, which is then converted back to a list.
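On Python 3.7 and later, plain dicts also preserve insertion order, so the same idiom works without any import; OrderedDict is mainly needed for older versions. A minimal sketch:

```python
# Since Python 3.7, regular dicts preserve insertion order, so
# dict.fromkeys() deduplicates while keeping first-seen order.
my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = list(dict.fromkeys(my_list))
print(unique_items)  # [1, 2, 3, 4, 5]
```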

Using in, not in Operators

This is a more manual approach, where you create a new list and only add items that are not already present.

Example:

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = []
for item in my_list:
    if item not in unique_items:
        unique_items.append(item)
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

We loop through my_list and use the not in operator to check if an item is in the new list before appending it.
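Because not in rescans the result list on every iteration, this loop is O(n²). A common refinement, sketched below with a helper name of our own choosing (dedupe_ordered), keeps a companion set for O(1) membership tests while the list preserves first-seen order:

```python
def dedupe_ordered(items):
    """Remove duplicates, keeping first occurrences, in O(n) time."""
    seen = set()   # O(1) membership checks (requires hashable items)
    result = []    # preserves first-seen order
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_ordered([1, 2, 2, 3, 4, 4, 5]))  # [1, 2, 3, 4, 5]
```

The trade-off is a little extra memory for the set in exchange for linear running time on large lists.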

Using List Comprehension and list.index() Method

This method uses list comprehension along with the list.index() method to add an element only if its index matches the current index, which means it's the first occurrence.

Example:

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = [item for idx, item in enumerate(my_list) if my_list.index(item) == idx]
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

The list.index() method returns the first index of the element; an item is kept only when its own index matches that first index, i.e. when it is the first occurrence. Since each index() call scans the list from the start, this is also an O(n²) approach.

Using Counter() Method

The Counter class from the collections module can also be used to remove duplicates. It creates a dictionary with list elements as keys and their counts as values.

Example:

from collections import Counter

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = list(Counter(my_list))
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

Counter(my_list) counts the items, but when converting to a list, only keys are taken, which are unique.
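The real payoff of Counter is that the counts are still there when you need them. A short sketch:

```python
from collections import Counter

my_list = [1, 2, 2, 3, 4, 4, 5]
counts = Counter(my_list)
print(list(counts))           # unique items: [1, 2, 3, 4, 5]
print(counts.most_common(2))  # two most frequent: [(2, 2), (4, 2)]
print(counts[4])              # how many times 4 appeared: 2
```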

Using Numpy unique Method

If you're working with numerical data and have NumPy installed, its unique function is a very efficient way to remove duplicates.

Example:

import numpy as np

my_list = [1, 2, 2, 3, 4, 4, 5]
unique_items = np.unique(my_list).tolist()
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

np.unique(my_list) finds the unique elements of the list, and tolist() converts the array back to a list.
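One caveat the already-sorted sample input hides: np.unique returns the unique values in sorted order, not in order of first appearance. A quick sketch with unsorted data:

```python
import numpy as np

data = [3, 1, 3, 2, 1]
# np.unique sorts the result; first-seen order (3, 1, 2) is lost.
print(np.unique(data).tolist())  # [1, 2, 3]
```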

Using a Pandas DataFrame

For those who work with data analysis, Pandas offers a convenient way to handle duplicates.

Example:

import pandas as pd

my_list = [1, 2, 2, 3, 4, 4, 5]
df = pd.DataFrame(my_list, columns=['Numbers'])
unique_items = df['Numbers'].drop_duplicates().tolist()
print(unique_items)

Output:

[1, 2, 3, 4, 5]


Explanation:

We create a DataFrame from the list, then use drop_duplicates() to remove duplicates and convert the series back to a list.
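For a flat list, building a full DataFrame is more machinery than strictly needed; pd.unique() on a Series gets there directly and, unlike np.unique, preserves first-seen order. A lighter sketch:

```python
import pandas as pd

my_list = [1, 2, 2, 3, 4, 4, 5]
# pd.unique keeps the order of first appearance and skips the DataFrame.
unique_items = pd.unique(pd.Series(my_list)).tolist()
print(unique_items)  # [1, 2, 3, 4, 5]
```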

When to Use Each Method

Method                             | Use Case
set()                              | When order doesn't matter and you need speed.
List Comprehension                 | When you need to preserve order and have additional conditions.
collections.OrderedDict.fromkeys() | When order matters and you're working with Python versions < 3.7.
in, not in operators               | For simplicity and clarity in small lists.
list.index() method                | When you need to check the index of items during duplicate removal.
Counter()                          | When you also need the count of items.
NumPy unique                       | For numerical data and performance in scientific computing.
Pandas DataFrame                   | In data analysis tasks where you're likely already using Pandas.

Frequently Asked Questions

What's the fastest way to remove duplicates in Python?

The set() method is typically the fastest for removing duplicates, but it doesn't preserve the order of elements.
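To verify this on your own machine, here is a rough timeit sketch (absolute numbers will vary with hardware and Python version; only the relative gap is meaningful):

```python
import timeit

data = list(range(1_000)) * 2  # 2,000 items, 1,000 unique

def loop_dedupe(items):
    result = []
    for item in items:
        if item not in result:  # O(n) scan of result on every item
            result.append(item)
    return result

# Per-call times; set() typically wins by a wide margin on large inputs.
t_set = timeit.timeit(lambda: list(set(data)), number=1000) / 1000
t_loop = timeit.timeit(lambda: loop_dedupe(data), number=10) / 10
print(f"set():       {t_set:.6f}s per call")
print(f"manual loop: {t_loop:.6f}s per call")
```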

How do you remove duplicates in Python without changing order?

Use dict.fromkeys(), which preserves insertion order on Python 3.7 and later: list(dict.fromkeys(your_list)). On older versions, use collections.OrderedDict.fromkeys() instead.

Which data structure removes duplicates in Python?

The set data structure in Python automatically removes duplicates, ensuring each element is unique.

Are duplicates allowed in set Python?

No, duplicates are not allowed in a set in Python; each element must be unique.

Conclusion

Removing duplicates from a list in Python can be achieved through various methods, each with its own use cases and benefits. Whether you're looking for performance, order preservation, or working within a specific data analysis library, Python provides a solution.

Recommended Readings:

Python Operator Precedence

 
