9 obscure Python libraries for data science

Go beyond pandas, scikit-learn, and matplotlib and learn some new tricks for doing data science in Python.

Python is an amazing language and one of the fastest-growing programming languages in the world. It has proved its usefulness time and again, both in developer roles and in data science positions across industries. Its entire ecosystem of libraries makes it an apt choice for users, beginners and advanced alike, all over the world. One of the reasons for its success and popularity is this set of robust libraries.

In this article, we will look at some of the Python libraries for data science tasks other than the commonly used ones like pandas, scikit-learn, and matplotlib. Although libraries like pandas and scikit-learn are the ones that come to mind for machine learning tasks, it's always good to learn about other Python offerings in this field.


Extracting data, especially from the web, is one of a data scientist's vital tasks. Wget is a free utility for non-interactive downloading of files from the web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Since it is non-interactive, it can work in the background even if the user isn't logged in. So the next time you want to download a website or all the images from a page, wget will be there to assist.


$ pip install wget


import wget
url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'

filename = wget.download(url)
100% [................................................] 3841532 / 3841532



For people who get frustrated when working with date-times in Python, Pendulum is here. It is a Python package that eases datetime manipulation, and it is a drop-in replacement for Python's native datetime class. Refer to the documentation for in-depth information.


$ pip install pendulum


import pendulum

dt_toronto = pendulum.datetime(2012, 1, 1, tz='America/Toronto')
dt_vancouver = pendulum.datetime(2012, 1, 1, tz='America/Vancouver')
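Pendulum resolves timezone names like 'America/Toronto' through the IANA database. To show what the two timezone-aware datetimes above actually buy you, here is a rough standard-library sketch of the same comparison; the fixed UTC offsets are hard-coded for illustration (both cities are on standard time on January 1):

```python
from datetime import datetime, timedelta, timezone

# Fixed winter offsets standing in for the IANA zones above
toronto = timezone(timedelta(hours=-5))    # EST, UTC-5
vancouver = timezone(timedelta(hours=-8))  # PST, UTC-8

dt_toronto = datetime(2012, 1, 1, tzinfo=toronto)
dt_vancouver = datetime(2012, 1, 1, tzinfo=vancouver)

# Midnight in Vancouver happens three hours after midnight in Toronto
print(dt_vancouver - dt_toronto)  # 3:00:00
```

Pendulum handles daylight-saving transitions for you, which is exactly what fixed offsets like these get wrong.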




Most classification algorithms work best when the number of samples in each class is almost the same (i.e., balanced). But real-life cases are full of imbalanced datasets, which can have a bearing upon the learning phase and the subsequent prediction of machine learning algorithms. Fortunately, the imbalanced-learn library was created to address this issue. It is compatible with scikit-learn and is part of scikit-learn-contrib projects. Try it the next time you encounter imbalanced datasets.


pip install -U imbalanced-learn

# or

conda install -c conda-forge imbalanced-learn


For usage and examples refer to the documentation.
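imbalanced-learn implements many resampling strategies. The simplest, random oversampling, can be sketched in pure Python to show the idea; this illustrates the technique itself, not the library's API:

```python
import random

random.seed(42)

# Toy imbalanced dataset: 8 samples of class 0, 2 samples of class 1
X = [[float(i)] for i in range(10)]
y = [0] * 8 + [1] * 2

# Random oversampling: duplicate randomly chosen minority samples
# until both classes have the same number of samples.
minority_idx = [i for i, label in enumerate(y) if label == 1]
while y.count(1) < y.count(0):
    i = random.choice(minority_idx)
    X.append(list(X[i]))
    y.append(1)

print(y.count(0), y.count(1))  # 8 8
```

The library's RandomOverSampler does this (and smarter variants like SMOTE synthesize new minority samples instead of duplicating existing ones).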


Cleaning text data during natural language processing (NLP) tasks often requires replacing keywords in or extracting keywords from sentences. Usually, such operations can be accomplished with regular expressions, but they can become cumbersome if the number of terms to be searched runs into the thousands.

Python's FlashText module, which is based upon the FlashText algorithm, provides an apt alternative for such situations. The best part of FlashText is that the runtime is the same irrespective of the number of search terms. You can read more about it in the documentation.


$ pip install flashtext


Extract keywords:

from flashtext import KeywordProcessor
keyword_processor = KeywordProcessor()

# keyword_processor.add_keyword(<unclean name>, <standardised name>)

keyword_processor.add_keyword('Big Apple', 'New York')
keyword_processor.add_keyword('Bay Area')
keywords_found = keyword_processor.extract_keywords('I love Big Apple and Bay Area.')

['New York', 'Bay Area']

Replace keywords:

keyword_processor.add_keyword('New Delhi', 'NCR region')

new_sentence = keyword_processor.replace_keywords('I love Big Apple and new delhi.')

'I love New York and NCR region.'

For more examples, refer to the usage section in the documentation.
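The constant runtime comes from building a trie of the keywords and walking the input text in a single pass, instead of scanning once per search term as a naive regex approach would. Here is a stripped-down sketch of that idea; it is not FlashText's actual implementation, and unlike FlashText it does not check word boundaries:

```python
def build_trie(keywords):
    """Build a character trie; '_end' marks a complete keyword."""
    root = {}
    for word, replacement in keywords.items():
        node = root
        for ch in word.lower():
            node = node.setdefault(ch, {})
        node['_end'] = replacement
    return root

def replace_keywords(text, trie):
    """Single left-to-right pass; the longest match wins at each position."""
    out, i, n = [], 0, len(text)
    while i < n:
        node, j, match = trie, i, None
        while j < n and text[j].lower() in node:
            node = node[text[j].lower()]
            j += 1
            if '_end' in node:
                match = (j, node['_end'])
        if match:
            end, replacement = match
            out.append(replacement)
            i = end
        else:
            out.append(text[i])
            i += 1
    return ''.join(out)

trie = build_trie({'big apple': 'New York', 'new delhi': 'NCR region'})
print(replace_keywords('I love Big Apple and new delhi.', trie))
# I love New York and NCR region.
```

However many keywords you add to the trie, the text is still traversed only once.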


The name sounds weird, but FuzzyWuzzy is a very helpful library when it comes to string matching. It can easily implement operations like string comparison ratios, token ratios, etc. It is also handy for matching records kept in different databases.


$ pip install fuzzywuzzy


from fuzzywuzzy import fuzz
from fuzzywuzzy import process

# Simple ratio
fuzz.ratio("this is a test", "this is a test!")
97

# Partial ratio
fuzz.partial_ratio("this is a test", "this is a test!")
100

More examples can be found in FuzzyWuzzy's GitHub repo.
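Under the hood, fuzz.ratio is built on the same sequence-matching idea as the standard library's difflib; a stdlib-only approximation of the simple ratio looks like this (FuzzyWuzzy adds preprocessing and faster backends, so treat this as a sketch rather than an exact reimplementation):

```python
from difflib import SequenceMatcher

def simple_ratio(a, b):
    # Similarity as a 0-100 score, in the spirit of fuzz.ratio
    return round(SequenceMatcher(None, a, b).ratio() * 100)

print(simple_ratio("this is a test", "this is a test!"))  # 97
```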


Time-series analysis is one of the most frequently encountered problems in machine learning. PyFlux is an open source library in Python that was explicitly built for working with time-series problems. The library has an excellent array of modern time-series models, including but not limited to ARIMA, GARCH, and VAR models. In short, PyFlux offers a probabilistic approach to time-series modeling. It's worth trying out.


pip install pyflux


Please refer to the documentation for usage and examples.
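To make the time-series modeling idea concrete, here is a pure-Python sketch of fitting the simplest autoregressive model, an AR(1) process, by least squares. PyFlux's models (ARIMA, GARCH, VAR, and others) are far richer and support full probabilistic inference; this only illustrates what "fitting a time-series model" means at its most basic:

```python
import random

random.seed(0)

# Simulate an AR(1) series: x_t = phi * x_{t-1} + noise
phi = 0.7
x = [0.0]
for _ in range(2000):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

# Least-squares estimate of phi: sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi_hat = num / den

print(round(phi_hat, 2))  # close to the true value of 0.7
```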


Communicating results is an essential aspect of data science, and visualizing results offers a significant advantage. IPyvolume is a Python library to visualize 3D volumes and glyphs (e.g., 3D scatter plots) in the Jupyter notebook with minimal configuration and effort. However, it is currently in the pre-1.0 stage. A good analogy would be something like this: IPyvolume's volshow is to 3D arrays what matplotlib's imshow is to 2D arrays. You can read more about it in the documentation.


Using pip:
$ pip install ipyvolume

Using conda:
$ conda install -c conda-forge ipyvolume



Ipyvolume animation

Volume rendering:

Ipyvolume volume rendering animation


Dash is a productive Python framework for building web applications. It is written on top of Flask, Plotly.js, and React.js and ties modern UI elements like drop-downs, sliders, and graphs to your analytical Python code without the need for JavaScript. Dash is highly suitable for building data visualization apps that can be rendered in the web browser. Consult the user guide for more details.


pip install dash==0.29.0  # The core dash backend
pip install dash-html-components==0.13.2  # HTML components
pip install dash-core-components==0.36.0  # Supercharged components
pip install dash-table==3.1.3  # Interactive DataTable component (new!)


The following example shows a highly interactive graph with drop-down capabilities. As the user selects a value in the drop-down, the application code dynamically pulls data from Google Finance into a pandas DataFrame.

Dash app example animation

Dash app that ties a drop-down to a D3.js Plotly graph. (Source)


Gym from OpenAI is a toolkit for developing and comparing reinforcement learning algorithms. It is compatible with any numerical computation library, such as TensorFlow or Theano. The Gym library is a collection of test problems, also called environments, that you can use to work out your reinforcement-learning algorithms. These environments have a shared interface, which allows you to write general algorithms.


pip install gym


The following example will run an instance of the environment CartPole-v0 for 1,000 timesteps, rendering the environment at each step.

Gym animation

You can read about other environments on the Gym website.
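The shared interface is what makes Gym environments interchangeable: every environment exposes reset() and step(action), and step returns an observation, a reward, a done flag, and an info dict. A toy environment written against that interface (a hypothetical example for illustration, not part of Gym) shows how general agent code stays the same regardless of the environment:

```python
import random

class CoinFlipEnv:
    """Toy environment with a Gym-like interface: guess a coin flip."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0  # trivial observation

    def step(self, action):
        # action: 0 or 1; reward of 1.0 for a correct guess
        outcome = self.rng.randint(0, 1)
        reward = 1.0 if action == outcome else 0.0
        self.steps += 1
        done = self.steps >= 10  # episode ends after 10 steps
        return 0, reward, done, {}

# A generic agent loop that works with any env exposing this interface
env = CoinFlipEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])  # random policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```

Swap CoinFlipEnv for gym.make('CartPole-v0') and the loop is essentially unchanged, which is the point of the shared interface.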


These are my picks for useful but little-known Python libraries for data science. If you know another one to add to this list, please mention it in the comments below.

This was originally published on the Analytics Vidhya Medium channel and is reprinted with permission.

Parul is a data science and deep learning enthusiast. She is deeply interested in innovation, education, and programming, and wants to solve real-life problems with machine learning so that it can have a direct impact on society. She is also passionate about women in technology and constantly encourages and mentors young girls to join STEM fields.



This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.