9 obscure Python libraries for data science

Go beyond pandas, scikit-learn, and matplotlib and learn some new tricks for doing data science in Python.


Python is an amazing language. In fact, it's one of the fastest-growing programming languages in the world. It has proved its usefulness time and again, both in developer roles and in data science positions across industries. Its ecosystem of libraries makes it an apt choice for users, beginners and advanced alike, all over the world. One of the big reasons for its success and popularity is its set of robust libraries that make it so dynamic and fast.

In this article, we will look at some Python libraries for data science tasks beyond the commonly used ones like pandas, scikit-learn, and matplotlib. Although pandas and scikit-learn are the libraries that usually come to mind for machine learning tasks, it's always good to learn about other Python offerings in this field.

Wget

Extracting data, especially from the web, is one of a data scientist's vital tasks. Wget is a free utility for the non-interactive download of files from the web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Since it is non-interactive, it can work in the background even if the user isn't logged in. So the next time you want to download a website or all the images from a page, wget will be there to assist.

Installation

$ pip install wget

Example

import wget
url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'

filename = wget.download(url)
100% [................................................] 3841532 / 3841532

filename
'razorback.mp3'

Pendulum

For people who get frustrated when working with date-times in Python, Pendulum is here. It is a Python package to ease datetime manipulations. It is a drop-in replacement for Python's native datetime class. Refer to the documentation for in-depth information.

Installation

$ pip install pendulum

Example

import pendulum

dt_toronto = pendulum.datetime(2012, 1, 1, tz='America/Toronto')
dt_vancouver = pendulum.datetime(2012, 1, 1, tz='America/Vancouver')

print(dt_vancouver.diff(dt_toronto).in_hours())

3

Imbalanced-learn

Most classification algorithms work best when the number of samples in each class is almost the same (i.e., balanced). But real-life cases are full of imbalanced datasets, which can have a bearing upon the learning phase and the subsequent prediction of machine learning algorithms. Fortunately, the imbalanced-learn library was created to address this issue. It is compatible with scikit-learn and is part of scikit-learn-contrib projects. Try it the next time you encounter imbalanced datasets.

Installation

$ pip install -U imbalanced-learn

# or

$ conda install -c conda-forge imbalanced-learn

Example

For usage and examples refer to the documentation.

FlashText

Cleaning text data during natural language processing (NLP) tasks often requires replacing keywords in or extracting keywords from sentences. Usually, such operations can be accomplished with regular expressions, but they can become cumbersome if the number of terms to be searched runs into the thousands.

Python's FlashText module, which is based upon the FlashText algorithm, provides an apt alternative for such situations. The best part of FlashText is that the runtime is the same irrespective of the number of search terms. You can read more about it in the documentation.

Installation

$ pip install flashtext

Examples

Extract keywords:

from flashtext import KeywordProcessor
keyword_processor = KeywordProcessor()

# keyword_processor.add_keyword(<unclean name>, <standardised name>)

keyword_processor.add_keyword('Big Apple', 'New York')
keyword_processor.add_keyword('Bay Area')
keywords_found = keyword_processor.extract_keywords('I love Big Apple and Bay Area.')

keywords_found
['New York', 'Bay Area']

Replace keywords:

keyword_processor.add_keyword('New Delhi', 'NCR region')

new_sentence = keyword_processor.replace_keywords('I love Big Apple and new delhi.')

new_sentence
'I love New York and NCR region.'

For more examples, refer to the usage section in the documentation.

FuzzyWuzzy

The name sounds weird, but FuzzyWuzzy is a very helpful library when it comes to string matching. It can easily implement operations like string comparison ratios, token ratios, etc. It is also handy for matching records kept in different databases.

Installation

$ pip install fuzzywuzzy

Example

from fuzzywuzzy import fuzz
from fuzzywuzzy import process

# Simple Ratio

fuzz.ratio("this is a test", "this is a test!")
97

# Partial Ratio
fuzz.partial_ratio("this is a test", "this is a test!")
100

More examples can be found in FuzzyWuzzy's GitHub repo.

PyFlux

Time-series analysis is one of the most frequently encountered problems in machine learning. PyFlux is an open source library in Python that was explicitly built for working with time-series problems. The library has an excellent array of modern time-series models, including but not limited to ARIMA, GARCH, and VAR models. In short, PyFlux offers a probabilistic approach to time-series modeling. It's worth trying out.

Installation

$ pip install pyflux

Example

Please refer to the documentation for usage and examples.

IPyvolume

Communicating results is an essential aspect of data science, and visualizing results offers a significant advantage. IPyvolume is a Python library to visualize 3D volumes and glyphs (e.g., 3D scatter plots) in the Jupyter notebook with minimal configuration and effort. However, it is currently in the pre-1.0 stage. A good analogy would be something like this: IPyvolume's volshow is to 3D arrays what matplotlib's imshow is to 2D arrays. You can read more about it in the documentation.

Installation

Using pip
$ pip install ipyvolume

Conda/Anaconda
$ conda install -c conda-forge ipyvolume

Examples

The original article illustrated this section with two animated demos, an animated 3D scatter plot and a volume rendering; live versions of both can be found in the IPyvolume documentation.

Dash

Dash is a productive Python framework for building web applications. It is written on top of Flask, Plotly.js, and React.js and ties modern UI elements like drop-downs, sliders, and graphs to your analytical Python code without the need for JavaScript. Dash is highly suitable for building data visualization apps that can be rendered in the web browser. Consult the user guide for more details.

Installation

$ pip install dash==0.29.0  # The core dash backend
$ pip install dash-html-components==0.13.2  # HTML components
$ pip install dash-core-components==0.36.0  # Supercharged components
$ pip install dash-table==3.1.3  # Interactive DataTable component (new!)

Example

The following example shows a highly interactive graph with drop-down capabilities. As the user selects a value in the drop-down, the application code dynamically exports data from Google Finance into a pandas DataFrame.

Gym

Gym from OpenAI is a toolkit for developing and comparing reinforcement learning algorithms. It is compatible with any numerical computation library, such as TensorFlow or Theano. The Gym library is a collection of test problems, also called environments, that you can use to work out your reinforcement-learning algorithms. These environments have a shared interface, which allows you to write general algorithms.

Installation

$ pip install gym

Example

The following example will run an instance of the environment CartPole-v0 for 1,000 timesteps, rendering the environment at each step.
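
A sketch of that example, following the classic Gym API (newer releases of the library change the `step` and `reset` return values slightly):

```python
import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()                          # opens a window; requires a display
    env.step(env.action_space.sample())   # take a random action
env.close()
```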

You can read about other environments on the Gym website.

Conclusion

These are my picks for useful but little-known Python libraries for data science. If you know another one to add to this list, please mention it in the comments below.


This was originally published on the Analytics Vidhya Medium channel and is reprinted with permission.

About the author

Parul Pandey - Parul is a data science and deep learning enthusiast. She is deeply interested in innovation, education, and programming, and wants to solve real-life problems with machine learning so that it can have a direct impact on society. She is also deeply passionate about women in technology and constantly encourages and mentors young girls to join STEM fields. Academically, she is an engineering professional with a Bachelor of Technology (B.Tech.) focused on electrical engineering. She has...