9 obscure Python libraries for data science
Go beyond pandas, scikit-learn, and matplotlib and learn some new tricks for doing data science in Python.
Python is an amazing language and one of the fastest-growing programming languages in the world. It has proved its usefulness time and again, in both developer and data science roles across industries. Its rich ecosystem makes it an apt choice for users, beginners and advanced alike, all over the world, and much of that success and popularity comes from its set of robust libraries.
In this article, we will look at some of the Python libraries for data science tasks other than the commonly used ones like pandas, scikit-learn, and matplotlib. Although libraries like pandas and scikit-learn are the ones that come to mind for machine learning tasks, it's always good to learn about other Python offerings in this field.
Wget

Extracting data, especially from the web, is one of a data scientist's vital tasks. Wget is a free utility for downloading files from the web non-interactively. It supports the HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Because it is non-interactive, it can work in the background even if the user isn't logged in. So the next time you want to download a website or all the images from a page, wget will be there to assist.
$ pip install wget
import wget

url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
filename = wget.download(url)
100% [................................................] 3841532 / 3841532
Pendulum

For people who get frustrated when working with date-times in Python, Pendulum is here. It is a Python package that eases datetime manipulation and serves as a drop-in replacement for Python's native datetime class. Refer to the documentation for in-depth information.
$ pip install pendulum
import pendulum

dt_toronto = pendulum.datetime(2012, 1, 1, tz='America/Toronto')
dt_vancouver = pendulum.datetime(2012, 1, 1, tz='America/Vancouver')
imbalanced-learn

Most classification algorithms work best when the number of samples in each class is almost the same (i.e., balanced). But real-life cases are full of imbalanced datasets, which can affect both the learning phase and the subsequent predictions of machine learning algorithms. Fortunately, the imbalanced-learn library was created to address this issue. It is compatible with scikit-learn and is part of the scikit-learn-contrib projects. Try it the next time you encounter an imbalanced dataset.
pip install -U imbalanced-learn
conda install -c conda-forge imbalanced-learn
For usage and examples, refer to the documentation.
FlashText

Cleaning text data during natural language processing (NLP) tasks often requires replacing keywords in, or extracting keywords from, sentences. Usually, such operations can be accomplished with regular expressions, but they become cumbersome when the number of terms to search for runs into the thousands.
Python's FlashText module, which is based upon the FlashText algorithm, provides an apt alternative for such situations. The best part of FlashText is that the runtime is the same irrespective of the number of search terms. You can read more about it in the documentation.
$ pip install flashtext
from flashtext import KeywordProcessor
keyword_processor = KeywordProcessor()
# keyword_processor.add_keyword(<unclean name>, <standardised name>)
keyword_processor.add_keyword('Big Apple', 'New York')
keywords_found = keyword_processor.extract_keywords('I love Big Apple and Bay Area.')
['New York', 'Bay Area']
keyword_processor.add_keyword('New Delhi', 'NCR region')
new_sentence = keyword_processor.replace_keywords('I love Big Apple and new delhi.')
'I love New York and NCR region.'
For more examples, refer to the usage section in the documentation.
FuzzyWuzzy

The name sounds weird, but FuzzyWuzzy is a very helpful library when it comes to string matching. It can easily implement operations like string comparison ratios, token ratios, etc. It is also handy for matching records kept in different databases.
$ pip install fuzzywuzzy
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
# Simple ratio
fuzz.ratio("this is a test", "this is a test!")
97
# Partial ratio
fuzz.partial_ratio("this is a test", "this is a test!")
100
More examples can be found in FuzzyWuzzy's GitHub repo.
PyFlux

Time-series analysis is one of the most frequently encountered problems in machine learning. PyFlux is an open source Python library built explicitly for working with time-series problems. It has an excellent array of modern time-series models, including but not limited to ARIMA, GARCH, and VAR. In short, PyFlux offers a probabilistic approach to time-series modeling. It's worth trying out.
pip install pyflux
Please refer to the documentation for usage and examples.
IPyvolume

Communicating results is an essential aspect of data science, and visualizing results offers a significant advantage. IPyvolume is a Python library for visualizing 3D volumes and glyphs (e.g., 3D scatter plots) in the Jupyter notebook with minimal configuration and effort. However, it is currently in the pre-1.0 stage. A good analogy: IPyvolume's volshow is to 3D arrays what matplotlib's imshow is to 2D arrays. You can read more about it in the documentation.
$ pip install ipyvolume
$ conda install -c conda-forge ipyvolume
Dash

Dash is a productive Python framework for building web applications. Written on top of Flask, Plotly.js, and React.js, it ties modern UI elements like drop-downs, sliders, and graphs to your analytical Python code without the need for JavaScript.

pip install dash==0.29.0 # The core dash backend
pip install dash-html-components==0.13.2 # HTML components
pip install dash-core-components==0.36.0 # Supercharged components
pip install dash-table==3.1.3 # Interactive DataTable component (new!)
The following example shows a highly interactive graph with drop-down capabilities. As the user selects a value in the drop-down, the application code dynamically exports data from Google Finance into a pandas DataFrame.
Gym

Gym from OpenAI is a toolkit for developing and comparing reinforcement learning algorithms. It is compatible with any numerical computation library, such as TensorFlow or Theano. The Gym library is a collection of test problems, also called environments, that you can use to work out your reinforcement-learning algorithms. These environments have a shared interface, which allows you to write general algorithms.
pip install gym
The following example will run an instance of the environment CartPole-v0 for 1,000 timesteps, rendering the environment at each step.
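That example matches the canonical CartPole snippet from the Gym documentation; the sketch below reproduces it, with `env.render()` commented out so it also runs on headless machines (rendering requires a display):

```python
import gym

# Build the classic cart-pole balancing environment
env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    # env.render()  # draws the environment; requires a display
    action = env.action_space.sample()   # take a random action
    step_result = env.step(action)
    done = step_result[2]  # episode-over flag ('terminated' in newer Gym)
    if done:
        env.reset()        # start a fresh episode once the pole falls
env.close()
```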
You can read about other environments on the Gym website.
These are my picks for useful but little-known Python libraries for data science. If you know another one to add to this list, please mention it in the comments below.
This was originally published on the Analytics Vidhya Medium channel and is reprinted with permission.