Data Analysis – Machine Learning
Pandas (data preparation)
Pandas helps you carry out your entire data analysis workflow in Python without having to switch to a more domain-specific language like R. It supports practical, real-world data analysis: reading and writing data, data alignment, reshaping, slicing, fancy indexing and subsetting, size mutability, merging and joining, hierarchical axis indexing, and time-series functionality.
See More: Pandas Documentation
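A minimal sketch of a typical Pandas workflow (the file name and column names here are hypothetical):

import pandas as pd

df = pd.read_csv("sales.csv")                        # reading data
df["date"] = pd.to_datetime(df["date"])              # time-series-friendly dtype
monthly = df.set_index("date").resample("M")["revenue"].sum()  # reshaping by month
big_days = df[df["revenue"] > 1000]                  # boolean subsetting
print(monthly.head())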
Scikit-learn (Machine Learning)
- Simple and efficient tools for classification, regression, clustering, dimensionality reduction, model selection, and preprocessing.
- Built on NumPy, SciPy, and Matplotlib.
See More: Scikit-learn Documentation
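A small sketch of the usual preprocessing-plus-model workflow, using the iris dataset bundled with scikit-learn (the choice of scaler and classifier is just illustrative):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())  # preprocessing + classification
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data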
Gensim (Topic Modelling)
Scalable statistical semantics: analyse plain-text documents for semantic structure and retrieve semantically similar documents.
See More: Gensim Documentation
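A tiny topic-modelling sketch (the three toy documents are made up; a real corpus would be much larger):

from gensim import corpora, models

docs = [["human", "computer", "interface"],
        ["graph", "trees", "minors"],
        ["human", "system", "computer"]]
dictionary = corpora.Dictionary(docs)               # map each token to an id
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
print(lda.print_topics())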
NLTK (Natural Language Processing)
Text-processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, plus wrappers for industrial-strength NLP libraries. NLTK supports working with corpora, categorising text, and analysing linguistic structure.
See More: NLTK Documentation
PyTables is a package for managing hierarchical datasets, designed to efficiently cope with large amounts of data. It is built on top of the HDF5 library and the NumPy package and features a fast, object-oriented interface that makes it extremely easy to interactively save and retrieve large amounts of data.
See More: PyTables Documentation
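A quick sketch of saving and retrieving an array with PyTables (the file and node names are hypothetical):

import numpy as np
import tables

h5file = tables.open_file("demo.h5", mode="w")  # an HDF5 file on disk
h5file.create_array(h5file.root, "readings", np.arange(1000), "toy sensor data")
print(h5file.root.readings[:10])  # read back just a slice
h5file.close()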
Deep Learning is a new area of Machine Learning research, introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.
See More: Deep Learning Documentation
Seaborn is a Python visualisation library based on Matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
See More: Seaborn Documentation
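For example, using the example dataset that ships with Seaborn (scatterplot requires a reasonably recent Seaborn version):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")  # built-in example dataset
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="day")
plt.show()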
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits.
See More: Matplotlib Documentation
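A minimal sketch producing both an on-screen figure and a hardcopy file:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("sine.png")  # publication-style hardcopy output
plt.show()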
Bokeh is a Python interactive visualisation library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of D3.js, and to extend this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easily create interactive plots, dashboards, and data applications.
See More: Bokeh Documentation
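A short sketch that writes an interactive Bokeh plot to a standalone HTML page (the file name and data are arbitrary):

from bokeh.plotting import figure, output_file, show

output_file("lines.html")  # standalone HTML targeting the browser
p = figure(title="Simple line", x_axis_label="x", y_axis_label="y")
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
show(p)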
SciPy (scientific computing)
SciPy is a Python library used for scientific and technical computing.
SciPy contains modules for optimisation, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering.
See More: SciPy Documentation
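Two of those modules in action, as a minimal sketch:

import numpy as np
from scipy import optimize, integrate

result = optimize.minimize_scalar(lambda x: (x - 2) ** 2)  # optimisation
print(result.x)  # ~2.0

value, error = integrate.quad(np.sin, 0, np.pi)  # numerical integration
print(value)  # ~2.0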
Big Data/Distributed Computing
hdfs3 is a lightweight Python wrapper around libhdfs3 for interacting with the Hadoop Distributed File System (HDFS).
See More: Hdfs3 Documentation
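A minimal sketch, assuming a NameNode is reachable at the placeholder host and port below (the file path is hypothetical):

from hdfs3 import HDFileSystem

hdfs = HDFileSystem(host="localhost", port=8020)  # placeholder NameNode address
print(hdfs.ls("/user"))                           # list a directory
data = hdfs.cat("/user/data.csv")                 # read a whole file as bytes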
Luigi is a Python package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualisation, handling failures, command line integration, and much more.
See More: Luigi Documentation
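A minimal single-task sketch; output() is how Luigi decides whether a job has already run, which is the basis of its dependency resolution:

import luigi

class MakeReport(luigi.Task):
    def output(self):
        # the target's existence marks this task as complete
        return luigi.LocalTarget("report.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("done\n")

if __name__ == "__main__":
    luigi.build([MakeReport()], local_scheduler=True)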
h5py lets you store huge amounts of numerical data and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorised and tagged however you want. h5py uses straightforward NumPy and Python metaphors, like dictionary and NumPy array syntax. For example, you can iterate over datasets in a file, or check the .shape or .dtype attributes of datasets.
See More: H5py Documentation
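For example (the file and dataset names are arbitrary):

import numpy as np
import h5py

with h5py.File("demo.h5", "w") as f:
    dset = f.create_dataset("measurements", data=np.arange(10**6))
    print(dset.shape, dset.dtype)  # NumPy-style attributes
    print(dset[:5])                # slice without loading the whole dataset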
PyMongo is a Python distribution containing tools for working with MongoDB, and is the recommended way to work with MongoDB from Python.
See More: PyMongo Documentation
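A minimal sketch, assuming a mongod instance is running locally (the database and collection names are made up):

from pymongo import MongoClient

client = MongoClient("localhost", 27017)  # assumes a local mongod
db = client.test_database
db.posts.insert_one({"author": "alice", "text": "hello"})
print(db.posts.find_one({"author": "alice"}))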
Dask is a flexible parallel computing library for analytic computing. Dask has two main components:
- Dynamic task scheduling optimised for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimised for interactive computational workloads.
- “Big Data” collections like parallel arrays, data frames, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of the dynamic task schedulers.
See More: Dask Documentation
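For example, a parallel array that mirrors the NumPy interface:

import dask.array as da

# a large array split into 1000x1000 chunks; nothing is computed yet
x = da.random.random((10000, 10000), chunks=(1000, 1000))
print(x.mean().compute())  # .compute() hands the task graph to the scheduler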
Dask.distributed is a lightweight library for distributed computing in Python. It extends both the concurrent.futures and dask APIs to moderate-sized clusters.
Distributed serves to complement the existing PyData analysis stack to meet the following needs: low latency, peer-to-peer data sharing, complex scheduling, pure Python, data locality, familiar APIs, and easy setup.
See More: Dask.distributed Documentation
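A minimal sketch; calling Client() with no arguments starts a local cluster, so no real cluster is assumed:

from dask.distributed import Client

client = Client()                        # local workers by default
future = client.submit(sum, range(100))  # schedule a function call on a worker
print(future.result())                   # 4950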
So these are some of the Python libraries for data science, data analysis, machine learning, security, and distributed computing.
If you think I missed something, let me know in the comments.
Prerequisites for getting Apache Spark (PySpark) fully working with Jupyter, i.e. how do you integrate the Jupyter notebook and PySpark?
Step 1: Download and install.
- Download and install Anaconda. (Anaconda comes with lots of packages, like Jupyter, IPython, and Python 3, so there is no need to install these packages explicitly.)
- Download and install Java, if it is not already installed (Spark uses the JVM to run).
To check whether Java is installed, run this command in a terminal: $ java -version or $ which java (which returns the path of the java executable).
- Download Spark, untar it, move it to your desired location, and preferably rename the folder to spark.
- Get some data (in CSV format) to check that Apache Spark is working properly.
Step 2: Set up environment variables.
- Copy the path from your preferred installation and then open nano or your favourite text editor. Note: when setting the environment variable, the path of the folder is given, not the executable file.
$ sudo nano /etc/environment
- PATH=/path/of/Anaconda/bin:$PATH # (Anaconda bin directory contains jupyter, ipython, python3 )
To see PATH: $ echo $PATH
Note again: executables are searched for and run in the order in which their directories appear in the output of echo $PATH.
- Reload the environment variable file by running this command:
$ source /etc/environment
Step 3: Configure the Apache Spark file spark-env.sh in the conf folder
- cd /path/of/your/spark/folder/spark/conf/
- cp spark-env.sh.template spark-env.sh
- nano spark-env.sh
- add this line:
JAVA_HOME=/path/of/java  # e.g. /usr/lib/jvm/java-8-oracle
Step 4: Configure the Apache Spark pyspark file in the bin folder
- go to line 85 and add the first of the two settings sketched below
- go to line 86 and add the second setting
- Save all
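The exact contents to add were not preserved above, and line numbers differ between Spark releases, so treat the following as an assumption: the usual way to make pyspark start inside Jupyter is to set these two variables (in the bin/pyspark file, or exported in your shell):

PYSPARK_DRIVER_PYTHON="jupyter"        # assumed: use Jupyter as the driver front end
PYSPARK_DRIVER_PYTHON_OPTS="notebook"  # assumed: open the notebook interface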
Step 5: To launch PySpark in Jupyter, which is a web-browser-based version of IPython, use the command shown below.
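Assuming the variables above are set and the spark bin directory is on your PATH, the launch command is simply:

$ pyspark  # with PYSPARK_DRIVER_PYTHON set to jupyter, this opens the notebook in your browser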
In our day-to-day life we generate a lot of data, like tweets, Facebook posts, comments, blog posts, and articles, which is generally in our natural language and falls in the category of semi-structured and unstructured data. When we process natural language data (the unstructured data, plain text) we call it Natural Language Processing.
The Natural Language Toolkit (NLTK) is a library for NLP which deals with natural language such as plain text, words, and sentences.
Building blocks of NLTK
- Tokenizers – separating the text into words and sentences
word tokenizer – separates by word
sentence tokenizer – separates by sentence
- Corpora – a body of text, such as a written speech or a news article.
- Lexicon – a dictionary; the meanings of words, which can differ depending on the context in which they are used.
Let’s understand how NLTK works. Consider a sample_text; NLTK comes to the rescue and separates that body of text (corpora) into sentences and words, as in the sketch below.
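A minimal sketch using NLTK's built-in tokenizers; the sample_text below is made up, since any plain English text will do:

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt")  # tokenizer models, needed once

sample_text = "Hello Mr. Smith, how are you today? The weather is great and Python is awesome."

print(sent_tokenize(sample_text))  # two sentences; note 'Mr.' is not treated as a sentence break
print(word_tokenize(sample_text))  # ['Hello', 'Mr.', 'Smith', ',', 'how', ...]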
IPython notebooks are the best way to showcase your analysis: with the help of IPython notebooks you can tell stories with your code by embedding different types of visualisations, images, and text. These notebooks are the simplest way to share your whole code history with your team-mates, just like a blog.
As the name suggests, is IPython only for the Python language?
No. Just make sure you install the specific kernel of the particular programming language you want; by default the IPython (Python) kernel is preinstalled.
Is it IPython Notebook or Jupyter Notebook?
The answer is both. The project was called IPython when it was developed, and it was later merged into a parent project named Jupyter, so that it would not be seen as a notebook for Python only. So in some cases you'll find people referring to Jupyter notebooks as IPython notebooks. For those who have just started using the notebook (or are about to), both are the same thing, so don't get confused.
Try without installing
An online demo of the Jupyter notebook (try the code in Python, Haskell, R, or Scala).
Installing the IPython Notebook
The simplest installation is with the Anaconda Python distribution, available for Windows, Mac, and Ubuntu.
Sharing IPython notebooks
Embedding inside a webpage
- First download the notebook in .ipynb format.
- Open the downloaded .ipynb file in Notepad (or any other text editor).
- Select all (Ctrl+A) the contents of the file.
- Go to https://gist.github.com/
- Enter the file name with extension & description.
- Paste the contents that you copied from the .ipynb file into the gist.
- Click create public gist.
- Copy the embed code, for example: <script src="https://gist.github.com/AnuragSinghChaudhary/6097a6a447f26d1256fc.js"></script>
- Paste this code inside any web page's HTML, and your Python notebook will be embedded in that page.
- You'll be able to see the embedded IPython notebook on the web page, as in the example.
- I'm using IPython notebooks for all my analysis practice.
- I have written this post in the context of data science.
- IPython notebooks can be used in a wide variety of contexts with other programming languages.