
Introduction to NLP

NLP is a subdomain of AI that enables machines to read, understand and analyze human language in forms such as text and speech. The ultimate goal of NLP is to help computers understand language the way humans do. It is the driving force behind virtual assistants, speech recognition, sentiment analysis, automatic text summarization, machine translation and much more.

What are some of the applications of NLP?

1. Writing tools: Grammarly, Microsoft Word, Google Docs

2. Search engines: DuckDuckGo, Google

3. Voice assistants: Alexa, Siri

4. News feeds: Facebook, Google News

5. Translation systems: Google Translate

Why text preprocessing?

Computers are great at working with structured data like spreadsheets and database tables, but we humans usually communicate in words, not in tables, and computers cannot make sense of raw text directly. To solve this problem, NLP uses techniques that convert language into useful representations, such as numbers or other mathematically interpretable objects, which can then be fed into machine learning algorithms according to our requirements.

Machine learning needs data in numeric form, so we first have to clean the textual data. This process of preparing (or cleaning) text before encoding it is called text preprocessing, and it is the very first step in solving an NLP problem. Libraries such as spaCy and NLTK make these preprocessing tasks easier.
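The snippets below operate on a pandas DataFrame called dataset with a 'Review' column. As a minimal stand-in (the sample reviews are invented purely for illustration), you could start with something like:

import pandas as pd

# Hypothetical toy dataset; replace with your own reviews
dataset = pd.DataFrame({
    'Review': [
        "Loved this phone!!! Battery lasts 2 days. More at http://example.com/review",
        "Terrible experience... the screen cracked within a week :(",
        "Decent value for the price, would buy again."
    ]
})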

Steps involved in preprocessing:

Cleaning

1. Removing URLs


We import the re library and use a regular expression to strip URLs from the text:

import re

def clean_url(text):
    # remove anything that starts with http (or https) up to the next whitespace
    return re.sub(r'http\S+', '', text)

dataset['Clean_text'] = dataset['Review'].apply(clean_url)
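A quick sanity check on an invented sentence shows the URL being stripped (a stray double space remains, which later steps clean up):

print(clean_url("Great phone, see http://example.com/specs for details"))
# -> 'Great phone, see  for details'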

2. Removing punctuation and numbers. Punctuation is basically the set of symbols [!"#$%&'()*+,-./:;<=>?@[]^_`{|}~]:

def clean_punctuations(text):
    # keep only letters; punctuation and digits are replaced with spaces
    return re.sub('[^a-zA-Z]', ' ', text)

dataset['Clean_text'] = dataset['Clean_text'].apply(clean_punctuations)

3. Converting all text to lower case

def clean_lower(text):
    return str(text).lower()

dataset['Clean_text'] = dataset['Clean_text'].apply(clean_lower)

4. Removing stopwords

import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords

stop = set(stopwords.words('english'))

def clean_stopwords(text):
    # split into words first, otherwise we would be filtering individual characters
    return ' '.join([word for word in text.split() if word not in stop])

dataset['Clean_text'] = dataset['Clean_text'].apply(clean_stopwords)
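For example, common words such as 'this', 'is' and 'a' are dropped:

print(clean_stopwords("this is a great product"))
# -> 'great product'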

5. Tokenization

Tokenization is a method of splitting a string into smaller units called tokens. A token can be a word, a punctuation mark, a mathematical symbol, a number, etc.


import nltk
nltk.download('punkt')  # word_tokenize needs the punkt tokenizer models
from nltk.tokenize import word_tokenize

def clean_tokenization(text):
    return word_tokenize(text)

dataset['Clean_text'] = dataset['Clean_text'].apply(clean_tokenization)
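On a sample string it produces a list of word tokens:

print(clean_tokenization("battery lasts two days"))
# -> ['battery', 'lasts', 'two', 'days']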

6. Stemming and Lemmatization

  • Stemming algorithms work by cutting off the end or the beginning of a word, taking into account a list of common prefixes and suffixes found in inflected words. This indiscriminate cutting is successful on some occasions but not always, which is why stemming has some clear limitations.

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def Clean_stem(token):
    return [stemmer.stem(i) for i in token]

dataset['Clean_text'] = dataset['Clean_text'].apply(Clean_stem)
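A small example shows that stemming sometimes produces strings that are not real words:

print(Clean_stem(['studies', 'running', 'easily']))
# -> ['studi', 'run', 'easili']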

  • Lemmatization, on the other hand, takes the morphological analysis of the word into account. To do so, it needs detailed dictionaries that the algorithm can look through to link a word form back to its lemma. For example, a stemmer reduces 'studies' to 'studi', while a lemmatizer maps it to the valid word 'study'.

import nltk
nltk.download('wordnet')  # WordNet data is required for lemmatization
from nltk.stem import WordNetLemmatizer

lemma = WordNetLemmatizer()

def clean_lemmatization(token):
    # pos='v' lemmatizes each word as a verb, e.g. 'running' -> 'run'
    return [lemma.lemmatize(word=w, pos='v') for w in token]

dataset['Clean_text'] = dataset['Clean_text'].apply(clean_lemmatization)
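Applied to a few verbs, the lemmatizer returns proper dictionary forms:

print(clean_lemmatization(['running', 'was', 'gone']))
# -> ['run', 'be', 'go']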


7. Removing short words (length ≤ 2)

Even after the steps above, some noise is still present in the corpus, so we also remove words that are very short (two characters or fewer).


def Clean_length(token):
    return [i for i in token if len(i) > 2]

dataset['Clean_text'] = dataset['Clean_text'].apply(Clean_length)

8. Converting the token list back into a string


def convert_to_string(list1):
    return ' '.join(list1)

dataset['Clean_text'] = dataset['Clean_text'].apply(convert_to_string)

Now we are all set to vectorize our text.

Vectorizing

  1. CountVectorizer: It converts a collection of text documents into a matrix of token counts, i.e. how many times each token occurs in each document. This implementation produces a sparse representation of the counts.


from sklearn.feature_extraction.text import CountVectorizer

# max_df=0.9 drops tokens that appear in more than 90% of the documents,
# min_df=10 drops tokens that appear in fewer than 10 documents
vectorizer = CountVectorizer(max_df=0.9, min_df=10)

X = vectorizer.fit_transform(dataset['Clean_text'])
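After fitting, the learned vocabulary and the document-term matrix can be inspected (get_feature_names_out is the method name in recent scikit-learn versions; older ones use get_feature_names):

print(vectorizer.get_feature_names_out()[:10])  # first few tokens in the vocabulary
print(X.shape)         # (number of documents, vocabulary size)
print(X.toarray()[0])  # raw counts for the first document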

  2. TF-IDF: Here we transform the count matrix into a normalized term-frequency or term-frequency times inverse-document-frequency representation (this is what TfidfTransformer and TfidfVectorizer do under the hood). The tf-idf score for a term t in a document d from a set of n documents is:

tf-idf(t, d) = tf(t, d) * idf(t), where idf(t) = ln((1 + n) / (1 + df(t))) + 1

Here tf(t, d) is the number of times t occurs in d, and df(t) is the number of documents that contain t (this is the smoothed formulation used by scikit-learn, with the resulting rows L2-normalized).

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(max_df=0.9, min_df=10)

X = tfidf.fit_transform(dataset['Clean_text'])

Note:

In CountVectorizer we only count the number of times a word appears in a document, which biases the representation in favour of the most frequent words and ends up ignoring rare words that could have helped us process our data more effectively.

To overcome this, we use TfidfVectorizer.

TfidfVectorizer considers the overall weight of a word across the documents. It helps us deal with the most frequent words by penalizing them: the raw word counts are weighted by a measure of how many documents the word appears in.
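To see the difference concretely, here is a small sketch on an invented toy corpus: the word 'phone' appears in every document, so TfidfVectorizer gives it a lower weight than rarer words, whereas CountVectorizer treats all counts alike.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["good phone", "bad phone", "phone camera excellent"]

cv = CountVectorizer()
counts = cv.fit_transform(docs)
print(cv.get_feature_names_out())  # vocabulary
print(counts.toarray())            # raw counts: 'phone' counts like any other word

tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(docs)
print(weights.toarray().round(2))  # 'phone' gets the lowest weight in each row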

That’s all folks, Have a nice day :)

Sumanta Muduli
Data Scientist at Flutura Decision Sciences & Analytics
