Feature extraction

For a general introduction to feature extraction from textual documents, see the scikit-learn documentation.

TF-IDF schemes

SMART TF-IDF schemes

FreeDiscovery extends sklearn.feature_extraction.text.TfidfTransformer with a larger number of TF-IDF weighting and normalization schemes in SmartTfidfTransformer, following the SMART Information Retrieval System notation.

The different options are described in more detail in the table below; a short usage sketch follows the table.

Term frequency:
    n (natural):      tf_{t,d}
    l (logarithm):    1 + log(tf_{t,d})
    a (augmented):    0.5 + 0.5 × tf_{t,d} / max_t(tf_{t,d})
    b (boolean):      1 if tf_{t,d} > 0, 0 otherwise
    L (log average):  (1 + log(tf_{t,d})) / (1 + log(avg_{t∈d}(tf_{t,d})))

Document frequency:
    n (no):                 1
    t (idf):                log(N / df_t)
    s (smoothed idf):       log((N + 1) / (df_t + 1))
    p (prob idf):           log((N − df_t) / df_t)
    d (smoothed prob idf):  log((N + 1 − df_t) / (df_t + 1))

Normalization:
    n (none):    1
    c (cosine):  √(Σ_{t∈d} w_t²)
    l (length):  Σ_{t∈d} |w_t|
    u (unique):  Σ_{t∈d} bool(|w_t|)
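The weighting string picks one letter from each column of the table. The sketch below, assuming SmartTfidfTransformer is importable from freediscovery.feature_weighting and accepts the weighting parameter described above, computes an 'ltc' representation (log TF, idf, cosine normalization):

    # Minimal sketch: SMART 'ltc' weighting (log TF, idf, cosine normalization).
    # The import path is an assumption; the class and its weighting parameter
    # are the ones described in this section.
    from sklearn.feature_extraction.text import CountVectorizer
    from freediscovery.feature_weighting import SmartTfidfTransformer  # assumed path

    docs = ["the cat sat on the mat",
            "the dog chased the cat",
            "cats and dogs are common pets"]

    counts = CountVectorizer().fit_transform(docs)  # raw term counts tf_{t,d}
    tfidf = SmartTfidfTransformer(weighting='ltc')  # one letter per table column
    X = tfidf.fit_transform(counts)                 # sparse TF-IDF document-term matrix
    print(X.shape)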

Pivoted document length normalization

In addition to the standard TF-IDF normalizations above, pivoted normalization was proposed by Singhal et al. as a way to avoid over-penalising long documents. It can be enabled with the weighting='???p' parameter. For each document the normalization term V_d is replaced by,

(1 − α) × avg(V_d) + α × V_d

where α (norm_alpha) is a user-defined parameter with α ∈ [0, 1]. If norm_alpha=1, the pivot cancels out and this case corresponds to regular TF-IDF normalization.
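A similar sketch for pivoted normalization, assuming a four-letter weighting string such as 'ltcp' matches the '???p' pattern above and that norm_alpha is passed to the constructor (the import path is again an assumption):

    from freediscovery.feature_weighting import SmartTfidfTransformer  # assumed path

    # 'ltcp' = log TF, idf, cosine normalization, plus pivoted length normalization.
    # norm_alpha is the pivot slope α described above; 0.75 is an illustrative value.
    tfidf_pivoted = SmartTfidfTransformer(weighting='ltcp', norm_alpha=0.75)
    # With norm_alpha=1 the pivot cancels out and this reduces to plain 'ltc'.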

See the example on Optimizing TF-IDF schemes for a more practical illustration.

References