
How can I handle unknown values for label encoding in scikit-learn? The LabelEncoder simply raises an exception when it detects new labels.

What I want is one-hot encoding of categorical variables. However, scikit-learn's OneHotEncoder does not support strings for that, so I used a LabelEncoder on each column.

My problem is that unknown labels show up in the cross-validation step of my pipeline. The basic OneHotEncoder would have the option to ignore such cases. An a priori pandas.get_dummies / cat.codes is not sufficient, as the pipeline should work with real-life, fresh incoming data which might contain unknown labels as well.
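For illustration, a minimal repro of the exception (the labels here are made up):

    from sklearn.preprocessing import LabelEncoder

    le = LabelEncoder()
    le.fit(['New York', 'Paris'])
    le.transform(['Utila'])  # raises ValueError: y contains previously unseen labels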

Would it be possible to use a CountVectorizer for this purpose?

5 Comments
  • Do you have a sample illustration for such a purpose? Commented Oct 29, 2016 at 16:48
  • Can you catch the exception, log it (or whatever), then move on? Or just ignore them? Commented Oct 29, 2016 at 16:54
  • If a predictive model is deployed as an API, it is very likely to be confronted with previously unseen feature labels. How can I deal with that in sklearn? Would you suggest propagating the error to the API? Commented Oct 29, 2016 at 17:03
  • @GeorgHeiler, Have you tried looking into DictVectorizer, which does binary one-hot encoding of string features? You would need to input a list of dictionaries, however. So, select the subset where the categorical values are present and do something like df[cat_cols].to_dict(orient='records') to create a list of dicts which could then be fed to the DictVectorizer. These could also be included in a pipeline and used by scikit-learn estimators; see the sketch after these comments. Commented Oct 29, 2016 at 17:36
  • @NickilMaveli I experimented a little bit with it but did not yet get it to work. Commented Oct 29, 2016 at 18:07
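A minimal sketch of the DictVectorizer route suggested above; the frames and column names are made up for illustration:

    import pandas as pd
    from sklearn.feature_extraction import DictVectorizer

    train = pd.DataFrame({'city': ['Paris', 'New York'], 'letters': ['a', 'b']})
    test = pd.DataFrame({'city': ['Paris', 'Utila'], 'letters': ['a', 'c']})

    dv = DictVectorizer(sparse=False)
    X_train = dv.fit_transform(train.to_dict(orient='records'))
    # Feature/value pairs never seen during fit ('Utila', 'c') are silently
    # ignored, so transform does not raise on new labels
    X_test = dv.transform(test.to_dict(orient='records'))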

1 Answer


EDIT:

A more recent and simpler way of handling this problem with scikit-learn is to use the class sklearn.preprocessing.OneHotEncoder, which now supports string features directly:

    from sklearn.preprocessing import OneHotEncoder

    enc = OneHotEncoder(handle_unknown='ignore')
    enc.fit(train)
    enc.transform(train).toarray()
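With handle_unknown='ignore', a category never seen during fit is encoded as an all-zero row instead of raising. A quick sketch with made-up data:

    import pandas as pd
    from sklearn.preprocessing import OneHotEncoder

    train = pd.DataFrame({'city': ['New York', 'Paris']})
    test = pd.DataFrame({'city': ['Paris', 'Utila']})  # 'Utila' is unseen

    enc = OneHotEncoder(handle_unknown='ignore')
    enc.fit(train)
    enc.transform(test).toarray()
    # array([[0., 1.],    # Paris
    #        [0., 0.]])   # Utila -> all zeros instead of an error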

Old answer:

There are several answers that mention pandas.get_dummies as a method for this, but I feel the LabelEncoder approach is cleaner for implementing a model. Other similar answers mention using DictVectorizer for this, but again, converting the entire DataFrame to a dict is probably not a great idea.

Let's assume the following problematic columns:

    from sklearn import preprocessing
    import numpy as np
    import pandas as pd

    train = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Paris'],
             'letters': ['a', 'b', 'c', 'd', 'a', 'b']}
    train = pd.DataFrame(train)

    test = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Utila'],
            'letters': ['a', 'b', 'c', 'a', 'b', 'b']}
    test = pd.DataFrame(test)

Utila is a rarer city: it isn't present in the training data but shows up in the test set, which we can treat as new data at inference time.

The trick is converting any such value to "other" and including "other" in the LabelEncoder's classes. Then we can reuse it in production.

    import bisect

    c = 'city'
    le = preprocessing.LabelEncoder()
    le.fit(train[c])

    # Map any label the encoder has never seen to the placeholder 'other'
    test[c] = test[c].map(lambda s: 'other' if s not in le.classes_ else s)

    # Insert 'other' into the sorted class list; keep classes_ a numpy array
    # so that inverse_transform keeps working
    le_classes = le.classes_.tolist()
    bisect.insort_left(le_classes, 'other')
    le.classes_ = np.array(le_classes)

    # Transform only after patching the classes, so train and test share the
    # same mapping
    train[c] = le.transform(train[c])
    test[c] = le.transform(test[c])
    test

       city letters
    0     0       a
    1     2       b
    2     1       c
    3     0       a
    4     3       b
    5     4       b

To apply it to new data, all we need to do is save an le object for each column, which can easily be done with pickle.
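A minimal sketch of that persistence step; the filename encoders.pkl and the per-column dict are just illustrative:

    import pickle

    encoders = {'city': le}  # one fitted LabelEncoder per categorical column

    with open('encoders.pkl', 'wb') as f:
        pickle.dump(encoders, f)

    # Later, in production:
    with open('encoders.pkl', 'rb') as f:
        encoders = pickle.load(f)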

This answer is based on this question, which wasn't totally clear to me, so I added this example.


9 Comments

You mean the difference is that your solution requires only one column in memory whereas my solution needs all columns in memory?
Would you suggest other or null? I think null is good as well, because the preprocessing code will handle it if called before the label encoder.
If there's a "larger" value than other inside le.classes_, wouldn't bisect.insort_left(le_classes, 'other') insert other before that element? In that case, the codes for the elements after other would shift, compromising the integrity of the mapping (see the sketch after these comments).
Watch out not to do le.fit_transform(train[c]) before modifying the LabelEncoder. Otherwise the mapping of the LabelEncoder is not the same on train and test. Instead do le.fit(train[c]) and then, after modifying the LabelEncoder, do le.transform(train[c]).
Change le.classes_ = le_classes to le.classes_ = np.array(le_classes) in the old answer to preserve the usefulness of preprocessing.LabelEncoder().inverse_transform()
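A small illustration of the ordering concern raised above, with made-up class names: inserting 'other' shifts the code of every class that sorts after it, which is why the classes must be patched before anything is transformed.

    import bisect

    classes = ['apple', 'zebra']          # 'zebra' sorts after 'other'
    bisect.insort_left(classes, 'other')
    print(classes)                        # ['apple', 'other', 'zebra']
    # 'zebra' moved from index 1 to index 2, so codes assigned before the
    # insertion would no longer line up with the patched class list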
