Trouble using tokenizer.encode_plus: TypeError about 'pad_to_max_length'
09:10 14 Sep 2020

#jupyter notebook

I'm trying to study the BERT classifier in this colab: https://colab.research.google.com/drive/1pTuQhug6Dhl9XalKB0zUGf4FIdYFlpcX#scrollTo=2bBdb3pt8LuQ

In that colab, at the cell that begins with "Tokenize all of the sentence.....", I hit this error:

TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'

input_ids = []
attention_masks = []

for sent in sentences:
    encoded_dict = tokenizer.encode_plus(
                    sent,                      # Sentence to encode.
                    add_special_tokens = True, # Add '[CLS]' and '[SEP]'
                    max_length = 64,           # Pad & truncate all sentences.
                    pad_to_max_length = True,
                    return_attention_mask = True,   # Construct attn. masks.
                    return_tensors = 'pt',     # Return pytorch tensors.
               )
    input_ids.append(encoded_dict['input_ids'])
    attention_masks.append(encoded_dict['attention_mask'])
Tags: python, jupyter, transformer-model