1149

I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error:

Traceback (most recent call last):
  File "SCRIPT LOCATION", line NUMBER, in <module>
    text = file.read()
  File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to <undefined>

After reading this Q&A, see How to determine the encoding of text if you need help figuring out the encoding of the file you are trying to open.


16 Answers

1957

The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely.

You specify the encoding when you open the file:

file = open(filename, encoding="utf8") 
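If you are not sure which encoding is right, one pragmatic first check is to try a few candidates and inspect the result. A minimal sketch, where the file path and the candidate list are just illustrations:

filename = "yourfile.txt"  # placeholder path

# Try a few candidate encodings until one decodes without error.
# Note: decoding cleanly does not prove the guess is correct; read the
# result and check that the text is actually legible.
for candidate in ("utf-8", "cp1252", "latin-1"):
    try:
        with open(filename, encoding=candidate) as f:
            text = f.read()
        print("decoded without errors as", candidate)
        break
    except UnicodeDecodeError:
        print(candidate, "failed")
# latin-1 never raises, so keep it last and treat it as a fallback.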

7 Comments

if you're using Python 2.7, and getting the same error, try the io module: io.open(filename,encoding="utf8")
+1 for specifying the encoding on read. p.s. is it supposed to be encoding="utf8" or is it encoding="utf-8" ?
@1vand1ng0: of course Latin-1 works; it'll work for any file regardless of what the actual encoding of the file is. That's because all 256 possible byte values in a file have a Latin-1 codepoint to map to, but that doesn't mean you get legible results! If you don't know the encoding, even opening the file in binary mode instead might be better than assuming Latin-1.
I get the OP error even though the encoding is already specified correctly as UTF-8 (as shown above) in open(). Any ideas?
@rob_7cc That's not necessary. 'utf8' is an alias for UTF-8. docs
157

If file = open(filename, encoding="utf-8") doesn't work, try
file = open(filename, errors="ignore"), if you want to remove unneeded characters. (docs)
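To see concretely what the two error handlers do, here is a small demonstration on a byte string (made up for the demo; 0x90 is the byte from the question):

data = b"\x90abc"
print(data.decode("utf-8", errors="ignore"))   # 'abc' -- the bad byte is silently dropped
print(data.decode("utf-8", errors="replace"))  # '\ufffdabc' -- replaced with U+FFFD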

3 Comments

Warning: This will result in data loss when unknown characters are encountered (which may be fine depending on your situation).
Using file = open(filename, errors="ignore") will ignore any errors rather than display them in the terminal. It doesn't solve the actual issue.
Good warnings to heed. This solution was helpful for an application where I was searching through a number of files and selecting only the ones that apply to my use case. i.e. I couldn't control the file types or codings in the pool of files and I was just picking out the ones I needed based on text within the file. I wasn't concerned with data integrity and was not modifying the files.
91

Alternatively, if you don't need to decode the file's contents, such as when uploading the file to a website, use:

open(filename, 'rb') 

where r = reading, b = binary
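For example, hashing a file never requires decoding it, so binary mode sidesteps the problem entirely. A minimal sketch (hashlib is just one illustration; the path is a placeholder):

import hashlib

filename = "yourfile.txt"  # placeholder path

with open(filename, 'rb') as f:
    data = f.read()  # bytes, not str -- no decoding takes place
print(len(data), 'bytes')
print(hashlib.sha256(data).hexdigest())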

3 Comments

Perhaps emphasize that the b will produce bytes instead of str data. Like you note, this is suitable if you don't need to process the bytes in any way.
The top two answers didn't work, but this one did. I was trying to read a dictionary of pandas dataframes and kept getting errors.
@Realhermit Please see stackoverflow.com/questions/436220. Every text file has a particular encoding, and you have to know what it is in order to use it properly. The common guesses won't always be correct.
63

TLDR: Try: file = open(filename, encoding='cp437')

Why? When one uses:

file = open(filename)
text = file.read()

Python assumes the file uses the same codepage as the current environment (cp1252 in the case of the opening post) and tries to decode it to its own default UTF-8. If the file contains characters with values not defined in this codepage (like 0x90), we get a UnicodeDecodeError. Sometimes we don't know the encoding of the file, sometimes the file's encoding may not be handled by Python (like, e.g., cp790), and sometimes the file contains mixed encodings.
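You can check which encoding open() assumes by default on your system; it comes from the locale:

import locale
print(locale.getpreferredencoding(False))  # what open() uses when no encoding is given, e.g. 'cp1252'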

If such characters are unneeded, one may decide to replace them with the Unicode replacement character, with:

file = open(filename, errors='replace') 

Another workaround is to use:

file = open(filename, errors='ignore') 

The offending characters are then simply dropped (the rest of the text is left intact), but other errors will be masked too.

A very good solution is to specify the encoding, yet not just any encoding (like cp1252), but one which maps every single-byte value (0..255) to a character (like cp437 or latin1):

file = open(filename, encoding='cp437') 

Codepage 437 is just an example. It is the original DOS encoding. All codes are mapped, so there are no errors while reading the file, no errors are masked out, the characters are preserved (not quite left intact but still distinguishable) and one can check their ord() values.
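As a sketch of that inspection step, note it uses latin1 rather than cp437, because latin1 maps each byte 0..255 to the code point with the same number, so ord() returns the original byte value directly (the path is a placeholder):

filename = "yourfile.txt"  # placeholder path

with open(filename, encoding='latin1') as f:
    text = f.read()

# Report non-ASCII characters together with their original byte values.
for i, ch in enumerate(text):
    if ord(ch) > 127:
        print(f'offset {i}: {ch!r} (byte 0x{ord(ch):02x})')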

Please note that this advice is just a quick workaround for a nasty problem. The proper solution is to use binary mode, although it is not as quick.

5 Comments

Probably you should emphasize even more that randomly guessing at the encoding is likely to produce garbage. You have to know the encoding of the data.
There are many encodings that "have all characters defined" (you really mean "map every single-byte value to a character"). CP437 is very specifically associated with the Windows/DOS ecosystem. In most cases, Latin-1 (ISO-8859-1) will be a better starting guess.
@tripleee - The solution is a quick workaround for a nasty error, allowing you to check what is going on. Sometimes there is a garbage character placed inside a big, perfectly encoded text. Using the encoding of that character would break the decoding of the rest of the text. What's more, the encoding may not be handled by Python (e.g. cp790). Still, today I would rather use binary mode and handle the decoding myself.
@Karl Knechtel - Yes, your phrase is better. I am going to edit my text.
Thanks for the update. However, the part about decoding cp1252 "to its own default UTF-8" is still weird. Python is decoding cp1252 into the internal string representation, which is not UTF-8 (or necessarily any standard Unicode representation, although Python strings are defined to be Unicode).
44

As an extension to @LennartRegebro's answer:

If you can't tell what encoding your file uses, the solution above does not work (it's not utf8), and you find yourself merely guessing, there are online tools that you can use to identify the encoding. They aren't perfect, but they usually work just fine. After you figure out the encoding, you should be able to use the solution above.
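If you would rather guess from within Python instead of an online tool, the third-party chardet package can make an educated guess; treat the result as a hint, not a guarantee (the path is a placeholder):

import chardet  # third-party: pip install chardet

filename = "yourfile.txt"  # placeholder path

with open(filename, 'rb') as f:
    raw = f.read(100000)  # a sample of the file is usually enough

print(chardet.detect(raw))  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, 'language': ''}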

EDIT: (Copied from comment)

Sublime Text, a quite popular text editor, has a command to display the encoding if it has been set:

1. Go to View -> Show Console (or Ctrl+`)

2. Type view.encoding() into the field at the bottom and hope for the best (I was unable to get anything but Undefined, but maybe you will have better luck...)

4 Comments

Some text editors will provide this information as well. I know that with vim you can get this via :set fileencoding (from this link)
Sublime Text, also -- open up the console and type view.encoding().
Alternatively, you can open your file with Notepad, choose 'Save As', and you shall see a drop-down with the encoding used.
Please see stackoverflow.com/questions/436220 for more details on the general task.
17

Stop wasting your time, just add the following encoding="cp437" and errors='ignore' to your code in both read and write:

open('filename.csv', encoding="cp437", errors='ignore')
open(file_name, 'w', newline='', encoding="cp437", errors='ignore')

Godspeed

3 Comments

Before you apply that, be sure that you want your 0x90 to be decoded to 'É'. Check b'\x90'.decode('cp437').
This is absolutely horrible advice. Code page 437 is a terrible guess unless your source data comes from an MS-DOS system from the 1990s, and ignoring errors is often the worst possible way to silence the warnings. It's like cutting the wires to the "engine hot" and "fuel low" lights in your car to get rid of those annoying distractions.
Thanks. It works for my case when moving code from python2 to python 3.11
9

The code below will decode the UTF-8-encoded symbols:

with open("./website.html", encoding="utf8") as file: contents = file.read() 

Comments

8
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
    return text

OR, for writing:

def write_files(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))

OR

document = Document()  # Document comes from the python-docx package; file_path is presumably a pathlib.Path
document.add_heading(file_path.name, 0)
file_content = file_path.read_text(encoding='UTF-8')
document.add_paragraph(file_content)

OR

def read_text_from_file(cale_fisier):
    text = cale_fisier.read_text(encoding='UTF-8')
    print("What I read: ", text)
    return text  # return the text that was read

def save_text_into_file(cale_fisier, text):
    f = open(cale_fisier, "w", encoding='utf-8')  # open the file
    print("What I wrote: ", text)
    f.write(text)  # write the content to the file
    f.close()

OR

def read_text_from_file(file_path):
    with open(file_path, encoding='utf8', errors='ignore') as f:
        text = f.read()
    return text  # return the text that was read

def write_to_file(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))  # write the content to the file

OR

def change_encoding(fname, from_encoding, to_encoding='utf-8') -> None:
    '''Read the file at path fname with its original encoding (from_encoding)
    and rewrite it with to_encoding.'''
    with open(fname, encoding=from_encoding) as f:
        text = f.read()
    with open(fname, 'w', encoding=to_encoding) as f:
        f.write(text)

Comments

5

Before you apply the suggested solution, you can check what Unicode character appeared in your file (and in the error log), in this case 0x90: https://unicodelookup.com/#0x90/1 (or directly at the Unicode Consortium site http://www.unicode.org/charts/ by searching for 0x0090)

and then consider removing it from the file.
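The error message also tells you exactly where the bad byte is (position 2907500 in the question), so you can inspect it and its surroundings in binary mode first (the path is a placeholder):

filename = "yourfile.txt"  # placeholder path

pos = 2907500  # taken from the UnicodeDecodeError message
with open(filename, 'rb') as f:
    data = f.read()
print(hex(data[pos]))            # the offending byte, e.g. 0x90
print(data[pos - 20:pos + 20])   # some context around it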

1 Comment

I have a web page at tripleee.github.io/8bit/#90 where you can look up the character's value in the various 8-bit encodings supported by Python. With enough data points, you can often infer a suitable encoding (though some of them are quite similar, and so establishing exactly which encoding the original writer used will often involve some guesswork, too).
4

For me, encoding with UTF-16 worked:

file = open('filename.csv', encoding="utf16") 
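UTF-16 files usually begin with a byte order mark, so before settling on this guess you can check for one. A small sketch using the standard library's codecs constants:

import codecs

with open('filename.csv', 'rb') as f:
    head = f.read(4)

if head.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
    print('looks like UTF-16')
elif head.startswith(codecs.BOM_UTF8):
    print('looks like UTF-8 with a BOM')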

1 Comment

Like many of the other answers on this page, randomly guessing which encoding the OP is actually dealing with is mostly a waste of time. The proper solution is to tell them how to figure out the correct encoding, not offer more guesses (the Python documentation contains a list of all of them; there are many, many more which are not suggested in any answer here yet, but which could be correct for any random visitor). UTF-16 is pesky in that the results will often look vaguely like valid Chinese or Korean text if you don't speak the language.
4

In newer versions of Python (starting with 3.7), you can add the interpreter option -Xutf8, which should fix your problem. If you use PyCharm, just go to Run > Edit Configurations (in the Configuration tab, change the value of the Interpreter options field to -Xutf8).

Or, equivalently, you can just set the environment variable PYTHONUTF8 to 1.
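For example (my_script.py is a placeholder), a sketch of both ways to enable UTF-8 mode, plus how to verify it is on:

# From a shell:
#   python -X utf8 my_script.py        # interpreter option
#   PYTHONUTF8=1 python my_script.py   # environment variable (POSIX syntax)
import sys
print(sys.flags.utf8_mode)  # 1 when UTF-8 mode is active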

1 Comment

This assumes that the source data is UTF-8, which is by no means a given.
3

For those working in Anaconda on Windows: I had the same problem, and Notepad++ helped me solve it.

Open the file in Notepad++. In the bottom right it will tell you the current file encoding. In the top menu, next to 'View', locate 'Encoding'. Under 'Encoding', go to 'Character sets' and patiently look for the encoding that you need. In my case, the encoding 'Windows-1252' was found under 'Western European'.

1 Comment

Only the viewing encoding is changed in this way. In order to effectively change the file's encoding, change preferences in Notepad++ and create a new document, as shown here: superuser.com/questions/1184299/….
3

If you are on Windows, the file may start with a UTF-8 BOM indicating it definitely is a UTF-8 file. As per https://bugs.python.org/issue44510, I used encoding="utf-8-sig", and the csv file was read successfully.
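A convenient property of utf-8-sig is that it strips the BOM when present and behaves like plain utf-8 when it is absent, so it is a safe choice for files that may or may not start with one. A minimal sketch (data.csv is a placeholder name):

import csv

with open('data.csv', encoding='utf-8-sig', newline='') as f:
    for row in csv.reader(f):
        print(row)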

Comments

1

For me, changing the MySQL character encoding to match my code helped to sort out the solution: photo = open('pic3.png', encoding=latin1)

2 Comments

Like many other random guesses, "latin-1" will remove the error, but will not guarantee that the file is decoded correctly. You have to know which encoding the file actually uses. Also notice that latin1 without quotes is a syntax error (unless you have a variable with that name, and it contains a string which represents a valid Python character encoding name).
In this particular example, the real problem is that a PNG file does not contain text at all. You should instead read the raw bytes (open('pic3.png', 'rb') where the b signifies binary mode).
1

This is an example of how I open and close a file with UTF-8, extracted from recent code:

def traducere_v1_txt(translator, file):
    data = []
    with open(f"{base_path}/{file}", "r", encoding='utf8', errors='ignore') as open_file:
        data = open_file.readlines()
    file_name = file.replace(".html", "")
    with open(f"Translated_Folder/{file_name}_{input_lang}.html", "w", encoding='utf8') as htmlfile:
        htmlfile.write(lxml1)
    # base_path, input_lang and lxml1 are defined elsewhere in the original code

Comments

1

This check helped me solve the issue:

import csv
import chardet  # third-party: pip install chardet

with open(input_file, 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))
encoding = result['encoding']
print(f"Detected encoding: {encoding}")

with open(input_file, 'r', newline='', encoding=encoding, errors='replace') as csvfile:
    reader = csv.reader(csvfile)
    # read the file...

Comments
