Why is the following code still using "ascii" to decode the string? Didn't I tell Python to use "utf-8"? Also, why didn't the 'ignore' argument suppress the error?
    print data.encode('utf-8', 'ignore')

    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 12355:
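For reference, here is a minimal Python 2 sketch that reproduces the same error for me (the byte values here are made up; my real `data` comes from elsewhere):

    # -*- coding: utf-8 -*-
    # Python 2 repro with hypothetical data
    data = 'caf\xc3\xa9'                  # a plain byte string (str) containing non-ASCII bytes
    print data.encode('utf-8', 'ignore')  # raises UnicodeDecodeError: 'ascii' codec can't decode ...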