
The Python session below gives the wrong length for a string and the wrong character when indexing.
Does anybody here have any idea why?

>>> w = 'lòng'
>>> w
'lòng'
>>> print(w)
lòng
>>> len(w)
5
>>> for ch in w:
...     print(ch + "-")
...
l-
o-
̀-
n-
g-
Comments:

  • Length should be 4 and w[1] should be 'ò'. JavaScript and Visual Basic work, but not Python. Commented Oct 15, 2019 at 17:26
  • Yours is a Unicode string. Python's len() counts code points, and the second character here is made of 2 code points. Commented Oct 15, 2019 at 17:28
  • @rdas The question you identified as a duplicate is actually the opposite of the problem described here. Commented Oct 15, 2019 at 17:43
  • The duplicate target doesn't explain this particular case especially well, IMO, but take a look at the Unicode HOWTO section on comparing strings for an explanation of how Unicode may compose an accented character from two separate characters. Commented Oct 15, 2019 at 17:44
  • @CommandMe I tested len('lòng') in both Python 3.7 and 3.8 on macOS, and it is 4, not 5. The question cannot be reproduced. Commented Oct 16, 2019 at 16:07

3 Answers


The issue here is that in Unicode, some characters may be composed of combinations of other characters. In this case, 'lòng' includes a lowercase 'o' and a grave accent as separate characters.

>>> import unicodedata as ud
>>> w = 'lòng'
>>> for c in w:
...     print(ud.name(c))
...
LATIN SMALL LETTER L
LATIN SMALL LETTER O
COMBINING GRAVE ACCENT
LATIN SMALL LETTER N
LATIN SMALL LETTER G

This is a decomposed unicode string, because the accented 'o' is decomposed into two characters. The unicodedata module provides the normalize function to convert between decomposed and composed forms:

>>> for c in ud.normalize('NFC', w):
...     print(ud.name(c))
...
LATIN SMALL LETTER L
LATIN SMALL LETTER O WITH GRAVE
LATIN SMALL LETTER N
LATIN SMALL LETTER G

If you want to know whether a string is normalised to a particular form, but don't want to actually normalise it, and are using Python 3.8+, the more efficient unicodedata.is_normalized function can be used (credit to user Acumenus):

>>> ud.is_normalized('NFC', w)
False
>>> ud.is_normalized('NFD', w)
True

The Unicode HOWTO in the Python documentation includes a section on comparing strings which discusses this in more detail.
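As that HOWTO suggests, one robust way to compare strings that may differ only in composition is to normalise both sides first. A minimal sketch (the function name nfc_equal is illustrative, not from the HOWTO):

```python
import unicodedata

def nfc_equal(a, b):
    # Normalise both strings to NFC so composed and decomposed
    # spellings of the same text compare equal.
    return unicodedata.normalize('NFC', a) == unicodedata.normalize('NFC', b)

composed = 'l\u00f2ng'     # 'ò' as the single code point U+00F2
decomposed = 'lo\u0300ng'  # 'o' followed by U+0300 COMBINING GRAVE ACCENT

print(composed == decomposed)           # False: the raw code points differ
print(nfc_equal(composed, decomposed))  # True once both are normalised
```

Comparing via NFD on both sides works equally well; the point is to pick one form and apply it to both operands.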


Unicode allows a lot of flexibility in how a character is encoded. In this case, the 'ò' is actually made up of 2 Unicode code points: one for the base character 'o' and one for the accent mark. Unicode also has a single code point that represents both at once, and it doesn't care which you use. Python's unicodedata module can convert a string to a consistent representation.

>>> import unicodedata
>>> w = 'lòng'
>>> len(w)
5
>>> len(unicodedata.normalize('NFC', w))
4
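Normalising also fixes the indexing problem from the question. A short sketch, assuming the decomposed input (variable names here are illustrative):

```python
import unicodedata

w = 'lo\u0300ng'  # decomposed: 'o' followed by COMBINING GRAVE ACCENT
print(len(w), repr(w[1]))      # 5 'o' — the accent is a separate element

nfc = unicodedata.normalize('NFC', w)
print(len(nfc), repr(nfc[1]))  # 4 'ò' — now a single code point
```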

Comments:

  • As of Python 3.8, the is_normalized function also exists and is faster to check first.

The problem is that the len function and the in operator are broken w.r.t. Unicode.

As of now, there are two answers that claim normalisation is the solution. Unfortunately, that's not true in general:

>>> w = 'Ꙝ̛͋ᄀᄀᄀ각ᆨᆨ👩❤️💋👩'
>>> len(w)
19
>>> import unicodedata
>>> len(unicodedata.normalize('NFC', w))
19
>>> # 19 is still wrong

To handle this task correctly, you need to operate on graphemes (the third-party grapheme package is used below):

>>> from grapheme import graphemes
>>> w = 'Ꙝ̛͋ᄀᄀᄀ각ᆨᆨ👩❤️💋👩'
>>> len(list(graphemes(w)))
3
>>> # 3 is correct
>>> for g in graphemes(w):
...     print(g)
Ꙝ̛͋
ᄀᄀᄀ각ᆨᆨ
👩❤️💋👩

This also works for the w = 'lòng' input from the question: it is correctly segmented into 4 graphemes without any normalisation.
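If pulling in a third-party package is not an option, the standard library can approximate a grapheme count by skipping combining marks. This is only a rough sketch: it handles accents like 'lòng', but not the ZWJ emoji sequences or conjoining Hangul jamo shown above, which need full UAX #29 segmentation:

```python
import unicodedata

def approx_grapheme_count(s):
    # Count only code points with combining class 0 (base characters);
    # combining marks attach to the preceding base character.
    # NOTE: does not handle ZWJ sequences or Hangul jamo correctly.
    return sum(1 for ch in s if unicodedata.combining(ch) == 0)

w = 'lo\u0300ng'                  # decomposed 'lòng'
print(len(w))                     # 5 code points
print(approx_grapheme_count(w))   # 4 user-perceived characters
```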

Comments:

  • Yes, still broken in 3.8. You could have tested that yourself: 3.8 just updated some data files, which of course does not change how len works.
  • You could also have linked in your answer to the third-party package you are talking about.
  • I tested len('lòng') in both Python 3.7 and 3.8 on macOS, and it is 4, not 5. As for your string Ꙝ̛͋ᄀᄀᄀ각ᆨᆨ👩❤️💋👩, its length was 14, not 19.
  • Your lòng differs from the lòng all the other participants in this thread are using. The Stack Exchange software sabotaged my example input; here it is again in escaped notation so I can circumvent that bug: \N{U+0A65C}\N{U+0031B}\N{U+0034B}\N{U+00356}\N{U+00489}\N{U+01100}\N{U+01100}\N{U+01100}\N{U+0AC01}\N{U+011A8}\N{U+011A8}\N{U+1F469}\N{U+0200D}\N{U+02764}\N{U+0FE0F}\N{U+0200D}\N{U+1F48B}\N{U+0200D}\N{U+1F469}
  • I could reproduce the original issue only with Python 2.7, a version of Python that people shouldn't be using anymore.
