I dislike UTF-16, because it imposes a hard upper limit on how many code points are representable.
UTF-16 can represent 1,112,064 code points: the 63,488 Basic Multilingual Plane values that are not surrogates, plus the 1,048,576 supplementary code points reachable through surrogate pairs.
UTF-8 as originally designed (before RFC 3629 restricted it to match UTF-16's range) could represent 2,147,483,648 code points using sequences up to six bytes long, and UTF-32 can represent the full 4,294,967,296 values of a 32-bit space.
But because a bunch of APIs use UTF-16 exclusively (including Java, JavaScript, and Win32), the remaining 4,293,855,232 potential code points of that 32-bit space can never be used without causing massive, nearly unfixable breakage in a lot of important software.
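The arithmetic behind these limits can be checked directly (a small sketch; the counts follow from UTF-16's surrogate mechanism and the bit budget of each encoding form):

```python
# UTF-16: the Basic Multilingual Plane minus the 2,048 surrogate
# code points, plus every high/low surrogate-pair combination.
bmp_scalars = 2**16 - 2048        # 63,488 non-surrogate BMP values
surrogate_pairs = 1024 * 1024     # 1,048,576 supplementary code points
utf16_limit = bmp_scalars + surrogate_pairs
assert utf16_limit == 1_112_064

# Original (pre-RFC 3629) UTF-8: a 6-byte sequence carries 1 payload
# bit in the lead byte plus 6 bits in each of 5 continuation bytes.
utf8_original_limit = 2 ** (1 + 5 * 6)
assert utf8_original_limit == 2_147_483_648

# UTF-32: one full 32-bit unit per code point.
utf32_limit = 2**32
assert utf32_limit == 4_294_967_296

# The code points UTF-16 locks out of the 32-bit space:
assert utf32_limit - utf16_limit == 4_293_855_232
```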