Unicode width data inconsistent/outdated

Brian Inglis Brian.Inglis@SystematicSw.ab.ca
Mon Aug 7 21:29:00 GMT 2017


On 2017-08-07 13:30, Thomas Wolff wrote:
> On 2017-08-07 21:07, Brian Inglis wrote:
>> Implementation considerations for handling the Unicode tables described in
>>     http://www.unicode.org/versions/Unicode10.0.0/ch05.pdf
>> and implemented in
>>     https://www.strchr.com/multi-stage_tables
>>
>> ICU icu4[cj] uses a folded trie of the properties, where the unique property
>> combinations are indexed, strings of those indices are generated for fixed size
>> groups of character codes, unique values of those strings are then indexed, and
>> those indices assigned to each character code group. The result is a multi-level
>> indexing operation that returns the required property combination for each
>> character.
>>
>> https://slidegur.com/doc/4172411/folded-trie--efficient-data-structure-for-all-of-unicode
>>
>>
>> The FOX Toolkit uses a similar approach, splitting the 21 bit character code
>> into 7 bit groups, with two higher levels of 7 bit indices, and more tweaks to
>> eliminate redundancy.
>>
>> ftp://ftp.fox-toolkit.org/pub/FOX_Unicode_Tables.pdf
>>
> Thanks for the interesting links, I'll check them out.
> But such multi-level tables don't really help without a given procedure how to
> update them (that's only available for the lowest level, not for the
> code-embedded levels).

Unicode estimates that the property tables can be reduced to 7-8 KB using these
techniques, in part by using minimal integer sizes for the indices and array
elements, e.g. char or short rather than pointers, if you can keep the indices
small.
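
To make that concrete, here is a rough two-level lookup sketch in C (my own
illustration, not code from any of the sources above); the 128-code-point group
size and the table contents are placeholders, since the real tables are
machine-generated from the UCD:

/* Minimal sketch of a two-stage property lookup, assuming 128-code-point
 * groups; the table contents here are placeholders, not real UCD data.   */
#include <stdint.h>

#define GROUP_SHIFT 7                        /* 128 code points per group */
#define GROUP_MASK  ((1u << GROUP_SHIFT) - 1)

/* stage1: group -> unique block number; stage2: block -> property index;
 * props: index -> packed property combination (width shown as example).  */
static const uint16_t stage1[0x110000 >> GROUP_SHIFT] = { 0 };
static const uint8_t  stage2[1][1 << GROUP_SHIFT]     = { { 0 } };
static const struct { uint8_t width, category; } props[] = { { 1, 0 } };

static inline uint8_t char_width(uint32_t cp)
{
    uint16_t block = stage1[cp >> GROUP_SHIFT];      /* which unique block */
    uint8_t  pidx  = stage2[block][cp & GROUP_MASK]; /* which property set */
    return props[pidx].width;
}

With 7-bit groups the top level alone is 0x110000 >> 7 = 8704 uint16_t entries;
splitting that level again, as FOX does with its 7/7/7 split, shrinks it
further.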

Creation scripts used by the PCRE and Python projects are linked from the
bottom of the second link above. Source and docs for those packages and for ICU
are available under Cygwin, and the FOX Toolkit is available in some distros
and by FTP.

> Also, as I've demonstrated, my more straightforward and more efficient approach
> will even use less total space than the multi-level approach if packed table
> entries are used.

Unicode recommends the double table index approach as a means of eliminating
the massive redundancy in character property entries and character groups,
using small integers instead of pointers. The tables can be optimized to meet
conformance levels and platform speed and size limits, at the cost of an annual
review of the properties and a rebuild. The amount of redundancy removed by
this approach is estimated in the FOX Toolkit doc and ranges across orders of
magnitude. Unfortunately, none of these docs or sources quotes sizes for any
Unicode release!
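
The build-time side of that redundancy removal is just deduplication of the
fixed-size blocks. A sketch of that step (names and sizes are illustrative, not
taken from ICU, PCRE, or FOX):

/* Offline deduplication of 128-entry blocks of per-code-point property
 * indices; stage1 records which unique block each group uses.  This is a
 * sketch with made-up names, not the actual generator of any project.    */
#include <stdint.h>
#include <string.h>

#define NCODEPOINTS 0x110000
#define GROUP_SIZE  128
#define NGROUPS     (NCODEPOINTS / GROUP_SIZE)

static uint8_t  raw[NCODEPOINTS];            /* property index per code point */
static uint8_t  blocks[NGROUPS][GROUP_SIZE]; /* unique blocks found so far    */
static uint16_t stage1[NGROUPS];             /* group -> unique block number  */

static size_t dedup_blocks(void)
{
    size_t nblocks = 0;
    for (size_t g = 0; g < NGROUPS; g++) {
        const uint8_t *grp = raw + g * GROUP_SIZE;
        size_t b;
        for (b = 0; b < nblocks; b++)        /* linear search is fine offline */
            if (memcmp(blocks[b], grp, GROUP_SIZE) == 0)
                break;
        if (b == nblocks)                    /* unseen block: keep a copy     */
            memcpy(blocks[nblocks++], grp, GROUP_SIZE);
        stage1[g] = (uint16_t)b;
    }
    /* final size ~= nblocks * GROUP_SIZE + NGROUPS * sizeof(uint16_t) bytes */
    return nblocks;
}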

My own first take on this was to use run-length-encoded bitstrings for each
binary property, similar to database bitmap indices. But the grouping of
property blocks in Unicode, and their recommendation, persuaded me that their
approach is likely backed by a good deal of supporting corporations' and
developers' R&D, and that it is similar to techniques used for decades in
database query handling of (lots of) small-value-set equivalence class columns
to reduce memory pressure while speeding up selections.
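
For comparison, this is roughly what I had in mind: each binary property as a
sorted list of runs, looked up by binary search. The ranges below are a few
wide ranges quoted from memory for illustration, not a complete or
authoritative list:

/* Sketch of the run-length idea: a binary property stored as sorted,
 * inclusive ranges where the property is set; lookup is a binary search.
 * The sample data is illustrative, not a complete East Asian Width table. */
#include <stddef.h>
#include <stdint.h>

struct run { uint32_t first, last; };

static const struct run wide_runs[] = {
    { 0x1100, 0x115F },                      /* Hangul Jamo leading consonants */
    { 0x4E00, 0x9FFF },                      /* CJK Unified Ideographs         */
    { 0xFF01, 0xFF60 },                      /* Fullwidth forms                */
};

static int has_property(uint32_t cp, const struct run *runs, size_t n)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {                        /* binary search over the runs */
        size_t mid = lo + (hi - lo) / 2;
        if (cp < runs[mid].first)
            hi = mid;
        else if (cp > runs[mid].last)
            lo = mid + 1;
        else
            return 1;
    }
    return 0;
}

That works for a sparse binary property, but one table per property shares
nothing across properties the way indexed property combinations do.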

-- 
Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada
