On Sat, 2006-01-07 at 12:26, Pablo Rodríguez wrote:
> The glyphs that aren't in the fontforge display are polytonic Greek ones
> (such as ῦ), that is mainly all accented characters on page 3 from
> both documents.
Unfortunately your Greek character displays as an underscore in my mail
reader. Accented characters can be built on the fly by positioning the
appropriate combining mark glyphs over base glyphs using GPOS features --
there is no guarantee that the font contains precomposed glyphs for them.
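For what it's worth, the Unicode canonical decomposition shows exactly which base and mark a renderer (or a GPOS mark-attachment lookup) would combine. A quick Python check, assuming the character in question is U+1FE6 (my mail reader ate it, so that codepoint is a guess):

```python
import unicodedata

ch = "\u1fe6"  # GREEK SMALL LETTER UPSILON WITH PERISPOMENI
for c in unicodedata.normalize("NFD", ch):
    # NFD splits the precomposed character into base + combining mark
    print(f"U+{ord(c):04X} {unicodedata.name(c)}")
# U+03C5 GREEK SMALL LETTER UPSILON
# U+0342 COMBINING GREEK PERISPOMENI
```

If the font only has the upsilon and the perispomeni, the accented form exists purely as a positioned pair, not as a glyph of its own.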
> I think that these are wrongly encoded since acroread 7 is not able to
> copy the text of these glyphs.
Then why are you blaming fontforge? If the glyphs are wrongly encoded in
the pdf, then ff will just read what's in the pdf.
> I hope it is clear now.
No, I think I'm still not understanding.
> I experience something really strange. If these PDF documents are
> uncompressed (using pdftk [http://www.pdftk.com] or Multivalent
> [http://multivalent.sf.net]), fontforge is able to find more fonts on
Ah. There's a very odd object cross reference table that ff could only
half handle. So it missed some objects. The attached patch should fix.
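For anyone curious what ff tripped over: a classic (uncompressed) cross reference section is just fixed-width offset records. A rough Python sketch of parsing the simple case -- `parse_xref` is a made-up name, and this deliberately ignores the compressed xref streams newer pdfs use:

```python
import re

def parse_xref(text):
    """Parse a classic PDF xref section into
    {object_number: (byte_offset, generation, in_use)}.
    A section is "xref", then one or more subsections, each a
    "first count" header followed by count 20-byte entries."""
    lines = text.strip().splitlines()
    assert lines[0].strip() == "xref"
    entries, i = {}, 1
    while i < len(lines):
        m = re.match(r"(\d+)\s+(\d+)$", lines[i].strip())
        if not m:  # hit "trailer" (or anything else): done
            break
        first, count = int(m.group(1)), int(m.group(2))
        for obj in range(first, first + count):
            i += 1
            off, gen, kind = lines[i].split()
            entries[obj] = (int(off), int(gen), kind == "n")
        i += 1
    return entries

sample = """xref
0 3
0000000000 65535 f 
0000000017 00000 n 
0000000081 00000 n 
trailer"""
print(parse_xref(sample))
```

The odd files had unusual subsection layouts, which is exactly the sort of thing a parser like the above has to loop over rather than assume a single "0 N" block.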
But remember: ff can't handle all the compression modes used in pdf. LZW,
for example, is protected by patents. The docs do say that ff can only
read "many" of the fonts out of pdf files, not all of them.
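To illustrate what pdftk/Multivalent's "uncompress" step does: the common FlateDecode stream body is plain zlib data, so inflating it is a single call. The content-stream bytes below are made up for the example:

```python
import zlib

# What sits between "stream" and "endstream" for a /FlateDecode
# object is ordinary zlib-compressed data.
raw = b"BT /F1 12 Tf (Hello) Tj ET"  # a toy page content stream
compressed = zlib.compress(raw)

# "Uncompressing" the pdf just rewrites each such stream in the clear,
# which is why ff suddenly finds objects it couldn't parse before.
assert zlib.decompress(compressed) == raw
```

LZW streams are a different algorithm entirely, which is the case ff avoids.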
> the document. And trying to open OOENOK+MgPolKingScriptM in the
> uncompressed version of mgkingscript.pdf leads to segmentation fault.
Ah. There's no encoding data stored in that font, and ff didn't like
that. The attached patch should fix.