I'd like a __hash__ method added to Word, Synset and
Sense that hashes on the same values these objects are
compared by (in __cmp__). Equal objects would then hash
equally, which allows Python's set objects to be used to
remove duplicates from groups of Words, Synsets and
Senses, e.g.:
py> set([wordnet.N['object'], wordnet.N['object'],
...      wordnet.N['physical object']])
set([object(n.), physical object(n.)])
py> set([wordnet.N['object'][0], wordnet.N['object'][0],
...      wordnet.N['object'][1]])
set(['object' in {noun: object, physical object},
     'object' in {noun: aim, object, objective, target}])
py> set(synset for sense in wordnet.N['object']
...     for synset in wntools.hypernyms(sense))
set(['object' in {noun: object}, 'object' in {noun: object},
     'object' in {noun: aim, object, objective, target},
     {noun: group, grouping}, {noun: abstraction},
     {noun: social relation}, {noun: entity},
     {noun: communication},
     {noun: language, linguistic communication},
     {noun: ordering, order, ordination},
     {noun: string}, {noun: goal, end}, {noun: relation},
     {noun: constituent, grammatical constituent},
     {noun: content, cognitive content, mental object},
     {noun: syntagma, syntagm}, {noun: arrangement},
     {noun: string of words, word string, linguistic string},
     'object' in {noun: object, physical object},
     {noun: cognition, knowledge, noesis}, {noun: series},
     {noun: psychological feature}, {noun: sequence}])
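A minimal sketch of the idea, using a stand-in class rather
than the real wordnet Word (its actual comparison attributes
may differ; `pos` and `form` here are assumptions). The key
point is that __hash__ must be built from exactly the fields
the comparison uses, so that equal objects collapse in a set:

```python
class Word(object):
    """Hypothetical stand-in for the wordnet Word class."""
    def __init__(self, pos, form):
        self.pos = pos
        self.form = form

    def __eq__(self, other):
        # Compare by the same fields the wordnet __cmp__ would use
        # (in modern Python, __eq__ plays that role for sets/dicts).
        return (self.pos, self.form) == (other.pos, other.form)

    def __hash__(self):
        # Hash the very same tuple the comparison uses, so equal
        # Words hash equally and set() can deduplicate them.
        return hash((self.pos, self.form))

words = [Word('noun', 'object'), Word('noun', 'object'),
         Word('noun', 'physical object')]
print(len(set(words)))  # duplicates collapse: prints 2
```

Without a matching __hash__, instances hash by identity, so the
two equal 'object' Words above would remain distinct set members.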
Adds hash methods to Word, Sense and Synset