You can subscribe to this list here.
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2000 |     |     |     | 1   | 3   |     | 3   | 2   |     | 3   | 2   |     |
| 2001 | 11  | 8   |     | 1   |     |     | 8   | 6   | 2   | 21  | 5   |     |
| 2002 | 2   | 4   | 2   | 4   | 3   | 9   | 2   | 4   |     | 1   | 3   | 5   |
| 2003 | 2   | 4   |     |     | 2   | 16  | 1   | 2   |     |     |     | 2   |
| 2004 | 2   | 3   | 2   | 1   |     |     |     |     |     | 3   |     |     |
| 2005 | 13  | 2   |     |     | 5   | 4   |     |     |     |     |     | 6   |
| 2006 | 6   | 5   | 12  | 2   | 3   | 3   |     |     |     |     | 1   |     |
| 2007 |     |     | 5   | 6   |     |     |     |     |     |     |     |     |
| 2008 |     | 1   | 1   | 1   | 1   | 2   |     |     | 1   |     | 1   |     |
| 2009 |     |     |     |     |     | 3   |     |     | 2   |     |     |     |
| 2010 | 6   |     |     |     |     |     | 2   | 1   |     |     |     |     |
| 2011 |     | 2   | 2   |     |     |     |     |     |     |     |     |     |
| 2012 |     |     |     | 1   |     |     |     |     |     |     |     |     |
| 2013 |     |     |     | 1   |     | 1   |     |     |     |     | 4   |     |
| 2014 |     |     | 1   |     |     |     |     |     |     |     |     |     |
| 2015 |     |     |     |     |     |     | 1   |     |     |     |     |     |
| 2016 |     |     |     |     | 2   |     |     |     |     |     |     |     |
| 2020 | 1   |     |     |     |     |     |     |     |     |     |     |     |
From: Gleb S. <gle...@gm...> - 2020-01-02 12:31:52
|
Hi people! My name is Gleb Semenov, and I'm a newcomer here. Currently I'm profiling some hash algorithms for my project and I need a wrapping library as an "algorithm driver". The mhash library is a good choice, but it does not contain some algorithms I need; I'm experimenting with SipHash and other modern fast hashes. I can add the algorithms I need to the mhash library myself, but I must know the procedure, commit policy and so forth. Please brief me about the development process.

Thank you,
Gleb
From: Guan X. <gua...@gm...> - 2016-05-09 23:38:18
|
Your three programs (one in Perl, two in C) are functionally different.

On Tue, May 10, 2016 at 3:00 AM, Sebastian Heyn <seb...@ya...> wrote:
> Hi,
>
> I am playing at the moment a little with hash pre-processing. I started off
> using an obscenely slow bash script, and then went on using a Perl one-liner.
> Now I thought I could increase performance even more using a C
> implementation.
>
> [...]
>
> Now for my understanding: how can I improve the performance of my
> kiddie C program using mhash, or can I use some CPU-dependent compiler
> switches (SSE/MMX) to increase performance?
From: Sebastian H. <seb...@ya...> - 2016-05-09 19:01:00
|
Hi,

I am playing at the moment a little with hash pre-processing. I started off using an obscenely slow bash script, and then went on using a Perl one-liner. Now I thought I could increase performance even more using a C implementation.

My test setup looks like this:

perl:

    cat "$1" | perl -MDigest::MD5=md5_hex -lpe '$_ = md5_hex $_' >"$1".md5

c:

    cat "$1" | ./mhash_wrapper.md5 >"$1".md5

whereas I tested two types of C programs to do the job. (The C sources are attached below.)

The result was strange:

    Perl:                                   33 seconds
    C using fread (single-character input): 78 seconds
    C using fgets (line input):             128 seconds

The password list has 18584390 passwords.

Now for my understanding: how can I improve the performance of my kiddie C program using mhash, or can I use some CPU-dependent compiler switches (SSE/MMX) to increase performance? The mhash is not compiled statically; does this have an influence?

C number 1, using fgets:

    #include <mhash.h>   //mhash
    #include <stdio.h>
    #include <stdlib.h>  //exit
    #include <unistd.h>  //getopt
    #include <ctype.h>   //getopt

    int main(int argc, char **argv)
    {
        int i;
        int len;
        MHASH td;
        unsigned char buffer;
        unsigned char *hash;
        char line[4096];

        // td = mhash_init(MHASH_WHIRLPOOL);
        // td = mhash_init(MHASH_RIPEMD160);
        td = mhash_init(MHASH_MD5);

        if (td == MHASH_FAILED) exit(1);

        while (fgets(line, sizeof line, stdin) != NULL) { // read a line each time
            len = strlen(line);
            char *p = strrchr(line, '\n');

            if (p != NULL)
                mhash(td, line, len - 1); // strip the new line
            else
                mhash(td, line, len);

            hash = mhash_end(td);
            // for (i = 0; i < mhash_get_block_size(MHASH_WHIRLPOOL); i++) { printf("%.2x", hash[i]); }
            // for (i = 0; i < mhash_get_block_size(MHASH_RIPEMD160); i++) { printf("%.2x", hash[i]); }
            for (i = 0; i < mhash_get_block_size(MHASH_MD5); i++) { printf("%.2x", hash[i]); }

            printf("\n");
            td = mhash_init(MHASH_WHIRLPOOL);
            free(hash);
        }

        exit(0);
    }

and C number 2, using fread:

    #include <mhash.h>   //mhash
    #include <stdio.h>
    #include <stdlib.h>  //exit
    #include <unistd.h>  //getopt

    int main(void)
    {
        int i;
        MHASH td;
        unsigned char buffer;
        unsigned char *hash;

        td = mhash_init(MHASH_MD5);

        if (td == MHASH_FAILED) exit(1);

        while (fread(&buffer, 1, 1, stdin) == 1) { // read from stdin until EOF

            if ( buffer != '\n' ) { mhash(td, &buffer, 1); } // don't hash the line break
            if ( buffer == '\n' ) { // received line break
                hash = mhash_end(td);

                for (i = 0; i < mhash_get_block_size(MHASH_MD5); i++) { printf("%.2x", hash[i]); }
                printf("\n");
                td = mhash_init(MHASH_MD5);
            }
        }

        exit(0);
    }
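Both of the loops above also handle the trailing newline differently, which is one reason the outputs disagree. A standalone sketch of the fgets() approach with consistent newline handling is below; djb2 is only a stand-in hash for illustration (the real program would call mhash_init()/mhash()/mhash_end() per line, which this sketch deliberately avoids so it compiles without libmhash):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* djb2 stands in for MD5 purely for illustration; it is NOT part of
 * mhash. A real driver would hash each line with mhash instead. */
uint32_t djb2(const char *s, size_t n)
{
    uint32_t h = 5381;
    for (size_t i = 0; i < n; i++)
        h = h * 33u + (unsigned char)s[i];
    return h;
}

/* Read line by line with fgets(), strip the trailing newline exactly
 * once, and hash what remains. Reading whole lines from stdio's buffer
 * avoids the per-byte fread() call overhead of the second program. */
void hash_lines(FILE *in, FILE *out)
{
    char line[4096];

    while (fgets(line, sizeof line, in) != NULL) {
        size_t len = strlen(line);
        if (len > 0 && line[len - 1] == '\n')
            len--;                      /* consistent newline handling */
        fprintf(out, "%08x\n", djb2(line, len));
    }
}
```

The point of the sketch is the loop shape, not the hash: each iteration hashes one complete line from a buffer, and the final line is hashed even when it lacks a newline.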
From: Hanno B. <ha...@hb...> - 2015-07-07 11:00:58
|
Hi,

I discovered two use-after-free bugs in the test suite of mhash by testing it with AddressSanitizer (rebuilding with -fsanitize=address). These are in the files keygen_test.c and hmac_test.c; both issues are pretty similar. A pointer tmp gets allocated, later free'd, then zeroed, then used again and free'd again. Of course the first free is wrong if the pointer is used again. In hmac_test.c it's pretty obvious:

    mutils_free(tmp);

    /* Test No 2 */

    mutils_memset(tmp, 0, sizeof(tmp));

Not sure if anyone is still maintaining mhash, but I'll share the patch here in case anyone is interested.

cu,
--
Hanno Böck        http://hboeck.de/
mail/jabber: ha...@hb...
GPG: BBB51E42
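The fix pattern Hanno describes can be sketched in plain C (the function and sizes here are illustrative, not the actual keygen_test.c/hmac_test.c patch): clear and reuse the buffer while it is still owned, and free it exactly once at the end.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the corrected order of operations.
 * Returns 0 on success, -1 on allocation failure. */
int reuse_buffer_safely(size_t size)
{
    unsigned char *tmp = malloc(size);
    if (tmp == NULL)
        return -1;

    memset(tmp, 0xAA, size);   /* "Test No 1" writes into tmp */

    /* The buggy order was: free(tmp); then memset(tmp, 0, ...); then
     * reuse. Note also that memset(tmp, 0, sizeof(tmp)) would clear
     * only pointer-sized bytes, not the whole buffer. */
    memset(tmp, 0, size);      /* zero while still allocated */

    int ok = (tmp[0] == 0 && tmp[size - 1] == 0); /* "Test No 2" reads tmp */

    free(tmp);                 /* single free, at the end */
    return ok ? 0 : -1;
}
```

Rebuilt with -fsanitize=address, the original order is reported as a heap-use-after-free; this order is silent.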
From: thomas k. <tho...@gm...> - 2014-03-15 21:20:03
|
Hi, In building mhash 0.9.9.9, the keygen test segfaulted. Any other info I should send? |
From: Bryan C. G. <br...@ra...> - 2013-11-18 22:39:48
|
> -----Original Message-----
> From: Peter Pentchev [mailto:ro...@ri...]
> Sent: Friday, November 15, 2013 6:02 PM
> To: ΜΑΝΩΛΗΣ ΡΑΓΚΟΥΣΗΣ
> Cc: mha...@li...
> Subject: Re: [Mhash-dev] keygen_test fails with Seg Fault
>
> On Sat, Nov 16, 2013 at 12:03:13AM +0200, ΜΑΝΩΛΗΣ ΡΑΓΚΟΥΣΗΣ wrote:
>> Good afternoon everyone
>>
>> While trying to compile mhash-0.9.9.9, I get a seg fault error during
>> the keygen test, in the make check procedure. The error is "/bin/sh:
>> line 4: 8384 Segmentation fault (core dumped) ${dir}$tst". All the
>> other tests are a pass!
>>
>> I tried with the previous version and still the same. What exactly is
>> going wrong?
>
> What OS/platform are you running this on? Can you send the contents of
> the config.log file?

I'm seeing the same error on my 64-bit VM. I just did some quick debugging as a break from work and came up with this:

    (gdb) break 126
    Breakpoint 23 at 0x400cc9: file keygen_test.c, line 126.
    (gdb) break 127
    Breakpoint 24 at 0x400cd3: file keygen_test.c, line 127.
    (gdb) run
    The program being debugged has been started already.
    Start it from the beginning? (y or n) y
    Starting program: /home/bryan/Code/mhash/src/.libs/lt-keygen_test

    Breakpoint 23, main () at keygen_test.c:126
    126         tmp = mutils_asciify(key, keysize);
    (gdb) c
    Continuing.

    Program received signal SIGSEGV, Segmentation fault.
    0x00007ffff78a3cbd in ?? () from /lib/libc.so.6

I may have some more time to look into it tonight.

Bryan
From: Bryan C. G. <br...@ra...> - 2013-11-18 22:25:54
|
> -----Original Message-----
> From: Bryan C. Geraghty [mailto:br...@ra...]
> Sent: Monday, November 18, 2013 4:08 PM
> To: 'Peter Pentchev'; 'ΜΑΝΩΛΗΣ ΡΑΓΚΟΥΣΗΣ'
> Cc: 'mha...@li...'
> Subject: RE: [Mhash-dev] keygen_test fails with Seg Fault
>
> [...]
>
> I may have some more time to look into it tonight.
>
> Bryan

I was going to submit an issue and saw that the issue and a patch had already been submitted. This is now fixed.

Bryan
From: Peter P. <ro...@ri...> - 2013-11-16 00:17:18
|
On Sat, Nov 16, 2013 at 12:03:13AM +0200, ΜΑΝΩΛΗΣ ΡΑΓΚΟΥΣΗΣ wrote:
> Good afternoon everyone
>
> While trying to compile mhash-0.9.9.9, I get a seg fault error during the
> keygen test, in the make check procedure. The error is "/bin/sh: line 4:
> 8384 Segmentation fault (core dumped) ${dir}$tst". All the other tests
> are a pass!
>
> I tried with the previous version and still the same. What exactly is
> going wrong?

What OS/platform are you running this on? Can you send the contents of the config.log file?

G'luck,
Peter

--
Peter Pentchev  ro...@ri...  roam@FreeBSD.org  p.p...@st...
PGP key: http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115 C354 651E EFB0 2527 DF13
If I had finished this sentence,
From: ΜΑΝΩΛΗΣ Ρ. <man...@gm...> - 2013-11-15 22:03:21
|
    Making check in include
    make[1]: Entering directory `/home/manolis/Downloads/mhash-0.9.9/include'
    make[1]: Nothing to be done for `check'.
    make[1]: Leaving directory `/home/manolis/Downloads/mhash-0.9.9/include'
    Making check in lib
    make[1]: Entering directory `/home/manolis/Downloads/mhash-0.9.9/lib'
    make[1]: Nothing to be done for `check'.
    make[1]: Leaving directory `/home/manolis/Downloads/mhash-0.9.9/lib'
    Making check in doc
    make[1]: Entering directory `/home/manolis/Downloads/mhash-0.9.9/doc'
    make[1]: Nothing to be done for `check'.
    make[1]: Leaving directory `/home/manolis/Downloads/mhash-0.9.9/doc'
    Making check in src
    make[1]: Entering directory `/home/manolis/Downloads/mhash-0.9.9/src'
    make check-TESTS
    make[2]: Entering directory `/home/manolis/Downloads/mhash-0.9.9/src'
    testing CRC32 .
    testing CRC32B .
    testing MD5 .......
    testing SHA1 ...
    testing HAVAL256 ..
    testing HAVAL224 .
    testing HAVAL192 .
    testing HAVAL160 .
    testing HAVAL128 .
    testing RIPEMD128 .........
    testing RIPEMD160 .........
    testing RIPEMD256 .........
    testing RIPEMD320 .........
    testing TIGER ........
    testing TIGER160 .....
    testing TIGER128 ...
    testing GOST ..
    testing MD4 .......
    testing SHA256 ....
    testing SHA224 ...
    testing SHA512 ...
    testing SHA384 ...
    testing WHIRLPOOL .........
    testing SNEFRU128 ......
    testing SNEFRU256 ...
    testing MD2 .......
    everything seems to be fine :-)
    PASS: hash_test.sh
    MD5 HMAC-Test: Ok
    PASS: hmac_test
    /bin/sh: line 4:  8384 Segmentation fault      (core dumped) ${dir}$tst
    FAIL: keygen_test
    Testing save/restore for algorithm CRC32: Ok
    Testing save/restore for algorithm MD5: Ok
    Testing save/restore for algorithm SHA1: Ok
    Testing save/restore for algorithm HAVAL256: Ok
    Testing save/restore for algorithm RIPEMD160: Ok
    Testing save/restore for algorithm TIGER: Ok
    Testing save/restore for algorithm GOST: Ok
    Testing save/restore for algorithm CRC32B: Ok
    Testing save/restore for algorithm HAVAL224: Ok
    Testing save/restore for algorithm HAVAL192: Ok
    Testing save/restore for algorithm HAVAL160: Ok
    Testing save/restore for algorithm HAVAL128: Ok
    Testing save/restore for algorithm TIGER128: Ok
    Testing save/restore for algorithm TIGER160: Ok
    Testing save/restore for algorithm MD4: Ok
    Testing save/restore for algorithm SHA256: Ok
    Testing save/restore for algorithm ADLER32: Ok
    Testing save/restore for algorithm SHA224: Ok
    Testing save/restore for algorithm SHA512: Ok
    Testing save/restore for algorithm SHA384: Ok
    Testing save/restore for algorithm WHIRLPOOL: Ok
    Testing save/restore for algorithm RIPEMD128: Ok
    Testing save/restore for algorithm RIPEMD256: Ok
    Testing save/restore for algorithm RIPEMD320: Ok
    Testing save/restore for algorithm SNEFRU128: Ok
    Testing save/restore for algorithm SNEFRU256: Ok
    Testing save/restore for algorithm MD2: Ok
    PASS: rest_test
    Checking fragmentation capabilities of MD5: OK
    Checking fragmentation capabilities of SHA1: OK
    Checking fragmentation capabilities of HAVAL256: OK
    Checking fragmentation capabilities of RIPEMD160: OK
    Checking fragmentation capabilities of TIGER: OK
    Checking fragmentation capabilities of HAVAL224: OK
    Checking fragmentation capabilities of HAVAL192: OK
    Checking fragmentation capabilities of HAVAL160: OK
    Checking fragmentation capabilities of HAVAL128: OK
    Checking fragmentation capabilities of TIGER128: OK
    Checking fragmentation capabilities of TIGER160: OK
    Checking fragmentation capabilities of MD4: OK
    Checking fragmentation capabilities of SHA256: OK
    Checking fragmentation capabilities of SHA224: OK
    Checking fragmentation capabilities of SHA512: OK
    Checking fragmentation capabilities of SHA384: OK
    Checking fragmentation capabilities of WHIRLPOOL: OK
    Checking fragmentation capabilities of RIPEMD128: OK
    Checking fragmentation capabilities of RIPEMD256: OK
    Checking fragmentation capabilities of RIPEMD320: OK
    Checking fragmentation capabilities of SNEFRU128: OK
    Checking fragmentation capabilities of SNEFRU256: OK
    Checking fragmentation capabilities of MD2: OK
    PASS: frag_test
    ============================================
    1 of 5 tests failed
    Please report to mha...@so...
    ============================================
From: Dennis C. <dc...@bl...> - 2013-06-27 03:59:11
|
On Solaris 10, compiled with gcc 4.8.0, I see this during testing:

    Checking fragmentation capabilities of HAVAL256:
    /usr/local/bin/bash: line 4: 21260 Bus Error (core dumped) ${dir}$tst
    FAIL: frag_test
    ============================================
    1 of 5 tests failed
    Please report to mha...@so...
    ============================================

Looking into this I see:

    node002$ dbx ./src/.libs/frag_test time_1372300240-pid_21260-uid_16411-gid_1-fid_frag_test.core
    Reading frag_test
    core file header read successfully
    Reading ld.so.1
    Reading libmhash.so.2.0.1
    Reading libc.so.1
    Reading libgcc_s.so.1
    Reading libc_psr.so.1
    program terminated by signal BUS (invalid address alignment)
    0xffffffff7f005a28: mutils_word32nswap+0x0094:  ld  [%g1], %g1
    (dbx) where
    =>[1] mutils_word32nswap(0xffffffff7fffeb09, 0x20, 0x0, 0x0, 0x0, 0x0), at 0xffffffff7f005a28
      [2] havalTransform3(0x1001045d4, 0xffffffff7fffeb09, 0x100104680, 0x18c07f, 0xd2355034, 0x143367), at 0xffffffff7f029228
      [3] havalUpdate(0x1001045d0, 0xffffffff7fffeb08, 0x81, 0xfffffffffffffff9, 0x0, 0xffffffff7fffeb89), at 0xffffffff7f040c4c
      [4] mhash(0x100101dd0, 0xffffffff7fffeb08, 0x81, 0xffffffff7ef4fc81, 0x0, 0x140), at 0xffffffff7f004684
      [5] frag_test(0x3, 0x0, 0x182ba0, 0x7fffffff, 0x0, 0x0), at 0x100001160
      [6] main(0x1, 0xffffffff7fffefd8, 0xffffffff7fffefe8, 0x100101da0, 0x100000000, 0xffffffff7ea00200), at 0x100001568
    (dbx) quit

The problem is here in lib/stdfns.c:

    /*
     * Byte swap a series of 32-bit integers. If destructive is set to false, the
     * output will be placed in a freshly malloc()'ed buffer and the original
     * data will remain intact.
     */
    WIN32DLL_DEFINE
    mutils_word32 *
    mutils_word32nswap(mutils_word32 *x, mutils_word32 n, mutils_boolean destructive)
    {
        mutils_word32 loop;
        mutils_word32 *buffer;
        mutils_word32 *ptrIn;
        mutils_word32 *ptrOut;
        mutils_word32 count = n * 4;

        if (destructive == MUTILS_FALSE)
        {
            buffer = mutils_malloc(count);
            if (buffer == NULL)
            {
                return(NULL);
            }
        }
        else
        {
            buffer = x;
        }

        /*
         * This loop is totally useless for destructive processing of little-endian
         * data on a little-endian machine.
         */
        for (loop = 0, ptrIn = x, ptrOut = buffer; loop < n; loop++, ptrOut++, ptrIn++)
        {
            *ptrOut = mutils_lend32(*ptrIn);
        }

        return(buffer);
    }

My first thought is "why use a non-standard datatype where uint32_t works fine?", followed by "why not use calloc to zero-fill the buffer?" Anyways, I see this mailing list is essentially dead of traffic, so I may just look into this, fix it and post the patch all to myself.

Dennis
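The backtrace shows the real defect: the input pointer 0xffffffff7fffeb09 is odd, and `*ptrIn` compiles to an aligned `ld` on SPARC, hence the bus error. An alignment-safe variant loads each word byte-wise with memcpy(). This is a sketch only: the function name is hypothetical and it does an unconditional byte swap, whereas mutils_word32nswap converts to little-endian via mutils_lend32().

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Byte-swap n 32-bit words read from a possibly unaligned buffer,
 * returning a freshly malloc()'ed array (caller frees). memcpy() lets
 * the compiler emit byte loads where the target requires alignment. */
uint32_t *swap32n(const void *in, size_t n)
{
    const unsigned char *p = in;
    uint32_t *out = malloc(n * sizeof *out);

    if (out == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        uint32_t w;
        memcpy(&w, p + 4 * i, sizeof w);   /* safe at any alignment */
        out[i] = ((w & 0x000000FFu) << 24) | ((w & 0x0000FF00u) << 8) |
                 ((w & 0x00FF0000u) >> 8)  | ((w & 0xFF000000u) >> 24);
    }
    return out;
}
```

On x86 this compiles to the same single load as before; on strict-alignment targets like the UltraSPARC above it trades a faulting `ld` for four byte loads.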
From: Tib <ti...@ro...> - 2013-04-16 15:17:03
|
Hello, For your interest, I tried to develop a mhash lib ruby wrapper. (C native extension, no FFI). For those who like ruby, you can find it on Github: https://github.com/TibshoOT/ruby-mhash Feedback appreciated ! :) Regards, -- Tib - ti...@ro... RocknRoot - In Code We Trust |
From: Michael F. <Fo...@jh...> - 2012-04-06 15:09:14
|
Reporting this error as requested after running make check, for mhash ver 0.9.9.9 on RedHat Linux ver 6.2. Got this (partial) result for make check:

    MD5 HMAC-Test: Ok
    PASS: hmac_test
    /bin/sh: line 4:  3681 Segmentation fault      (core dumped) ${dir}$tst
    FAIL: keygen_test
    ...
    ============================================
    1 of 5 tests failed
    Please report to mha...@so...
    ============================================
    make[2]: *** [check-TESTS] Error 1
    make[2]: Leaving directory `/home/foxmi/mhash-0.9.9.9/src'
    make[1]: *** [check-am] Error 2
    make[1]: Leaving directory `/home/foxmi/mhash-0.9.9.9/src'
    make: *** [check-recursive] Error 1

Please let me know if you'd like more information and if you have a solution.

Thanks,
Mike
From: Reini U. <ru...@x-...> - 2011-03-02 23:00:17
|
Bryan Geraghty schrieb:
> I've been doing some work with the new Skein suite lately and wondered
> what it would take to add it to the mhash library. But when I looked at
> the SourceForge page, I noticed that the project is no longer under
> active development. Has it been moved elsewhere or is this just a
> technicality?

A "me too": I haven't looked lately at mdev but have worked with cmph. Creating perfect hash functions, even minimal perfect hash functions, in RAM is possible nowadays. See cmph, esp. BDZ: http://cmph.sourceforge.net/bdz.html

So you do not ask for a good hash algorithm anymore; you create the best hash function on the fly and use it at runtime. IMHO such a feature should be added to mhash.

--
Reini Urban
http://phpwiki.org/ http://murbreak.at/
From: Bryan G. <br...@ra...> - 2011-03-02 22:42:01
|
Hello folks, I've been doing some work with the new Skein suite lately and wondered what it would take to add it to the mhash library. But when I looked at the SourceForge page, I noticed that the project is no longer under active development. Has it been moved elsewhere or is this just a technicality? Thanks in advance, Bryan C. Geraghty |
From: Jonathan D. <im...@ya...> - 2011-02-18 19:50:19
|
Thanks. I could have sworn I'd had it so that it wasn't defined if one already existed, and had that check after all other includes had been dealt with. INT_MAX is, standards notwithstanding, not universal. Neither are a number of other standard elements, which is why we need automake and autoconf. I'll remove the define and have a better check for it.

--- On Wed, 2/16/11, Guan Xin <gua...@gm...> wrote:
> The INT_MAX macro defined in "mutils/mincludes.h" confuses preprocessor
> integer size with compiler integer size. These two sizes are often
> different. This #define is incorrect and unnecessary because INT_MAX is
> in the standard C header <limits.h> (or elsewhere if there is no
> <limits.h>).
From: Guan X. <gua...@gm...> - 2011-02-17 07:59:43
|
Hello, The INT_MAX macro defined in "mutils/mincludes.h" confuses preprocessor integer size with compiler integer size. These two sizes are often different. This #define is incorrect and unnecessary because INT_MAX is in the standard C header <limits.h> (or elsewhere if there is no <limits.h>). Regards, guanx |
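The portable fix Guan describes amounts to taking the limit from the standard header instead of hand-defining it. A trivial sketch (the accessor function is illustrative only, not mhash code):

```c
#include <limits.h>

/* Expose the compiler's own INT_MAX, taken from <limits.h>. A
 * hand-rolled "#define INT_MAX ..." risks disagreeing with the width
 * of int, since #if arithmetic is done in (u)intmax_t, not int. */
int int_max_value(void)
{
    return INT_MAX;
}
```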
From: Karn K. <tie...@gm...> - 2010-08-28 03:06:29
|
Hello,

While building Ur/Web (http://www.impredicative.com/ur/), which has libmhash as a dependency, against libmhash 0.9.9.9, the build failed because both libmhash and Ur/Web autotools were defining generic macros like PACKAGE_NAME. Debian has produced a patch for this (Debian bug #473204); applying the patch to the 0.9.9.9 release of libmhash made Ur/Web build successfully. The Debian bug report is here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=473204

Attached is the patch which was applied to the libmhash 0.9.9.9 release and used in the build. Just wanted to ping upstream to let you know about this situation. Thanks for giving libmhash to the world!
From: Jonathan D. <im...@ya...> - 2010-07-15 04:51:22
|
Documentation and the second-round SHA3 candidates are all forthcoming, along with the previously-submitted patches, just as soon as I get my new Linux box up and running. |
From: Hans de B. <jmd...@xm...> - 2010-07-14 20:34:44
|
Just wondering whether mhash is still alive, and how difficult it would be to add a multicore version of MD6 or Tiger to the library? Could someone point me to some docs, or to the origin of this mutils.h thing?

--
Hans
From: Keisial <ke...@gm...> - 2010-01-31 22:52:31
|
Jonathan Day wrote:
> Simply put, the last couple of years has been hell for me. I don't know
> specifically why none of the other developers have kept the momentum but
> I have no doubt there are reasonable reasons. If anyone feels they could
> keep a good pace, email me your sourceforge id and I'll add you as a
> release admin. Also, the cvs repo is a pain so there are many updates in
> releases absent from the repo.

I'm glad to see it's not completely unmaintained. The lack of answers was the most worrying piece imho. What version system do you use internally? It'd make sense to move from cvs. Sourceforge also offers SVN, Git, Mercurial and Bazaar, and there are many code hosters out there too.
From: Jonathan D. <im...@ya...> - 2010-01-31 07:37:00
|
Simply put, the last couple of years has been hell for me. I don't know specifically why none of the other developers have kept the momentum but I have no doubt there are reasonable reasons. If anyone feels they could keep a good pace, email me your sourceforge id and I'll add you as a release admin. Also, the cvs repo is a pain so there are many updates in releases absent from the repo. |
From: Keisial <ke...@gm...> - 2010-01-31 00:04:57
|
Aleksey wrote:
> Hi Guys!
>
> There is a mistype on the http://mhash.sourceforge.net/ page.
> The GOST algorithm is incorrectly listed as a 512-bit hash function.
>
> But GOST is 256-bit! Really! No jokes. :)
>
> It processes message by 256-bit blocks and generates 256-bit hash.
> See http://en.wikipedia.org/wiki/GOST_%28hash_function%29
> and http://ehash.iaik.tugraz.at/wiki/GOST for details.

They also still list an ALDER-32 algorithm which is in fact ADLER-32... which I reported here 16 months ago. This mailing list seems dead; the most recent repository change was 2 years ago (and it was just taking /nmav/ out as maintainer!).
From: Aleksey <ale...@gm...> - 2010-01-30 22:42:04
|
Hi Guys! There is a mistype on the http://mhash.sourceforge.net/ page. The GOST algorithm is incorrectly listed as a 512-bit hash function. But GOST is 256-bit! Really! No jokes. :) It processes message by 256-bit blocks and generates 256-bit hash. See http://en.wikipedia.org/wiki/GOST_%28hash_function%29 and http://ehash.iaik.tugraz.at/wiki/GOST for details. |
From: Aleksey <ale...@gm...> - 2010-01-29 21:47:41
|
Hello! I'm new here! :)

The GOST R 34.11-94 algorithm in libmhash doesn't work as expected. The following tests fail:

    GOST( <100000 characters of 'a'> ) = 5C00CCC2734CDD3332D3D4749576E3C1A7DBAF0E7EA74E9FA602413C90A129FA
    GOST( <128 characters of 'U'> )    = 53A3A3ED25180CEF0C1D85A074273E551C25660A87062A52D926A9E8FE5733A4

The bugfix-gosthash.patch file is attached. It reverts the problem code block back to Markku-Juhani Saarinen's reference implementation (http://www.autochthonous.org/crypto/) and adds the two tests above to the src/hash_test.sh script.

The problem occurs due to a bug in the cumulative sum calculation (256-bit integer arithmetic). In mhash-0.9.9.9/lib/gosthash.c, function gosthash_bytes(), when

    ctx->sum[] = ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffff9f
               + 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616161

we get

    mhash sum[]    = 61616161 61616160 61616161 61616160 61616161 61616160 61616161 61616100
    expected sum[] = 61616161 61616161 61616161 61616161 61616161 61616161 61616161 61616100

The cause is this libmhash code:

    c = a + c + ctx->sum[i];
    if ((a == 0xFFFFFFFFUL) && (ctx->sum[i] == 0xFFFFFFFFUL))
    {
        ctx->sum[i] = c;
        c = 1;
    }
    else
    {
        ctx->sum[i] = c;
        c = c < a ? 1 : 0;
    }

When c=1 and a=0xFFFFFFFFUL we obtain c=0, although c=1 is expected!
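The corner case Aleksey pinpoints (an incoming carry together with a word of 0xFFFFFFFF makes `c < a` report no carry) disappears if each per-word addition is done in a 64-bit intermediate, where the true carry is just the high half. This is a sketch of the arithmetic only, not the actual gosthash.c patch; words are stored least-significant first here.

```c
#include <stdint.h>

/* Add the 256-bit value b into the 256-bit accumulator sum, both stored
 * as eight 32-bit words, least-significant word first. The sum is taken
 * modulo 2^256, matching GOST's running checksum register. */
void add256(uint32_t sum[8], const uint32_t b[8])
{
    uint64_t carry = 0;

    for (int i = 0; i < 8; i++) {
        /* max is (2^32-1) + (2^32-1) + 1 < 2^64, so this never overflows */
        uint64_t t = (uint64_t)sum[i] + b[i] + carry;
        sum[i] = (uint32_t)t;
        carry  = t >> 32;       /* exact carry, no special cases needed */
    }
}
```

Feeding it Aleksey's failing operands reproduces his "expected sum[]" row, including the carry that ripples through every 0xFFFFFFFF word.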