Hi,

first of all, thank you for this great benchmark.

For my research, I've been isolating some operations of the benchmark to study their effects and performance, and I've stumbled across some peculiarities. I've searched through the code unsuccessfully for an answer, and I hope you can nudge me in the right direction…
a) I'm issuing write requests (write_weight = 1, rest = 0) of 1MB size, which overwrite 1MB of data. With 1 thread, DirectIO = 1 and op_delay = 980, I'm observing (with iostat) a 1MB transfer to the device per second. However, the throughput calculated by the benchmark is ~15-30MB/s (doing the same for read and append requests, the throughput is calculated as 1MB/s). Even taking into account the mean response time of a request, which is 17ms, I have no idea how this throughput is calculated…
b) Doing the same for create requests, I observe (with iostat) that ~10-30MB of traffic per second are sent to the device. It seems that it doesn't really respect the op_delay, even if I set this value to 1000. Well, at least the throughput is calculated accordingly in this case…
Any help is greatly appreciated.
Hi,

I think I found the reason for the aforementioned behaviour…
a) If I write 1MB in a 1GB file, it is accounted as a 1GB write, leading to arbitrarily high throughput results. I don't think it is supposed to be like that…
b) Create uses the size_weights of the filesystem, even if I don't specify min/max_filesize. I didn't know that…
I'm going to fix the first malfunction for myself; maybe you are interested in the result… Unfortunately, the absence of code documentation and code comments doesn't help the progress…
Best regards.
Hi,

I think I finally found the source of the problem… In fileops.c, I changed filesize to writesize in the writefile methods, so that each operation is accounted with the bytes it actually wrote. Code:
/* Shared core between ffsb_writefile and ffsb_writefile_fsync. */
static unsigned ffsb_writefile_core(ffsb_thread_t *ft, ffsb_fs_t *fs,
				    unsigned opnum, uint32_t *writesize_ret,
				    int fsync_file)
{
	struct benchfiles *bf = (struct benchfiles *)fs_get_opdata(fs, opnum);
	struct ffsb_file *curfile = NULL;
	int fd;
	uint64_t filesize;
	char *buf = ft_getbuf(ft);
	int write_random = ft_get_write_random(ft);
	uint32_t write_size = ft_get_write_size(ft);
	uint32_t write_blocksize = ft_get_write_blocksize(ft);
	struct randdata *rd = ft_get_randdata(ft);
	unsigned iterations = 0;

	curfile = choose_file_reader(bf, rd);
	fd = fhopenwrite(curfile->name, ft, fs);

	filesize = ffsb_get_filesize(curfile->name);
	assert(filesize >= write_size);

	/* Sequential write, starting at a random point */
	if (!write_random) {
		uint64_t range = filesize - write_size;
		uint64_t offset = 0;

		if (range) {
			offset = get_random_offset(rd, range,
						   fs_get_alignio(fs));
			fhseek(fd, offset, SEEK_SET, ft, fs);
		}
		iterations = writefile_helper(fd, write_size, write_blocksize,
					      buf, ft, fs);
	} else {
		/* Randomized write */
		uint64_t range = filesize - write_blocksize;
		int i;

		iterations = write_size / write_blocksize;
		for (i = 0; i < iterations; i++) {
			uint64_t offset = get_random_offset(rd, range,
							    fs_get_alignio(fs));
			fhseek(fd, offset, SEEK_SET, ft, fs);
			fhwrite(fd, buf, write_blocksize, ft, fs);
		}
	}
	if (fsync_file) {
		if (fsync(fd)) {
			perror("fsync");
			printf("aborting\n");
			exit(1);
		}
	}
	unlock_file_reader(curfile);
	fhclose(fd, ft, fs);
	*writesize_ret = write_size;
	return iterations;
}

void ffsb_writefile(ffsb_thread_t *ft, ffsb_fs_t *fs, unsigned opnum)
{
	unsigned iterations;
	uint32_t writesize;

	iterations = ffsb_writefile_core(ft, fs, opnum, &writesize, 0);
	ft_incr_op(ft, opnum, iterations, writesize);
	ft_add_writebytes(ft, writesize);
}

void ffsb_writefile_fsync(ffsb_thread_t *ft, ffsb_fs_t *fs, unsigned opnum)
{
	unsigned iterations;
	uint32_t writesize;

	iterations = ffsb_writefile_core(ft, fs, opnum, &writesize, 1);
	ft_incr_op(ft, opnum, iterations, writesize);
	ft_add_writebytes(ft, writesize);
}
Best regards