From: Tim N. <ti...@si...> - 2021-05-03 21:41:46
Change https://sourceforge.net/p/openocd/code/ci/7dd323b26d93e49e409e02053e30f53ac8138cd5/ cut remote_bitbang performance (when talking to the spike RISC-V simulator) approximately in half. This change "removes the file write stream, replaces it with socket write calls." Performance was better before because the individual byte-sized writes were buffered and didn't result in a system call until fflush() was called.

I assume the right way to fix this is to implement a buffer/flush mechanism inside OpenOCD, on top of the socket calls, so that the optimization also works on Windows. Is that right? Is anybody motivated to take that on?

Tim
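P.S. To make "buffer/flush mechanism" concrete, here is roughly the kind of layer I have in mind. This is only a sketch against plain POSIX sockets; the struct and function names are made up and nothing below is from the OpenOCD tree.

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Hypothetical names; illustrative only. */
#define BITBANG_BUF_SIZE 4096

struct bitbang_conn {
	int fd;                            /* connected socket */
	size_t fill;                       /* bytes currently buffered */
	unsigned char buf[BITBANG_BUF_SIZE];
};

/* Write out everything buffered so far, handling short writes. */
static int bitbang_flush(struct bitbang_conn *c)
{
	size_t off = 0;
	while (off < c->fill) {
		ssize_t n = write(c->fd, c->buf + off, c->fill - off);
		if (n < 0) {
			if (errno == EINTR)
				continue;
			return -1;  /* caller reports the error */
		}
		off += (size_t)n;
	}
	c->fill = 0;
	return 0;
}

/* Queue one byte; only hits the kernel when the buffer fills up. */
static int bitbang_put(struct bitbang_conn *c, unsigned char ch)
{
	if (c->fill == sizeof(c->buf) && bitbang_flush(c) != 0)
		return -1;
	c->buf[c->fill++] = ch;
	return 0;
}

Callers would then emit protocol bytes through bitbang_put() and call bitbang_flush() at the points where the old code called fflush(), e.g. before blocking on a response from spike. On Windows, write() would presumably become send() on the SOCKET, but the buffering itself lives above the socket layer, which is the point.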