From: Kevin Uhlir <eku@dr...> - 2006-11-02 05:37:34
Has anyone else noticed that with NFS you must set the read size to 1024
for it to work reliably?
The default read size for NFS seems to be 16k (discovered by running tcpdump
on the host machine, not by looking at the gumstix kernel defaults yet).
This seems to make the gumstix choke when reading files. Directory
operations work just fine (of course those are small packets), but when you
try to read a file of 20 or 30k, the gumstix reports:
nfs: server 172.30.1.205 not responding, still trying
nfs: server 172.30.1.205 OK
which is normal NFS behavior when the server is failing or something is
interfering with the data.
Tcpdump on the server revealed retries of 16k worth of NFS packets (10 or
11 ethernet packets). You could see the retries continue as the client saw
some of the packets and not others; eventually everything gets through and
the client is happy.
My first thought was that the gumstix was overflowing from the large number
of packets all in a row that it needed to deal with.
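Some back-of-the-envelope arithmetic (my own, not taken from the capture)
shows why a 16k read turns into a burst of frames. The MTU and header sizes
below are the usual Ethernet/IP values, assumed rather than measured:

```python
import math

# A 16k NFS-over-UDP read reply is one UDP datagram, which the server's
# IP layer fragments to fit the Ethernet MTU. Assuming a 1500-byte MTU
# and a 20-byte IP header (1480 bytes of payload per fragment), and
# ignoring RPC/NFS header overhead:
MTU = 1500
IP_HEADER = 20
READ_SIZE = 16 * 1024

fragments = math.ceil(READ_SIZE / (MTU - IP_HEADER))
print(fragments)  # -> 12 back-to-back frames the gumstix must absorb
```

That's about a dozen frames arriving as fast as the wire allows, in the same
ballpark as the 10 or 11 packets seen in the tcpdump, so a small receive
buffer overflowing is a plausible explanation.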
First I backed the read size off to 4096 (-o rsize=4096); it worked
much better, but I still saw delays from a few lost packets. I then changed
it to 1024 and got reliable operation. I set wsize=1024 as well,
figuring it's a better idea to keep read and write sizes the same.
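For reference, the mount invocation ends up looking something like this
(the export path and mount point are placeholders, not my actual setup):

```shell
# Mount with 1KB read/write sizes; only rsize/wsize matter here.
mount -t nfs -o rsize=1024,wsize=1024 172.30.1.205:/export/gumstix /mnt/nfs

# Equivalent /etc/fstab entry:
# 172.30.1.205:/export/gumstix  /mnt/nfs  nfs  rsize=1024,wsize=1024  0  0
```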
Version Info FYI.
BUILD_DATE='Wed Nov 1 22:18:24 CST 2006'
Any thoughts would be appreciated.