Reading large file from USB HDD causes 100% CPU usage

Hi.

I am running Ubuntu on a BBB rev C with a USB disk attached. I am having trouble accessing large files on the USB disk over SFTP and FTP. When I start a transfer it begins fine, but after some minutes the CPU is at 100% and the transfer slows to a crawl. I have tried different FTP servers, but it makes no difference.

I think I have narrowed it down to a problem with reading from the USB disk.

If I make a large file:
dd if=/dev/zero of=file.txt count=1024 bs=4000000

(which seems to be no problem)
and try to read it with
time sh -c "dd if=file.txt of=/dev/null bs=4k"

the same thing happens: after a while the CPU reaches 100%.
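
If it helps with diagnosis, running something like

vmstat 1

in a second terminal while the dd is going should show whether that 100% is user time, system time, or iowait, but I have not broken it down further than "the CPU is maxed out".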

I have tried using nice and ionice on the process, but it doesn't help.
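
(Roughly like this; the exact priority values here are just examples:

nice -n 19 ionice -c 3 dd if=file.txt of=/dev/null bs=4k

i.e. lowest CPU priority plus the idle I/O scheduling class.)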

I found a discussion of a similar problem (https://bbs.archlinux.org/viewtopic.php?id=112846&p=4) where it was suggested to set /sys/kernel/mm/transparent_hugepage/defrag to madvise.
But I could not figure out how to do that on the BBB (transparent_hugepage doesn’t exist and cannot be created).
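For reference, on kernels that do expose transparent hugepages, the setting would normally be checked and changed with something like:

cat /sys/kernel/mm/transparent_hugepage/defrag
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

but that path simply isn't there on my kernel.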

Any ideas for what I can do?

As I understand it:

USB 2.0 generates an interrupt every 125 µs microframe, i.e. about 8000 interrupts per second (1 s / 125 µs = 8000), and reads and writes involve some spinlock waiting, especially writes.

Erasing a single NAND block takes several milliseconds, so when you write to the USB stick you always end up with useless spinlock waits, and you cannot skip them.
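
If you want to see the interrupt load for yourself, watching the USB host controller's line in /proc/interrupts during a transfer shows how fast its count climbs, e.g. something like

watch -n1 'grep -i usb /proc/interrupts'

(the exact name of the controller's interrupt line depends on the driver).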

What you can do is buy a faster USB stick.