How to set up a socket-based pipe on the BeagleBone Black?

In Linux, we create a pipe using mknod(). I would like to create a buffered pipe (of length L, say) that a remote process can write into using a socket. How can we associate one end of the pipe with a socket?

I may be misunderstanding what you are trying to do, but it seems you want the enhanced named-pipe behavior of Windows (working across a network connection) on Linux. It’s one of the few Windows “enhancements” I’ve actually found useful over the years. But as far as I can tell, to get this behavior on Linux (POSIX) you need to write client/server processes on each end and connect them to your named pipe, in which case I fail to see the point of putting the local named pipes into the mix instead of just reading/writing the network sockets directly.
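For what it’s worth, the client/server-plus-named-pipe glue described above can be sketched with stock tools, using netcat as the listening process. This is a minimal illustration, not a tested recipe; the pipe path and port number here are made up:

```shell
# Create a named pipe (FIFO) for the local reader.
mkfifo /tmp/netpipe

# Listener process: copy whatever arrives on TCP port 5000 into the FIFO.
# (Flag syntax varies between netcat variants; this is the traditional form.)
nc -l -p 5000 > /tmp/netpipe &

# Your application reads the pipe end as if it were any other file.
cat /tmp/netpipe
```

The remote writer would then connect with something like `producer_command | nc <board-ip> 5000`.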

If the message length is not too great, and you want it confined to a local subnet (my usual case), I find UDP works great. TCP ports automatically handle many details and inherently behave FIFO like a pipe would, with automatic error detection/correction and buffering, at the cost of uncertain latencies.

There may be some way to make this work if you have a common NFS share mounted on both systems, but NFS brings along a lot of baggage you probably don’t need.

Hope this helps.
–wally.

I think that’s because there are many different ways to tackle this type of job. Knowing the end goal of the application would be important.

On top of what Wally has said, another option could be sshfs in place of his NFS suggestion. It meets pretty much all the same basic requirements and has a much smaller system footprint. There are also websockets, which again could be used depending on what the final goal is . . .
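For reference, a typical sshfs session looks something like the following; the user name, IP address, and paths are only placeholders for whatever your setup uses:

```shell
# Mount a directory from the Beaglebone onto the local machine over SSH.
mkdir -p ~/bbb-share
sshfs debian@192.168.7.2:/home/debian/data ~/bbb-share

# ... files under ~/bbb-share now read and write over the network ...

# Unmount when finished.
fusermount -u ~/bbb-share
```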

Thank you @Wally and @William.
My goal is to send a continuous data stream from my system; my Beaglebone should be receiving the data serially and then processing it as per my algorithm, without any data loss.
We are using sshfs to mount a directory on beaglebone to our system.


Is sshfs your end solution then? Or do you still want some advice? If you still want more advice, then more information will be needed. We do not need to know exactly what you’re doing, but we would need to know how exactly you’re interacting with the data. On a high-level cursory look, though, I’m betting websockets could be made to work. Which basically means your application development could be incredibly simple, depending on your Javascript skills.

Something else actually just came to mind which I cannot believe I did not think of first: netcat was designed specifically for this sort of thing . . . if you’re unfamiliar with netcat, there are several good free books on the internet, I believe.

@Wally and @William, thank you both for the advice. I will read up on netcat and see how it can be used for my application.

Dhanesh, netcat is pretty much a general-purpose networking tool. It can take stdin as input via the pipe symbol at the command line, and on the opposite end it writes that data back out to stdout.

So as an extremely simple example:

receiving side (the listener):
$ nc -l -p 5000 > /path/to/somefile

This will accept input over the network from a remote system that connects to this one via netcat on port 5000. The shell’s redirection symbol then writes whatever data comes in to a file.

sending side:
$ cat /proc/cmdline | nc 192.168.7.2 5000

This pipes the output of a local command (stdout) to netcat, which in turn sends the data to the specified IP address and port number.

For me, one of the really interesting thoughts behind this process is that on the Beaglebone side of things, the data could be kept entirely in memory by using/creating a tmpfs file. The size cannot be overly large, of course, but I’ve personally used file sizes of 256M with no ill effects, as the applications I ran on that test system used less than 100M total for all processes. Anyway, just something to think about.
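A rough sketch of that tmpfs idea (the mount point and size below are illustrative, and mounting needs root):

```shell
# Create a 256M RAM-backed filesystem so received data never touches
# the SD card, then land the netcat stream in it.
sudo mkdir -p /mnt/ramfs
sudo mount -t tmpfs -o size=256M tmpfs /mnt/ramfs
nc -l -p 5000 > /mnt/ramfs/stream.dat
```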

Another thing I would like to mention, in case it’s not obvious to you: if your application can take stdin input like many standard Linux commands, you would be able to pipe received data from netcat directly into your application . . .
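So, assuming the processing program reads stdin (the name `process_stream` below is hypothetical), the whole receive side collapses to a single pipeline:

```shell
# Receive the stream on port 5000 and feed it straight into the
# algorithm without touching disk; results are redirected to a file.
nc -l -p 5000 | ./process_stream > results.dat
```

The same pipeline shape can be tried locally without the network, e.g. `printf 'abc\n' | tr 'a-z' 'A-Z'` standing in for netcat plus the processing step.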

Dhanesh, netcat is pretty much a general purpose networking tool. It can take stdin as input via using the pipe symbol at the command line, as well as pipe that input on the opposite end to stdout.

So as an extremely simple example:

client side:
$ nc -l -p 5000 > /path/to/somefile

From the nc MAN pages:

-l Used to specify that nc should listen for an incoming connection
rather than initiate a connection to a remote host. It is an
error to use this option in conjunction with the -p, -s, or -z
options. Additionally, any timeouts specified with the -w option
are ignored.

The example in the nc MAN page:

$ nc -l 1234

John,

This is the first and only time I will reply to you now, or in any future post you make on any subject- period.

Stop acting like a child. I’m sure that man page might actually mean something on some Linux somewhere, but it means nothing on the Linux I tested those commands on. I do not know if you pay attention or not, but I do not post commands on the groups here unless I’ve tested them personally, to prove that they work.

So in the end, when you pull BS stunts like you just did here, you a) make yourself look like an idiot, and b) do the OP a disservice by confusing the subject. In the past I’ve played along with your little game, but I’m telling you right here and now that that is going to stop. I will no longer respond to anything you have to say on this forum in the future.

william@eee-pc:~$ man nc
NC(1) NC(1)

NAME
nc - TCP/IP swiss army knife

SYNOPSIS
nc [-options] hostname port[s] [ports] …
nc -l -p port [-options] [hostname] [port]

william@eee-pc:~$ uname -a
Linux eee-pc 3.2.0-4-686-pae #1 SMP Debian 3.2.68-1+deb7u2 i686 GNU/Linux

william@eee-pc:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 7.8 (wheezy)
Release: 7.8
Codename: wheezy

william@eee-pc:~$ nc -l -p 5000 > /home/william/test.log
^C // no error . . .
william@eee-pc:~$

Yep, I see that. Seems to be a difference between Ubuntu nc and Debian nc.

Regards,
John

A TCP/IP connection will guarantee no data loss at the cost of possibly greatly increased latency. I’ve done this in the past using UDP, which, if client and server are on the same subnet, is about as deterministic as standard Ethernet gets, but UDP is generally blocked by default on most firewalls. If the data leaves your subnet you will need error correction, and you will likely just end up reinventing TCP badly.

I’ve done pseudo-simultaneous sampling systems (all the A/D channels are sampled at the maximum rate once per tick of a slower timer that sets the system sampling rate) using this method with the server sending each multi-channel sample to a different system via UDP for further processing at every tick of the sample clock. It works very well when the systems are on the same subnet and the data rate is not too high compared to the network overhead.

If your data rate is low enough the “file system” based solutions might be easier to code and troubleshoot, but you have extra overhead from the network transport and the file system layer to contend with. IMHO the easiest to code, troubleshoot, and document architecture is the way to go as long as all requirements can be met.

The UDP client/server solution can be pretty fast – I’ve controlled a 6-DOF motion base (Stewart Platform) with a 100Hz servo loop where one system calculated the desired motion profile from user input while another did the matrix calculations to set the actuator link lengths for the desired motion, and a third processed the visual scene presented to the user, all talking via UDP on an isolated network (only these three systems were connected).
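For completeness, netcat can exercise the UDP variant of this approach too, via its -u flag (the port and address below are illustrative). Keep in mind there is no delivery guarantee; a lost datagram is simply gone:

```shell
# Receiver: listen for UDP datagrams on port 6000 and log them.
nc -u -l -p 6000 > samples.dat

# Sender: each write becomes (roughly) one datagram; -w1 times out
# after a second instead of waiting around for more input.
printf 'sample-001' | nc -u -w1 192.168.7.2 6000
```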

Hey Wally,

I don’t think TCP/IP would be all that much slower than UDP when transmitting data over the wire on this board. The Beaglebone’s Ethernet is incredibly fast for 10/100 Fast Ethernet. What’s more, yes, by comparison a TCP connection can have more latency than a UDP connection, but latency does not usually matter; what matters most of the time is bandwidth. So the connection speed could be the same, and the data may just arrive a few milliseconds later.

But I’ll have to devise some sort of test using netcat to see what’s really up with it. That should not be too hard to do. I can say that NFS comes really close to the interface’s maximum theoretical speed. But NFS uses neither UDP nor TCP, and if memory serves it operates at layer 2 on some level . . .

In a “streaming”, servo, or tracking application latency matters a lot; bandwidth is usually pre-limited by the sampling rate. It’s usually better to drop or distort a frame than to stop and wait for a re-transmission.

In a soft real-time system a few errors once in a while usually don’t create too bad of a glitch, but when using TCP the retry for error correction can often make the glitch worse. It’s all application dependent, and on a lightly loaded subnet I agree there is usually no significant difference between UDP and TCP within UDP’s limitations (maximum packet size can be a function of the subnet it’s on). But for things like the 6-DOF motion controller I was talking about, using UDP and just dropping any bad data and “catching up” on the next sample works far better than prolonging the glitch to wait for the “correct” data, which is by then too “stale” to care about.

I’ve set up systems both ways, using whichever was most appropriate. Basically, if a decent amount of data buffering is needed in the system, or you need to cross subnets, TCP is almost always the way to go; but if quick response to changes is required, UDP usually works better for soft real-time applications.

Yeah, I was going to say that there would be a big difference between a connection that has two-way comms going over it versus just blasting data in one direction.

@Wally,

So yeah, I do not think data in one direction over netcat would be much of a problem for this hardware.

Host PC :
william@eee-pc:~$ nc -l -p 5000 > /home/william/test.log

Beaglebone:
william@beaglebone:~$ dd if=/dev/zero bs=1M count=1000 | nc 192.168.254.162 5000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 95.5816 s, 11.0 MB/s

NFS is still faster, but not by a whole lot.

Oh, and for the record, NFS is actually faster than iSCSI on this hardware as well :wink:

You might want to look at the NFS docs a bit more. NFS has used UDP forever; TCP was only added in version 4, if memory serves, and it had to be specifically enabled.

Mike

You might want to look at the NFS docs a bit more. NFS has used UDP forever, TCP was only added in version 4 if memory serves, and it had to be specifically enabled.

I do not really care what transport NFS uses at the lower level. NFS in previous kernels was pretty much the fastest network storage protocol (on this hardware). I’ve tested many in the last 3 years or so on this hardware. But here, look . . .

dd to ramdisk just to show how fast dd from /dev/zero can be on this hardware
william@beaglebone:~$ df -h ramfs/
Filesystem Size Used Avail Use% Mounted on
tmpfs 256M 0 256M 0% /home/william/ramfs
william@beaglebone:~$ dd if=/dev/zero bs=1M count=200 of=/home/william/ramfs/test.log
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.00041 s, 210 MB/s

dd to NFS share
william@beaglebone:~$ df -h ti/
Filesystem Size Used Avail Use% Mounted on
192.168.254.162:/home/william/share 136G 41G 88G 32% /home/william/ti
william@beaglebone:~$ dd if=/dev/zero bs=1M count=1000 of=/home/william/ti/test.log
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 108.343 s, 9.7 MB/s

So something to note here: apparently in this test TCP is faster than UDP, since I’m using NFS v3 on the server side and netcat is TCP. But these are also really basic “tests” that may give a decent indication of what is fastest, while not being entirely accurate for different situations.