I have 2 applications that pipe their data:
```
application1 | application2
```
Basically, application1 generates a log of events that application2 processes. The issue is that I frequently update application2, meaning I need to stop it, update the binary, and restart it. During that short window, data from application1 can be lost.
I read about named pipes created with `mkfifo` and thought that could be an ideal solution: keep application1 running and have it write to the file-backed named pipe, so that no data is lost while application2 is updated, and once application2 restarts it picks the data back up.
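Roughly the setup I had in mind (the FIFO path is just an example):

```
mkfifo /tmp/app.fifo

# Writer side: application1 keeps running and writes into the FIFO.
# Its open() blocks until some reader has the FIFO open.
application1 > /tmp/app.fifo &

# Reader side: application2 consumes from the FIFO and can be
# stopped, updated, and restarted.
application2 < /tmp/app.fifo
```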
Testing this with `cat` to emulate the reader and writer, it works until there is no longer a reader, at which point the writer fails. That was unexpected.
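Here is roughly how I tested it:

```
mkfifo test.fifo

# terminal 1 (reader):
cat test.fifo

# terminal 2 (writer):
cat > test.fifo
```

Lines typed in terminal 2 show up in terminal 1 as expected. But as soon as the reader exits, the writer's next write triggers SIGPIPE and it dies with "Broken pipe".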
An alternative would be to use a regular file, but that has issues:
- It remains on disk and does not behave like a FIFO.
- It requires some form of rotation to prevent the file from growing too large.
- AFAIK, when the reader reaches the end of the file (as `tail` does), it has to poll on a timer to see whether the file has grown, which increases processing latency (a sketch of this approach follows the list).
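For reference, the file-based approach I was considering looks roughly like this (the log path is just an example):

```
# Writer appends to a regular file; nothing truncates it,
# so it grows without bound unless I add rotation.
application1 >> /var/log/app1.log &

# Reader follows the file from the first line; tail has to notice
# growth via polling or inotify, which adds latency.
tail -F -n +1 /var/log/app1.log | application2
```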
I’m in control of the reader; its current behavior is already to auto-restart. I’m not in control of the writer; I can only pipe its output to something else.
- Can named pipes be configured to be durable?
- I read about “pinning” the pipe open on the writer side, but I fail to get that to work (my attempt is sketched after this list)
- Can I prevent a pipe from being closed once the reader exits?
- Are there alternatives that behave like a pipe?
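Regarding the pinning idea: my understanding is that holding the FIFO open with an extra read-write descriptor should keep both ends alive, roughly like this, but perhaps I am holding it in the wrong place:

```
mkfifo /tmp/app.fifo

# Hold the FIFO open for both reading and writing from a long-lived
# shell, so there is always at least one reader and one writer.
# (Opening a FIFO O_RDWR is undefined by POSIX but works on Linux.)
exec 3<>/tmp/app.fifo

application1 > /tmp/app.fifo &

# application2 can now exit and restart without the writer
# getting SIGPIPE in between.
application2 < /tmp/app.fifo
```

Even if that works, the kernel pipe buffer is small (64 KiB by default on Linux), so application1 would block once the buffer fills while application2 is down.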