I want to run a process inside a C++ program and be able to capture its stdout and stderr during the process's lifetime (I have figured out the stdin part). For that, I am using Boost.Process (1.81.0) on Ubuntu 22.04, but I want the solution to be cross-platform. Ultimately, I want to build my own ssh (just for fun), so I need to be able to control the shell's stdout and stderr. I launch test_program inside process_control and want to see live stdout and stderr output, but it is captured only after test_program terminates, which happens when I feed end as input. Here are the code samples of the mentioned programs:
process_control.cpp
#include <boost/process.hpp>
#include <boost/process/pipe.hpp>
#include <boost/asio/io_service.hpp>
#include <functional>
#include <string>
#include <thread>
#include <vector>
#include <iostream>

int main() {
    using namespace boost;
    std::string output{};
    std::string error{};
    asio::io_service ios;

    std::vector<char> vOut(128 << 10);
    auto outBuffer{asio::buffer(vOut)};
    process::async_pipe pipeOut(ios);
    std::function<void(const system::error_code &ec, std::size_t n)> onStdOut;
    onStdOut = [&](const system::error_code &ec, size_t n) {
        std::cout << "onSTDOUT CALLED.\n";
        output.reserve(output.size() + n);
        output.insert(output.end(), vOut.begin(), vOut.begin() + static_cast<long>(n));
        if (!ec) {
            asio::async_read(pipeOut, outBuffer, onStdOut);
        } else {
            std::cout << "STDOUT ERROR\n";
        }
        std::cout << output << "\n";
    };

    std::vector<char> vErr(128 << 10);
    auto errBuffer{asio::buffer(vErr)};
    process::async_pipe pipeErr(ios);
    std::function<void(const system::error_code &ec, std::size_t n)> onStdErr;
    onStdErr = [&](const system::error_code &ec, size_t n) {
        std::cout << "onSTDERR CALLED.\n";
        error.reserve(error.size() + n);
        error.insert(error.end(), vErr.begin(), vErr.begin() + static_cast<long>(n));
        if (!ec) {
            asio::async_read(pipeErr, errBuffer, onStdErr);
        } else {
            std::cout << "STDERR ERROR\n";
        }
        std::cout << error << "\n";
    };

    process::opstream in;
    process::child c(
        "test_program",
        process::std_out > pipeOut,
        process::std_err > pipeErr,
        process::std_in < in,
        ios
    );

    asio::async_read(pipeOut, outBuffer, onStdOut);
    asio::async_read(pipeErr, errBuffer, onStdErr);
    std::jthread t{[&ios] { ios.run(); }};

    std::cout << "STARTING LOOP: \n";
    do {
        std::string input_command{};
        std::cout << "ENTER INPUT: ";
        std::getline(std::cin, input_command);
        if (c.running()) { // to prevent SIGPIPE if the process dies during input
            in << input_command << std::endl;
        }
        std::this_thread::yield();
    } while (c.running());
    return 0;
}
test_program.cpp
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <string>
#include <thread>

using namespace std::chrono_literals;

int main() {
    std::cout << "Started program.\n";
    while (true) {
        std::cout << "Something\n";
        std::cerr << "error stream\n";
        std::this_thread::sleep_for(0.5s);
        if (std::rand() % 3 == 0) {
            std::cout << "Waiting for input...\n";
            std::string input{};
            std::getline(std::cin, input);
            std::cout << "Got input: \"" << input << "\"\n";
            if (input == "end") {
                break;
            }
        }
    }
    return 0;
}
And the example output is shown in a screenshot (not reproduced here).
How can I capture stdout and stderr during the process's (in this case test_program's) lifetime? What am I doing wrong here?
I also want to merge stdout and stderr into one output while keeping chronological order, but I guess that could be done by passing the same buffer.
I also tried redirecting the streams in the shell like this:
bash -c './test_program 2> stdout.txt 1> stderr.txt'
and it worked fine, but the same did not work when I tried it from C++ code:
process::child c(
    "bash -c './test_program 2> stdout.txt 1> stderr.txt'",
    process::std_in < in,
    ios
);
and got this output:
STARTING LOOP:
ENTER INPUT: 2>: -c: line 1: unexpected EOF while looking for matching `''
2>: -c: line 2: syntax error: unexpected end of file
ls
end
or
std::vector<std::string> args{{"-c"}, {"'./test_program 2> stdout.txt 1> stderr.txt'"}};
process::child c(
    "bash", process::args(args),
    process::std_in < in,
    ios
);
and got this output:
terminate called after throwing an instance of 'boost::process::process_error'
what(): execve failed: No such file or directory
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
But redirecting to two separate files wouldn't really work for me, since I would like to maintain chronological order. So when I tried
bash -c './test_program 2> merged_output.txt 1> merged_output.txt'
I was not surprised that the output did not look right either.
EDIT:
I figured it out. For future reference, you can simply create and use a FILE instance, like this:
std::unique_ptr<FILE, decltype(&fclose)> p_stdout{fopen("output.txt", "w+"), fclose};
process::child c(
    "test_program",
    process::std_out > p_stdout.get(),
    process::std_err > p_stdout.get(),
    process::std_in < in,
    ios
);
and then open the same file in read mode
std::unique_ptr<FILE, decltype(&fclose)> read_file{fopen("output.txt", "r"), fclose};
to read from it. You have to reopen it every time you want the updated state, so I am not sure whether this approach is clean, but it works.
2 Answers

Since test_program never outputs a std::flush or std::endl (which also performs a flush), its output, when going to a pipe, is buffered internally and will only be flushed when the process calls exit. If you want lines to be flushed sooner, use std::endl instead of '\n' (that is what it is for), or an explicit std::flush at the points where you want the output to appear.

I'm not sure what you figured out, but I think the most important thing is that async_read does not complete unless the pipe is closed or the buffer is full. That makes no sense considering what you're doing. Removing some duplication, I'd use async_read_some instead.

Live On Coliru (using an essentially unmodified test_program).

Local side-by-side demo to avoid the jumbling that happens on Coliru: on the right is test_program run directly, on the left through a Boost.Process child. The left side in text: