I have been using an OLD version of Open-Audit for years and love it, but figure it's time to upgrade.
I installed 1.12.10, and after a bit of trial and error, I have managed to get the audit working on my Linux machines...except for a few.
In the audit_linux script, I basically had to comment out the wget commands and use curl instead. And since I'm using https, I had to add the -k option. That worked beautifully for most of the servers, but the rest of them are giving me this curl error:
Warning: Failed to create the file add_system
curl: (23) Failed writing body (0 != 360) (or simply curl: (23) Failed writing body)
No firewalls in the way...no filesystem full...
Here is the command that gets used, which works perfectly for all other servers (Redhat 6.x, btw):
curl -k -o add_system --data "@clientsystem-20170309155926.xml" "http://openauditsystem/open-audit/index.php/system/add_system"
Thanks Alexander, I'd seen that stackoverflow post, too, but it doesn't do anything for me. All the -s option does here is hide the error along with the progress output.
However, I just solved it...and it was that article that got me looking in the right direction.
So when audit.linux.sh runs, the add_system file that gets created (and that I kept overlooking) actually told me "Invalid XML". After running the XML file through an online linter, I found the problem in the Services check: when it checked mysql status, the init script installed by the application used an old call to "log_success_msg", which emitted the escape characters that give you the green OK or red Failed. Those are apparently invalid characters in XML. I changed the line in the init script to simply echo "MySQL is running" or "MySQL is not running".
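For reference, here's a minimal sketch of the kind of change described above, using a plain pidof check instead of log_success_msg. The function name is hypothetical and the real init script's logic is more involved; the point is just that plain echo output contains no ANSI escape bytes, so the audit XML stays valid:

```shell
# Hypothetical replacement for the colored log_success_msg call in the
# MySQL init script: report status with plain text only.
mysql_status() {
  # Check for a running daemon by name (defaults to mysqld).
  if pidof "${1:-mysqld}" >/dev/null 2>&1; then
    echo "MySQL is running"
  else
    echo "MySQL is not running"
  fi
}

mysql_status
```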
This might be something for the OA developers to look into.
thanks matt, we'll definitely look into that. ansi escape sequences all include the 'esc' ascii code, 0x1b, which isn't allowed in xml - at least not unencoded.
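As an alternative to patching every init script, the audit script itself could strip the offending sequences before building the XML. A minimal sketch — the strip_ansi helper is hypothetical (not part of the stock audit script), and the \x1b escape in the pattern assumes GNU sed:

```shell
# Remove ANSI color sequences (ESC [ ... m) from service-status output
# before it is embedded in XML. ESC is 0x1b, which XML 1.0 forbids.
strip_ansi() {
  sed 's/\x1b\[[0-9;]*m//g'
}

# Example: colored init-script output becomes plain text.
printf '\033[32mOK\033[0m MySQL is running\n' | strip_ansi
# prints "OK MySQL is running"
```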
i suspect that your use of curl fails because it expects to work interactively and normally writes to stdout. wget, on the other hand, usually writes to files. depending on how curl is called (pipeline/backticks etc.) the calling party may very well close whatever file descriptor curl uses as stdout before curl is done with it, and curl then complains about not being able to write the (response) body. there's a discussion of this aspect on stackoverflow.
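that failure mode is easy to reproduce locally: any failure to write the response body (a closed pipe, an unwritable output file, a full filesystem) yields curl exit code 23. a sketch using /dev/full to simulate a full disk (the paths here are arbitrary):

```shell
# Generate a response body larger than the stdio buffer so the failed
# write is detected mid-transfer, then ask curl to save it to /dev/full,
# where every write fails with "no space left on device".
dd if=/dev/zero of=/tmp/fake_response bs=1024 count=64 2>/dev/null
curl -s -o /dev/full "file:///tmp/fake_response"
echo "curl exit status: $?"   # 23, i.e. "Failed writing body"
```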
my suggestion: change the curl calls to include -s -o /tmp/curltmpfile; -s suppresses the progress bar output, and -o tells curl to save the output to the given file.
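putting that together with the command quoted earlier in the thread, the call would look something like this — the url and xml filename are copied from matt's post, the temp-file path is just a suggestion, and the wrapper function name is hypothetical:

```shell
# Submit an audit result to the Open-AudIT server.
# -k skips certificate verification, -s suppresses the progress meter,
# -o saves the server's response so it can be inspected afterwards
# (e.g. for an "Invalid XML" message in the response body).
submit_audit() {
  curl -k -s -o /tmp/curltmpfile \
       --data "@$1" \
       "http://openauditsystem/open-audit/index.php/system/add_system"
}
```

usage would then be `submit_audit clientsystem-20170309155926.xml`, after which /tmp/curltmpfile holds whatever the server sent back.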