How to read/handle large Automise log files?

I have some rather long-running and complicated Automise processes, which create some very large log files (500 MB-1 GB) - how do most people read/access these logs?

Most of my scripts export the log to HTML when there's an error and email me the log, but often it's so large it can't be emailed. I've also noticed that if I only export errors from the log, it doesn't give me any frame of reference leading up to the error (variable values or the actions immediately before the error). I've made suggestions to extend the export log action to allow exporting X number of lines/actions before the error, but as that doesn't exist, I'm kinda stuck - waiting 5 hours for a script to run via the GUI to see the exact error is just not an option...

Thanks, TJ

Hi TJ

I would recommend exporting to XML and then compressing the file.

I have just created a project here that loops a couple of hundred thousand times writing messages to the log. After a few hundred thousand messages have been sent, I raise an exception. In the OnFailure action list I did the following: exported the log to HTML, exported the log to XML, created a zip of the XML file (using the highest compression level), then created a zip containing the previous zip (again using the highest compression level).

Now when I look in the project directory I can see the following things:
- Automise log file is 286 MB
- HTML log I exported is 200 MB
- XML log I exported is 105 MB
- Zip file containing the XML log is 1.3 MB
- Zip file containing the previous zip file is 255 KB
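
If you ever want to reproduce that double-compression step outside of Automise's own zip actions, here is a minimal sketch of the idea in Python (the file names are just placeholders, not anything Automise produces by default):

```python
import zipfile

# Hypothetical file names - substitute the path of your exported XML log.
xml_log = "ProjectLog.xml"
first_zip = "ProjectLog.xml.zip"
second_zip = "ProjectLog.xml.zip.zip"

# First pass: zip the exported XML log at the highest compression level.
with zipfile.ZipFile(first_zip, "w",
                     compression=zipfile.ZIP_DEFLATED, compresslevel=9) as zf:
    zf.write(xml_log)

# Second pass: zip the zip itself, again at the highest compression level.
with zipfile.ZipFile(second_zip, "w",
                     compression=zipfile.ZIP_DEFLATED, compresslevel=9) as zf:
    zf.write(first_zip)
```

The second pass only pays off when the first archive still contains a lot of redundancy, so your mileage may vary on less repetitive logs.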

When exporting the logs I exported the entire log (not just the errors). The advantage of exporting to XML is that, if you later want to investigate, you can use the XML Transform action within Automise along with the HTML stylesheet (located at Automise 3\Stylesheets\ConvertLogToHTML.xsl) to convert the file to the HTML version of the log.
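
The same conversion can also be done outside of Automise if you ever need to inspect a log on a machine without it installed. Assuming the stylesheet is plain XSLT 1.0 (which is what lxml supports, and the path below is only a guess at a default install location), a short Python sketch along these lines would produce the HTML version of the log:

```python
from lxml import etree

# Hypothetical paths - point these at your exported log and the stylesheet
# that ships with Automise (the install location may differ on your machine).
xml_log = "ProjectLog.xml"
stylesheet = r"C:\Program Files\Automise 3\Stylesheets\ConvertLogToHTML.xsl"

# Load the exported log and the stylesheet, then apply the transform.
transform = etree.XSLT(etree.parse(stylesheet))
html_log = transform(etree.parse(xml_log))

# Write the HTML version of the log next to the XML export.
with open("ProjectLog.html", "wb") as f:
    f.write(etree.tostring(html_log, pretty_print=True))
```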

Let me know if you want me to send you this project so you can see for yourself.

Regards,
Steve

Thanks for the reply.

The first thing I'm wondering is how unique those log entries are/were, given the great compression you got. In my real-world scenario, my logs are larger and there is very little duplication (at least at the log level - this script is doing a complex database scrubbing and replication process, so it hits a lot of individual rows in a fair number of tables/views). I'll certainly try the XML export and compression tomorrow (it's 2am now), but I do know that I have already tried all the formats with and without compression, and I recall I wasn't that happy with the results of any of the approaches (due simply to the sheer size of the logs).

Hopefully this example and your kind help will advance the cause of having the log export action built out a little more to return only X number of entries prior to the error taking place (or simply return X number of entries, or a period of time, from the bottom of the log).

TJ