Best way to run 24/7 scripts
Posted by ReliablePlay@reddit | learnprogramming | 51 comments
Hey, let's say I have some Python scripts that I'm currently running manually every day. What would be the best way to make them run once a day without user intervention? I already have a remote 24/7 server running Windows Server. Should I just use Task Scheduler with a try/except block around the whole code, plus an email-sender function in the except clause of each script so that I get notified if something's wrong? Are there better ways to do that?
trelayner@reddit
alerting only when something goes wrong doesn’t work
worst case, your code didn’t run at all, maybe the server died
you really need the monitoring system to run independently from the server, and alert you if it hasn’t seen a successful run recently
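A minimal sketch of that kind of independent check, assuming the job touches a heartbeat file on shared storage after each successful run (the path and threshold are placeholders):

```python
# checker.py -- run this on a *different* machine than the job itself
import time
from pathlib import Path

HEARTBEAT = Path(r"\\shared\heartbeats\daily_job.txt")  # job touches this on success
MAX_AGE = 26 * 60 * 60  # seconds; alert if no successful run in ~26 hours

age = time.time() - HEARTBEAT.stat().st_mtime
if age > MAX_AGE:
    # wire this up to email/SMS; printing is just the placeholder
    print(f"ALERT: daily_job heartbeat is {age / 3600:.1f} hours old")
```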
anon7777A@reddit
You can use crontab, or systemd timers if your system runs systemd
skeeter72@reddit
Task Scheduler with something like `python C:\Scripts\foo.py > C:\Scripts\foo.log 2>&1` to capture output.
ReliablePlay@reddit (OP)
What about email notifications on error? Is my proposal of one massive try/except good enough?
prawnydagrate@reddit
in the script, write a function which takes an error and sends an email using SMTP
then whenever you encounter an error, call the function
don't use a massive try/except; instead, just use try/except when you're doing something that could fail
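A minimal sketch of that, using the standard library's smtplib (the server, addresses, and credentials are placeholders):

```python
import smtplib
from email.message import EmailMessage

def send_error_email(error: Exception) -> None:
    # Build a short notification email from the exception
    msg = EmailMessage()
    msg["Subject"] = f"Script failure: {type(error).__name__}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(str(error))
    # Placeholder SMTP server and credentials
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("alerts@example.com", "app-password")
        server.send_message(msg)
```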
ReliablePlay@reddit (OP)
Would it be a good idea, in addition to the smaller try/excepts, to also wrap the whole main in a try/except to catch unrecognized exceptions?
prawnydagrate@reddit
maybe write a function which tries smth and returns the result, otherwise calls the email function
then you can call the trying function instead of try-except blocks every time
Imperial_Squid@reddit
Something like this?
Ngl, feels over-engineered to save you all of 3 lines somewhere else, plus you now need to remember what coming across `fail_func` means every time. Putting reused code in functions is generally good practice, but I don't know that this is enough functionality to make it worth it...
prawnydagrate@reddit
I was thinking smth like this; it saves time and reduces repetition, especially if you have a lot of tasks that could fail
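Roughly along these lines (a sketch; `send_error_email` is the helper from above, and `my_func`/`param1` are stand-ins for your own code):

```python
def fail_func(task):
    """Run a zero-argument callable; email and bail out on any failure."""
    try:
        return task()
    except Exception as e:
        send_error_email(e)
        exit(1)  # sys.exit(1) is the more robust form in scripts

# usage: freeze the arguments with a lambda
result = fail_func(lambda: my_func(param1))
```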
Imperial_Squid@reddit
That's fair! I think the `exit(1)` might be optional, but otherwise I like it. Also: `lambda: my_func(param1)`, I don't know why I never considered using lambda as a way to pass in a function with its parameters already set, but I really like that! I was thinking you'd have to do some args/kwargs packing and unpacking, but this is a super elegant way to do the same thing 👌
prawnydagrate@reddit
thanks, but lol somehow I didn't consider args kwargs stuff
wouldn't that be better than a new lambda on every line? or is lambda more python-like?
Imperial_Squid@reddit
Honestly I think it mostly depends on your taste personally; both are pretty valid ways to code things, but you could also say one is better than the other depending on the project:
One good argument for using lambda over args/kwargs is that if you want any parameters to be used in the error-catching function and not passed into the error-causing function (eg, whether to send an email, whether to exit the program, etc), then those parameters need to be filtered out of the args/kwargs, which adds extra steps. So lambda expressions would support more complex behaviour in the error-catching function more easily.
On the other hand, args/kwargs is simpler in terms of its syntax, and due to the lack of lambda expressions is probably easier to debug if things go wrong.
So if you're working on a more complex/mature project, I think it's worth putting in the lambda expressions version, especially for that complex error catching logic aspect. But if it's a relatively small/simple/one off/etc project, the args/kwargs version will serve you perfectly well.
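To make the comparison concrete, a sketch of both variants (`send_error_email`, `my_func`, and `param1` are the placeholders from above):

```python
# args/kwargs version: simple syntax, but wrapper-only options
# (send_email, exit_on_fail, ...) would have to be filtered out of kwargs
def fail_func_args(task, *args, **kwargs):
    try:
        return task(*args, **kwargs)
    except Exception as e:
        send_error_email(e)

# lambda version: the task's arguments are frozen inside the lambda,
# so the wrapper's own parameters stay cleanly separate
def fail_func_lambda(task, send_email=True):
    try:
        return task()
    except Exception as e:
        if send_email:
            send_error_email(e)

fail_func_args(my_func, param1)
fail_func_lambda(lambda: my_func(param1))
```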
prawnydagrate@reddit
ahh yeah that makes sense.
however, Miserable_Double2432's solution might actually be better than this whole idea
Imperial_Squid@reddit
Ehh, maybe.
It's definitely nice to have a catch all option, but what it gains in universality, it loses in customisability. Since the only things that get passed into the exception handler are the exception type, exception value and traceback, unless you're storing the state of the program somewhere, you might not know what was happening when the exception hook gets called.
Not to mention, it only gets called just before exiting the program entirely, so if you were mid way through doing something, you might not be able to finish it properly.
I like it as an option, but I definitely think you should use it in conjunction with one of the above error function ideas.
Miserable_Double2432@reddit
You should use a try/except for things which could fail and where you can do something about the failure.
You can use sys.excepthook to execute a function whenever there’s an uncaught exception. In this case to notify someone. (This is how sentry.io works)
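A minimal sketch of the excepthook approach (reusing the hypothetical `send_error_email` helper from above):

```python
import sys

def notify_on_crash(exc_type, exc_value, exc_traceback):
    # Let Ctrl+C pass through without sending an alert
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    send_error_email(exc_value)  # hypothetical notifier from above
    sys.__excepthook__(exc_type, exc_value, exc_traceback)  # still print the traceback

sys.excepthook = notify_on_crash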
prawnydagrate@reddit
hmm wow this actually might be the best solution
anonymousxo@reddit
wow yes ty
HonestyReverberates@reddit
Use Python's smtplib or an external library like yagmail for sending email notifications in case of errors.
Other suggestions: implement log rotation using logging.handlers.RotatingFileHandler to avoid bloating your disk space, and add a retry mechanism for transient issues like network failures.
You could also use APScheduler instead of task scheduler.
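A sketch of the rotation and retry ideas (file sizes, attempt counts, and delays are arbitrary placeholders):

```python
import logging
import time
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("daily_job")
handler = RotatingFileHandler("daily_job.log", maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def with_retries(task, attempts=3, delay=30):
    # Retry transient failures (e.g. network blips) before giving up
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except OSError as e:
            logger.warning("attempt %d failed: %s", attempt, e)
            if attempt == attempts:
                raise
            time.sleep(delay)
```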
misplaced_my_pants@reddit
https://en.wikipedia.org/wiki/Cron if it's a unix system.
narco113@reddit
Check out Healthchecks.io
The laziest implementation is to drop a REST call at the end of your script that hits a URL they provide you; if your script fails, the call is never made. Healthchecks.io is configured to expect that unique URL call within the window you set for the monitor, and if the call doesn't arrive in time, it sends you an email alert (or SMS, or a Teams webhook, or a dozen other methods of alert).
It's very impressive and I just started using it on my team to monitor dozens of Task Scheduler scripts we already have in production.
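The ping itself is one line at the end of the script; a sketch, with the check URL as a placeholder (Healthchecks gives you a unique one per check):

```python
import urllib.request

HEALTHCHECK_URL = "https://hc-ping.com/your-check-uuid"  # placeholder URL

def run():
    ...  # the actual daily work

if __name__ == "__main__":
    run()
    # Only reached if run() didn't raise; a missed ping triggers the alert
    urllib.request.urlopen(HEALTHCHECK_URL, timeout=10)
```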
cottonycloud@reddit
I personally like to use a catch-all at the program entry point because I specifically want the program to terminate on any error.
skeeter72@reddit
If I were able to (i.e., probably not through your corporate firewall, if that's the case), I'd probably handle the notification in the script with smtplib
CommunicationTop7620@reddit
cronjobs? for example using crontab in Linux
sproengineer@reddit
Some ideas:
- Cron job: Linux's built-in scheduler
- Kubernetes CronJob: same basic principles
- Argo Workflows: can get fancy with the task at hand
sweet_dandelions@reddit
Use Linux
LodosDDD@reddit
Get a Raspberry Pi if you want it local
Greedy_Novel_1096@reddit
AWS Lambda has a great free tier. Combine it with an EventBridge scheduler. Good exposure to AWS
Miserable_Double2432@reddit
I would recommend having a separate job, or jobs, which verify if the others have executed correctly.
Google’s advice in the SRE book is to focus on symptoms, not causes. That is, you should think about how you can tell if your program has or hasn’t done the job it’s supposed to do, rather than trying to predict all the ways that it might fail.
For instance if you know that you should always have 12 new files after a successful run, and you only have 11, then notify the operator. For the notification you only need to know that something happened. What the issue was you can work out from the logs. (You should log the output of your scripts). This will catch problems where the job never even started, and therefore didn’t throw an exception.
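As a sketch, a separate verifier job for that example might look like this (the directory, expected count, and `notify_operator` helper are placeholders):

```python
from datetime import date, datetime
from pathlib import Path

OUTPUT_DIR = Path(r"C:\Scripts\output")  # placeholder
EXPECTED = 12

def files_created_today():
    today = date.today()
    return sum(
        1 for f in OUTPUT_DIR.iterdir()
        if f.is_file() and datetime.fromtimestamp(f.stat().st_mtime).date() == today
    )

if __name__ == "__main__":
    found = files_created_today()
    if found < EXPECTED:
        notify_operator(f"expected {EXPECTED} files today, found {found}")  # hypothetical
```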
It might be overkill, but I will also add that PagerDuty has a free tier, which is usually simpler than trying to get SMTP working reliably. PD’s notifications can go to email if your process requires it, but people will miss the email at some point. (Other incident response services are available)
randomjapaneselearn@reddit
you can log any error to disk, send it by email, or whatever you want...
the point is that after that, the script will exit.
you can catch that exit in Task Scheduler and run the script again if it crashed; you'll need to set up an event in the Windows Task Scheduler and "enable task history for all tasks" for the event to work.
few links:
https://stackoverflow.com/questions/53887864/how-get-task-scheduler-to-detect-failed-error-code-from-powershell-script#70437885
https://superuser.com/questions/615321/task-scheduler-event-when-an-application-ended
https://superuser.com/questions/1278486/acting-on-exit-code-in-windows-task-scheduler
you can also make a simple Python script that launches the other one and monitors whether it's running, instead of one giant try/except (sketch below).
depends on what you need to do
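A sketch of that watchdog idea (the script path is a placeholder):

```python
import subprocess
import sys
import time

# Re-run the worker script until it exits cleanly
while True:
    result = subprocess.run([sys.executable, r"C:\Scripts\foo.py"])  # placeholder path
    if result.returncode == 0:
        break
    time.sleep(60)  # brief pause before restarting a crashed run
```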
Qlearr@reddit
I believe running a pipeline on gitlab would do the trick
FancyJesse@reddit
Task Scheduler in Windows.
cron in Linux.
And rather than an email, if you use Discord or similar, use a webhook to get notified
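A sketch of a Discord webhook notification using only the standard library (the webhook URL is a placeholder you create in your server's settings):

```python
import json
import urllib.request

WEBHOOK_URL = "https://discord.com/api/webhooks/your-webhook-here"  # placeholder

def notify(message):
    # Discord webhooks accept a JSON body with a "content" field
    data = json.dumps({"content": message}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)
```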
mxldevs@reddit
Cron jobs. Mail API for sending notifications.
reverendloc@reddit
GitHub actions can be run on a daily schedule.
You can build a pipeline directly in GitHub and run it!
aplarsen@reddit
Task scheduler
Add some logging to a file
Put your code in a function and wrap that in a try/except block that will notify you if something pukes
I use this pattern on dozens of tasks that run daily
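That pattern, as a sketch (logging config omitted; `send_error_email` is the hypothetical helper from earlier in the thread):

```python
import logging

def main():
    ...  # the actual daily task

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logging.exception("daily task failed")
        send_error_email(e)
        raise  # non-zero exit so Task Scheduler records the failure
```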
Wooden-Donut6931@reddit
I did this in a PHP file, with a timer and routine display.
Fishyswaze@reddit
Task Scheduler, or if you don't mind paying, a serverless function with a timer trigger, like Azure Functions or AWS Lambda.
polymorphicshade@reddit
VM + Linux + Docker is a pretty standard way to host stuff like that.
It's relatively simple to wrap your python stuff in containers.
Plus, you won't have to deal with any Windows crap.
frobnosticus@reddit
I'm sure you could work node.js and a jvm in there as well too.
ReliablePlay@reddit (OP)
I forgot to mention it has to be on Windows, since it's using Windows apps as well
MissinqLink@reddit
Windows task scheduler it is
idubbkny@reddit
Docker Desktop
anonymousxo@reddit
Can you be more specific about what your scripts do?
mishchiefdev@reddit
Just set a cron job that runs the script at a certain interval.
https://phoenixnap.com/kb/set-up-cron-job-linux
DOUBLEBARRELASSFUCK@reddit
Are any of those actually cron jobs? Does Task Scheduler use cron under the hood? That doesn't even seem plausible.
mishchiefdev@reddit
The answer is probably no, since cron is Unix-based. I'd better edit my comment because people are going to get confused. Sorry about that!
Zenalyn@reddit
Windows service on task scheduler
plastikmissile@reddit
Yes, Task Scheduler works just fine for this sort of thing.
OriahVinree@reddit
My home server is ubuntu server, I just use crontab
TheBadTouch666@reddit
I do this, and in the script I use logging functionality to write rolling 60-day log files recording what the script does every time it runs. One log file per day, writing a timestamp and success/failure on every run. You can log any information you want. Some of mine run every 5 minutes, so a line is written to that day's file every 5 minutes.
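That daily-file, 60-day retention setup maps neatly onto the standard library's TimedRotatingFileHandler; a sketch:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("job")
# Roll to a new file at midnight; keep the last 60 days
handler = TimedRotatingFileHandler("job.log", when="midnight", backupCount=60)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("run succeeded")  # one line per run, in that day's file
```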
Prudent_Jelly9390@reddit
Check out Splinterware; it's way better than the built-in Task Scheduler.
iamnull@reddit
Task Scheduler, but be aware that it runs apps in an unusual environment. It can make debugging very difficult, and the results aren't always what you expect. It's a similar situation when setting something up as a service. If you need to interact with graphical applications, this can make things REALLY challenging.
One of the ways I've worked around this is just an application that runs on startup: check the time; if it's not the scheduled time, sleep. If it's near enough, and the last run was longer ago than some timeout, run the scripts, record the last-run time, then sleep.
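A sketch of that startup loop (the hour and `run_all_scripts` are placeholders):

```python
import time
from datetime import datetime

RUN_HOUR = 7  # placeholder: run once daily at 07:00
last_run = None

while True:
    now = datetime.now()
    # Run at most once per day, during the scheduled hour
    if now.hour == RUN_HOUR and last_run != now.date():
        run_all_scripts()  # hypothetical entry point for the daily jobs
        last_run = now.date()
    time.sleep(60)
```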
As far as the email thing, just be sure you're handling errors and passing them up for your email handler.
A lot of this depends on what you're doing. If it can all be run through terminal, task scheduler should do the trick. If it needs to interact with a user session, things can get weird.