Best way to run 24/7 scripts
Posted by ReliablePlay@reddit | learnprogramming | View on Reddit | 33 comments
Hey, let's say I have some Python scripts that I currently run manually every day. What would be the best way to make them run once a day without user intervention? I already have a remote 24/7 server running Windows Server. Should I just use Task Scheduler, wrap each script's whole body in a try/except block, and add an email sender function in the except so I get notified if something goes wrong? Are there better ways to do that?
randomjapaneselearn@reddit
you can log any error to disk, send it by email, or whatever you want...
the point is that after that the script will exit.
you can catch that exit in Task Scheduler and run the script again if it crashed; you'll need to set up an event in the Windows Task Scheduler, and you need to "enable task history for all tasks" for the event to work.
few links:
https://stackoverflow.com/questions/53887864/how-get-task-scheduler-to-detect-failed-error-code-from-powershell-script#70437885
https://superuser.com/questions/615321/task-scheduler-event-when-an-application-ended
https://superuser.com/questions/1278486/acting-on-exit-code-in-windows-task-scheduler
you can also make a simple python script that launches the other one and monitors whether it's running, instead of one giant try/except.
depends on what you need to do
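for example, a rough sketch of that kind of monitor script (the script path, log path, and retry count are just placeholders):
```
import logging
import subprocess
import sys

logging.basicConfig(filename=r"C:\Scripts\monitor.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

SCRIPT = r"C:\Scripts\daily_job.py"  # placeholder path to the real script
MAX_RETRIES = 3

def run_once() -> int:
    # launch the real script as a child process and wait for it to finish
    result = subprocess.run([sys.executable, SCRIPT])
    return result.returncode

if __name__ == "__main__":
    for attempt in range(1, MAX_RETRIES + 1):
        code = run_once()
        if code == 0:
            logging.info("run succeeded on attempt %d", attempt)
            break
        logging.error("script exited with code %d (attempt %d)", code, attempt)
    else:
        # all retries failed: exit non-zero so Task Scheduler's history/event shows a failure
        sys.exit(1)
```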
Qlearr@reddit
I believe running a pipeline on gitlab would do the trick
skeeter72@reddit
Task Scheduler with something like C:\Scripts\foo.py > C:\Scripts\foo.log 2>&1 to capture output.
CommunicationTop7620@reddit
cronjobs? for example using crontab in Linux
ReliablePlay@reddit (OP)
What about email notification on error? Is my idea of one massive try/except good enough?
narco113@reddit
Check out Healthchecks.io
The laziest implementation is to drop a REST call at the end of your script that hits a URL they provide you; if your script fails, the call is never made. You configure a monitor in Healthchecks.io to expect a ping at that unique URL, and if the call doesn't arrive within the window you set, it sends you an email alert (or SMS, or a Teams webhook, or a dozen other methods of alert).
It's very impressive and I just started using it on my team to monitor dozens of Task Scheduler scripts we already have in production.
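The ping itself can be a couple of lines at the end of the script. A rough sketch using only the standard library (the URL is a placeholder for whatever Healthchecks.io gives you):
```
from urllib.request import urlopen

def main():
    ...  # the actual work of the script

if __name__ == "__main__":
    main()
    # only reached if main() didn't raise; if the ping never arrives,
    # Healthchecks.io sends the alert
    with urlopen("https://hc-ping.com/your-check-uuid", timeout=10):  # placeholder URL
        pass
```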
cottonycloud@reddit
I personally like to use a catch-all at the program entry point because I specifically want the program to terminate on any error.
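A minimal sketch of that pattern (main() is a placeholder for the real work):
```
import logging
import sys

def main():
    ...  # the real work

if __name__ == "__main__":
    try:
        main()
    except Exception:
        logging.exception("unhandled error")  # full traceback goes to the log
        sys.exit(1)  # terminate with a non-zero exit code so the failure is visible
```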
prawnydagrate@reddit
in the script, write a function which takes an error and sends an email using smtp
then whenever you encounter an error, call the function
don't use one massive try/except; instead, just use try/except when you're doing something that could fail
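roughly something like this (server, addresses and credentials are placeholders):
```
import smtplib
from email.message import EmailMessage

def send_error_email(error: Exception) -> None:
    # placeholder addresses/credentials; swap in your real SMTP settings
    msg = EmailMessage()
    msg["Subject"] = f"script failed: {type(error).__name__}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(str(error))

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("alerts@example.com", "app-password")
        server.send_message(msg)
```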
prawnydagrate@reddit
maybe write a function which tries something and returns the result, otherwise calls the email function
then you can call the trying function instead of try-except blocks every time
Imperial_Squid@reddit
Something like this?
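A rough sketch of what that wrapper might look like (fail_func and send_error_email are just placeholder names):
```
def send_error_email(error):
    ...  # placeholder: the smtp helper from the comment above

def fail_func(func, *args, **kwargs):
    # run func; on any error, email the exception and re-raise
    try:
        return func(*args, **kwargs)
    except Exception as e:
        send_error_email(e)
        raise

# usage: result = fail_func(do_risky_thing, arg1, arg2)
```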
Ngl, feels over-engineered to save you all of 3 lines somewhere else, plus you now need to remember what coming across fail_func means every time. Putting reused code in functions is generally good practice, but I don't know that this is enough functionality to make it worth it...
anonymousxo@reddit
wow yes ty
skeeter72@reddit
If I were able to do so (i.e., probably not through a corporate firewall, if that's your case), I'd probably back the notification in the script with smtplib.
FancyJesse@reddit
Task Scheduler in Windows.
cron in Linux.
And rather than an email, if you use Discord or similar, use a webhook to get notified
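A rough sketch of the webhook call, using the requests package (the webhook URL and job function are placeholders):
```
import requests

WEBHOOK_URL = "https://discord.com/api/webhooks/your-webhook-id"  # placeholder

def notify(message: str) -> None:
    # Discord webhooks accept a JSON body with a "content" field
    requests.post(WEBHOOK_URL, json={"content": message}, timeout=10)

def daily_job():
    ...  # placeholder for the actual work

if __name__ == "__main__":
    try:
        daily_job()
    except Exception as e:
        notify(f"daily job failed: {e!r}")
        raise
```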
mxldevs@reddit
Cron jobs. Mail API for sending notifications.
reverendloc@reddit
GitHub actions can be run on a daily schedule.
You can build a pipeline directly in GitHub and run it!
aplarsen@reddit
Task scheduler
Add some logging to a file
Put your code in a function and wrap that in a try...catch block that will notify you if something pukes
I use this pattern on dozens of tasks that run daily
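A rough sketch of that pattern (paths are placeholders, and notify() stands in for whatever email/webhook helper you prefer):
```
import logging

logging.basicConfig(filename=r"C:\Scripts\daily_task.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def notify(error: Exception) -> None:
    ...  # placeholder: plug in the smtplib/webhook helper of your choice

def main():
    logging.info("task started")
    ...  # the actual work
    logging.info("task finished")

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logging.exception("task failed")  # traceback goes to the log file
        notify(e)
        raise  # re-raise so the task still shows as failed
```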
Wooden-Donut6931@reddit
I did this in a PHP file, with a timer and routine display.
Fishyswaze@reddit
Task scheduler, or if you don't mind paying, serverless function with a timer trigger like azure functions or aws lambda.
polymorphicshade@reddit
VM + Linux + Docker is a pretty standard way to host stuff like that.
It's relatively simple to wrap your python stuff in containers.
Plus, you won't have to deal with any Windows crap.
frobnosticus@reddit
I'm sure you could work Node.js and a JVM in there as well.
ReliablePlay@reddit (OP)
I forgot to mention it has to be on Windows since it's using Windows apps as well.
MissinqLink@reddit
Windows task scheduler it is
idubbkny@reddit
docker desktop
anonymousxo@reddit
Can you be more specific about what your scripts do?
mishchiefdev@reddit
Just set a cron job that runs the script at a certain interval.
https://phoenixnap.com/kb/set-up-cron-job-linux
DOUBLEBARRELASSFUCK@reddit
Are any of those actually cron jobs? Does Task Scheduler use cron under the hood? That doesn't even seem plausible.
mishchiefdev@reddit
The answer is probably no since CRON is unix based. I better edit my comment because people are going to get confused. Sorry about that!
Zenalyn@reddit
Windows service on task scheduler
plastikmissile@reddit
Yes, Task Scheduler works just fine for this sort of thing.
OriahVinree@reddit
My home server is ubuntu server, I just use crontab
TheBadTouch666@reddit
I do this, and in the script I use the logging functionality to write rolling 60-day log files recording what the script does every time it runs. One log file per day, writing a timestamp and success/failure on every run. You can log any information you want. Some of mine run every 5 minutes, so a line gets written to that day's file every 5 minutes.
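For anyone who wants the same setup, a rough sketch with the standard library's rotating handler (path, format, and job function are placeholders):
```
import logging
from logging.handlers import TimedRotatingFileHandler

# one new log file per day, keep the last 60 days
handler = TimedRotatingFileHandler(r"C:\Scripts\job.log", when="midnight", backupCount=60)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("daily_job")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def do_work():
    ...  # placeholder for the actual job

try:
    do_work()
    logger.info("success")
except Exception:
    logger.exception("failure")
```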
Prudent_Jelly9390@reddit
Check out Splinterware — way better than the built-in Task Scheduler.
iamnull@reddit
Task Scheduler, but be aware that it runs apps in an unusual environment, which can make debugging very difficult, and the results aren't always what you expect. It's a similar situation when setting something up as a service. If you need to interact with graphical applications, this can make things REALLY challenging.
One of the ways I've worked around this is just an application that runs on startup: it checks the time, and if it isn't time yet, it sleeps. If the time is near enough and the last run is older than some timeout, it runs the scripts, records the last-run time, then sleeps.
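A rough sketch of that loop (the run hour, gap, and job function are placeholders):
```
import time
from datetime import datetime, timedelta

RUN_HOUR = 6                     # placeholder: run around 06:00
MIN_GAP = timedelta(hours=20)    # don't run twice in the same window
last_run = datetime.min

def run_scripts():
    ...  # placeholder: launch the actual jobs

while True:
    now = datetime.now()
    if now.hour == RUN_HOUR and now - last_run > MIN_GAP:
        run_scripts()
        last_run = now
    time.sleep(60)  # check again in a minute
```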
As far as the email thing, just be sure you're handling errors and passing them up for your email handler.
A lot of this depends on what you're doing. If it can all be run through terminal, task scheduler should do the trick. If it needs to interact with a user session, things can get weird.