Are any of you actually using AI/ChatGPT for IT asset management tasks? What's working?
Posted by Big_Daddyy_6969@reddit | sysadmin | 10 comments
Been in IT ops for about 6 years, currently managing devices for ~300 remote employees across 14 countries. Last month I started experimenting with prompts for offboarding checklists and procurement justifications after spending an entire Friday manually updating a spreadsheet that should have taken 20 minutes. Some of it's been genuinely useful, some of it is clearly just me talking to a very confident robot. Curious if others have found repeatable use cases or if it's still mostly hype for ITAM work specifically.
CraigSwill@reddit
Great question: As one of the leaders at Retriever, we're currently asking our customers whether they want to incorporate ChatGPT into the offboarding process. Many of our customers already use our API to automate asset retrieval from offboarded employees.
Local-Skirt7160@reddit
We use Jira for onboarding and offboarding, and for assets we have SureAsset in place, which tracks assigned assets, repairs, depreciation, warranty, and the software licenses assigned to that person. So in a nutshell, once we get the Jira ticket for offboarding, we just check SureAsset and recover all the assets. It's pretty simple.
mattberan@reddit
Our approach to this has been to take missing information and slowly enrich the ITAM and CMDB data. Server is missing the warranty expiration date? Maybe we can find that and fill it in.
Totally unique and interesting way to think about AI helping with ITAM!
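A minimal sketch of that enrichment loop, assuming a hypothetical `lookup_warranty` source (the stubbed dict below stands in for a real vendor warranty API or CSV export; all names are illustrative):

```python
from datetime import date

def enrich_assets(assets, lookup_warranty):
    """Fill in missing warranty_expiry fields; leave populated records alone."""
    enriched = 0
    for asset in assets:
        if asset.get("warranty_expiry") is None:
            found = lookup_warranty(asset["serial"])  # e.g. vendor API or CSV export
            if found is not None:
                asset["warranty_expiry"] = found
                enriched += 1
    return enriched

# Stubbed lookup standing in for a real vendor warranty source
warranties = {"SN-001": date(2026, 3, 1)}
assets = [
    {"serial": "SN-001", "warranty_expiry": None},
    {"serial": "SN-002", "warranty_expiry": date(2025, 1, 1)},
]
count = enrich_assets(assets, lambda sn: warranties.get(sn))
```

Only the gaps get touched, so a bad lookup can never clobber data you already trust.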
bitslammer@reddit
I'm not sure I see how AI fits into these processes.
In our org onboarding, offboarding and similar tasks are highly automated with some 'self service' mixed in for IAM. We have our HR systems, ServiceNow (which is our ticketing, change mgmt, CMDB, etc.) and our IAM system all integrated. When someone is hired, moves or leaves a simple request for their manager kicks off all the necessary processes.
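The core of that kind of setup is just an event-to-tasks mapping; here's a rough sketch of the dispatch logic (task names are made up, not real ServiceNow catalog items):

```python
def dispatch_hr_event(event):
    """Map an HR lifecycle event to the downstream tasks it should kick off.
    Task names are illustrative placeholders for catalog items / workflows."""
    tasks = {
        "hire":  ["create_identity", "provision_hardware", "grant_baseline_access"],
        "move":  ["update_cmdb_owner", "adjust_role_access"],
        "leave": ["revoke_access", "schedule_asset_return", "archive_mailbox"],
    }
    if event["type"] not in tasks:
        raise ValueError(f"unknown HR event type: {event['type']}")
    # Each task gets queued against the employee the event is about
    return [(task, event["employee_id"]) for task in tasks[event["type"]]]
```

The hard part, as noted, isn't this mapping; it's agreeing org-wide on what belongs in each list.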
NoTravel407@reddit
How hard was this to do? Sounds good. Did you need to write some of the integrations, or mostly just needed configuration?
bitslammer@reddit
This predated my time, but the Tenable to ServiceNow integration is pretty straightforward as far as getting the data over. It's deciding what to do with that data, and how, that's the real work. There are some defaults, but it's really more of a build-it-out-to-your-needs type of solution.
vgayathri@reddit
Mostly using it to generate the logic for edge cases - things like "if this user is a contractor with an active benefits window, defer the app deactivations but still revoke email access immediately." That kind of branching is tedious to write by hand and AI gets you 80% there fast. Where it falls apart: anything that requires actually hitting an app. The long-tail apps with no API, no SCIM, just a web admin console - AI can describe the steps but can't execute them. That last mile is still the hard part.
Bubby_Mang@reddit
Naur. I've built our own MCP server and agent, but so far it only really excels at advanced searches through our ticketing system, since the bootstrap search feature in Jitbit sucks.
We did have some success in analyzing parameter controls in code, but that's about it.
AI seems to be more of a philosophical tool to help make tools for us so far, beyond of course helping people communicate properly if they weren't capable of that already.
Inanimate_CarbonR0d@reddit
What tasks? Wtf dude. Just get a good MDM and patch management and it's almost set and forget (if you get the set part right, plus a little upkeep here and there).
barrulus@reddit
I think where it does well with this is helping to set up repeatable helper scripts. Python or PowerShell scripts that can automate the tasks for you. I am hesitant to use an LLM to do the work itself where the actual work (updating a spreadsheet) can be more reliably handled procedurally.
Use the LLM to help define what your all-day Friday actually entailed, then use it to create a script that simplifies the process: data capture, validation, cross-checks, checklists, whatever... Then you can run that repeatedly without a) using tokens and b) risking hallucinations.
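For example, the kind of deterministic helper script an LLM can draft for you, here a row validator that flags problems instead of guessing at fixes (the serial pattern and field names are assumptions, adjust to your sheet):

```python
import re

# Illustrative serial format: 6-12 uppercase alphanumerics
SERIAL_RE = re.compile(r"^[A-Z0-9]{6,12}$")

def validate_rows(rows):
    """Procedural checks the LLM helped define, run deterministically:
    flag rows with a missing owner or a malformed serial for human review."""
    problems = []
    for i, row in enumerate(rows):
        if not row.get("owner"):
            problems.append((i, "missing owner"))
        if not SERIAL_RE.match(row.get("serial", "")):
            problems.append((i, "bad serial"))
    return problems
```

Once it exists, it runs the same way every Friday, no tokens and no hallucinations involved.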
I have used LLMs quite a lot in equipment asset management, particularly for asset identification, serial/model/part number extraction, and general text extraction as an additional layer to fix shortfalls in OCR when reviewing site inspection photographs and the various scanned paperwork associated with a site.
Again, this is part of a procedural pipeline and the pipeline includes checks and balances to ensure data anomalies are flagged and handled appropriately.
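A toy version of one such check: pull a serial out of OCR text and attach a flag instead of silently accepting it (the regex and flag names are illustrative, real pipelines match per-manufacturer formats):

```python
import re

def extract_serial(ocr_text):
    """Pull a serial number out of noisy OCR text; return (value, flag).
    Flags route uncertain results to human review rather than auto-accepting."""
    m = re.search(r"S/?N[:\s]*([A-Z0-9-]{6,})", ocr_text, re.IGNORECASE)
    if not m:
        return None, "no_serial_found"      # nothing recognizable: human review
    serial = m.group(1).upper()
    # Common OCR confusions (O vs 0) worth flagging rather than silently "fixing"
    if "O0" in serial or "0O" in serial:
        return serial, "ambiguous_chars"
    return serial, "ok"
```

The point is the second return value: every anomaly is surfaced, so the pipeline's checks and balances have something to act on.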