My idea is: AI form submission evaluation.
People and bots are constantly trying to feed garbage into your system.
You need to evaluate whether the submitted content is actually relevant to what you expect, or whether it is someone trying to exploit your system by submitting irrelevant content.
I think these groups of people would benefit from this idea:
Enterprises that use forms extensively
Why I think they would benefit from this idea:
This works against both bots and people trying to exploit your system.
An LLM can evaluate whether a submission is relevant to the field's purpose, rather than merely validating its format.
Any code or resources to support this idea:
I’ve built a similar thing in Drupal:
https://www.drupal.org/project/ai_validations
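As a rough illustration of the idea (not the Drupal module's actual implementation), a relevance check can be sketched as a prompt builder plus a yes/no parser. The `call_llm` callable here is an assumption, a stand-in for any chat-completion client you wire in:

```python
# Hypothetical sketch of LLM-based form submission relevance checking.
# `call_llm` is an assumed callable: prompt string in, model's text reply out.

def build_prompt(field_description: str, submission: str) -> str:
    """Build a prompt asking the model to judge relevance with one word."""
    return (
        "You are validating a web form submission.\n"
        f"The field expects: {field_description}\n"
        f"The submitted text is:\n---\n{submission}\n---\n"
        "Answer RELEVANT if the text plausibly matches what the field expects, "
        "otherwise answer IRRELEVANT. Answer with one word only."
    )

def is_relevant(field_description: str, submission: str, call_llm) -> bool:
    """Return True if the model judges the submission relevant.

    `call_llm` can wrap any provider (hosted API or local model); keeping it
    injectable makes the check easy to test without network access.
    """
    reply = call_llm(build_prompt(field_description, submission)).strip().upper()
    # "IRRELEVANT" does not start with "RELEVANT", so this distinguishes both.
    return reply.startswith("RELEVANT")
```

For example, with a stubbed model you could reject a spammy message on a printer-support form while accepting a genuine question. In a real deployment this would run server-side on submit, alongside (not instead of) conventional spam protection.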
Are you willing to work on this idea?
Perhaps
What skills and resources do you need to explore this further?
Not sure