Can't provide any details due to NDA, sorry.
We're a team working on AI solutions (mostly outsourcing) using NLU, NLP, and NLG.
One of our clients ordered a content-generation product to simplify content creation within their company. After a few milestones it became clear that the customer plans to remove the human-in-the-loop step that validates content quality and to use the product for bulk content generation, not in the ethical way we initially assumed. There also appears to be a high chance of misuse for academic writing.
I feel quite bad about it: we have already delivered an early version of the product, and I'm not sure what to do next. Such misuse is well outside our ethical standards. I'm thinking about using steganography to embed a "signature" in any text created with the product, and sending that signature to appropriate organizations, such as online plagiarism-detection platforms.
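For what it's worth, here is a minimal sketch of the kind of text "signature" I have in mind, using zero-width Unicode characters (a simple form of text steganography). All names are hypothetical, and note that such marks are trivially stripped by copy-paste normalization, so a robust watermark would need a statistical approach instead:

```python
# Hypothetical sketch: embed an invisible "signature" in generated text
# using zero-width Unicode characters. Illustrative only, not a real product.

ZW0 = "\u200b"  # ZERO WIDTH SPACE      -> encodes bit 0
ZW1 = "\u200c"  # ZERO WIDTH NON-JOINER -> encodes bit 1

def embed_signature(text: str, signature: bytes) -> str:
    """Append the signature as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in signature)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract_signature(text: str) -> bytes:
    """Recover the signature from the zero-width characters in the text."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stamped = embed_signature("Generated paragraph.", b"ACME-v1")
assert stamped.startswith("Generated paragraph.")    # visually unchanged
assert extract_signature(stamped) == b"ACME-v1"
```

The obvious weakness is that any tool that normalizes Unicode (or plain retyping) removes the mark, which is why I'm looking for advice on better options.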
Where can I ask for advice?
Is there any organization that can help, like an AI safety hotline?
Thanks