Research papers found carrying hidden white text giving instructions not to highlight negatives as concern grows over use of large language models for peer review
Those authors should face heavier scrutiny from now on. They do not respect the purpose of the scientific process if they are solely trying to push themselves forward.
This is the biggest issue - peer review is supposed to be about critical analysis and domain expertise, not just following prompts blindly, and no AI today has actual scientific understanding to catch subtle methodological flaws.
Or maybe AI shouldn’t review things? Who knows what they are hallucinating.
Yeah absolutely, but researchers who are attempting to skirt the review process so they only receive positive feedback are not respecting the process either.