The recent survey doubles the number of known geoglyphs, bringing the total to over 700, while showcasing how AI can be used in cultural heritage contexts.
That’s neat, but how do we know they aren’t an AI hallucination backed up by the human brain’s tendency to see patterns? For example, would the same AI search have found the face on Mars?
Hallucinations are an issue for generative AI, but this is a classification problem, not gen AI; this type of use for AI predates gen AI by many years. What you describe is called a false positive, not a hallucination.
For this type of problem you use AI to narrow a large set down to a manageable size. For example, you have tens of thousands of images and the AI flags a few dozen that are likely what you're looking for. Humans would have taken forever to manually review all those images; instead, humans verify just the reduced set and confirm the findings through further investigation.
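A minimal sketch of that triage workflow (all names and scores here are hypothetical stand-ins, not the survey's actual pipeline): a classifier assigns each image a score, and only high-scoring candidates are passed on for human review, so a false positive never enters the confirmed set on the model's word alone.

```python
def triage(scores, threshold=0.9):
    """Return image IDs whose classifier score meets the threshold.

    A false positive (a non-geoglyph scored highly) just lands in the
    human review queue, where a reviewer rejects it.
    """
    return [image_id for image_id, score in scores.items()
            if score >= threshold]

# Hypothetical classifier scores for aerial tiles (trimmed for brevity):
scores = {"tile_0001": 0.12, "tile_0002": 0.97, "tile_0003": 0.41,
          "tile_0004": 0.93, "tile_0005": 0.05}

candidates = triage(scores)
print(candidates)  # only these few tiles go to human verification
```

The point is the reduction in workload: reviewers inspect a handful of candidates instead of the full image set, and every confirmed find still rests on human judgment.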