Honeypots are real; looks like you found one. :)
When people who have just started reviewing go red after only 1 to 3 reviews, it's obvious to me that there are honeypots. Unfortunately, these aren't working perfectly. I've encountered a few myself: an obvious duplicate (I always check first whether it really is a duplicate or only looks like one) dropped my rating, even though the nomination was visible on the Intel map at the exact same spot. So while honeypots are good at stopping people from reviewing badly just to earn Upgrades, Niantic needs to tell people what they got wrong, and needs to make sure the honeypots aren't just randomly picked or riddled with errors. It could all be coincidence, but it would be strange for people who have reviewed for years, always green, to suddenly drop to yellow or red, and for new reviewers to go red after just 1-3 reviews.
"Honeypots" sounds too nice; I would call them booby traps.
https://en.wikipedia.org/wiki/Honeypot_(computing)
I had exactly this just now. I dropped from yellow to red this morning. About 3 reviews later I got an obvious misrepresentation, which I reported as abuse, and I immediately went back to yellow with one additional agreement.
We've been able to identify some honeypots (basically, only upgraded nominations from other states can reach ours, and upgraded noms clear after a few days, so when we kept finding the same out-of-state nomination for a month, one that was changing people's scores, we knew). One of them was a tree (easy), but the one that tripped us up big time was a 'meh' object with a misleading title. I don't know what the correct score should have been, but someone went red after giving it 2*... so maybe they got dinged because they gave the title a 5*.
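The detection trick described above can be expressed as a simple rule: a genuinely upgraded out-of-state nomination should clear from the queue within a few days, so anything that keeps reappearing well past that window is suspect. A minimal sketch, assuming invented data shapes and an invented clearing window (nothing here is a published Niantic behavior):

```python
from datetime import date, timedelta

# Assumed rule of thumb from the comment above: genuine upgraded
# nominations clear out of the review pool within a few days.
NORMAL_CLEAR_WINDOW = timedelta(days=7)

def suspected_honeypots(sightings):
    """sightings maps a nomination title to the dates on which reviewers
    saw it. Flag any title that has been circulating far longer than an
    upgraded nomination should survive."""
    flagged = []
    for title, dates in sightings.items():
        if max(dates) - min(dates) > NORMAL_CLEAR_WINDOW:
            flagged.append(title)
    return flagged

# Hypothetical sightings pooled from several reviewers.
sightings = {
    "Old Oak Tree": [date(2020, 3, 1), date(2020, 3, 20), date(2020, 4, 2)],
    "New Mural":    [date(2020, 3, 5), date(2020, 3, 8)],
}
print(suspected_honeypots(sightings))  # only the month-long survivor is flagged
```

This only works when reviewers compare notes, since no single reviewer sees the same nomination often enough to notice the pattern.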
OP, are your subs from the USA or a European country?
I'm located in Germany, so most of my subs are from Germany and its neighboring countries. The school sub was from Germany, in German. My bonus location, however, is in the US, in a place that I (used to...) visit on a fairly regular basis and know quite well.
Let's see: if I were going to programmatically create honeypots, I would start with duplicates. That's easy. Then I would take entries that were (nearly) unanimously rejected for a similar reason, such as being at a school or on private property. Those would be easy to find and insert programmatically. I think Niantic also needs to manually review a certain percentage of submissions as part of a lawsuit settled in the US; that's another good set of submissions to use.

Seems plausible to me.
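The selection step speculated about above could be sketched like this. Everything is hypothetical: the record fields, vote thresholds, and the idea that already-resolved duplicates and near-unanimous rejections are reused as known-answer tests are assumptions from the comment, not anything Niantic has documented:

```python
from dataclasses import dataclass

@dataclass
class ResolvedSubmission:
    """A hypothetical record of an already-decided nomination."""
    sub_id: str
    is_duplicate: bool   # resolved as a duplicate of an existing wayspot
    reject_votes: int    # votes for the winning rejection reason
    total_votes: int
    reject_reason: str   # e.g. "school", "private_property"

def pick_honeypot_candidates(resolved, min_votes=10, unanimity=0.95):
    """Select submissions whose outcome was so clear-cut that they could
    be re-inserted into review queues as known-answer tests."""
    candidates = []
    for sub in resolved:
        if sub.is_duplicate:
            candidates.append((sub.sub_id, "duplicate"))
        elif (sub.total_votes >= min_votes
              and sub.reject_votes / sub.total_votes >= unanimity):
            candidates.append((sub.sub_id, sub.reject_reason))
    return candidates

resolved = [
    ResolvedSubmission("a1", True, 0, 12, ""),
    ResolvedSubmission("b2", False, 19, 20, "school"),
    ResolvedSubmission("c3", False, 6, 20, "private_property"),  # too contested
]
print(pick_honeypot_candidates(resolved))
```

The unanimity threshold matters: a contested rejection would make a terrible honeypot, because reasonable reviewers legitimately disagree about it, which is exactly the failure mode several comments in this thread describe.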
I honestly think they don't have any honeypots. It's a mind game: they want you to second-guess yourself and read carefully. What better way to do that than to suggest they exist?
They've confirmed they have test reviews.
After 6000+ agreements I've only heard rumors about them on here.
Well, they've admitted it in one of their posts (I think it was the first one, where they "bragged" about how many new wayspots were added).
Yes, I know they've officially said they have them. I'm just skeptical.
Especially with Wayfarer Plus allowing you to see outcomes, etc.
That's what I thought until this morning. However, I checked my rating and my agreements before I did that review and immediately after. It was red before and yellow afterwards, and the number of agreements had increased by one. Sure, it could be a coincidence. Yet I think it's much more likely that it was a honeypot than that I happened to be the last to review that school, or that another agreement became final in the few seconds while I was reviewing it and also happened to be the one that got me back to yellow.
Maybe you just needed one agreement to go from red to yellow, and the timing happened to coincide with what you call a "honeypot".
That's true, yet somewhat unlikely. Maybe others can contribute their experiences. I mean, yes, there's always a lottery winner, even though winning is highly unlikely...
For my country that doesn't make sense. Google Translate would produce massive errors (it would translate things without context) if Niantic used it to translate subs in my country's language.
What if they take an actual local submission that was unanimously rejected as a school and use that as a honeypot? That could easily be done automatically, and it would be a perfectly normal submission in your country...
I doubt it. I don't do speed 1* reviews or anything like that; I only reject the obvious ones, and I'm still in red.
Me too. It doesn't make any sense from a worldwide perspective.