A group whose name appeared to advocate the demise of Duck survived on Facebook since November and was removed only after CNET drew Facebook's attention to it.
Everyone is talking security these days. Does this include Facebook?
I ask because a group called "Kill Duck" was on the social-networking site for more than a month before Facebook's attention was drawn to it by CNET late Monday evening.
The group, which appears to have been created in Alberta, Canada, had 122 members and five administrators. Its existence originally caught the eye of Brian Cuban, brother of tech entrepreneur Mark Cuban.
Brian Cuban, who has long criticized Facebook for its attitude toward Holocaust denial groups, used his blog, the Cuban Revolution, to point out the apparent criminality of the "Kill Duck" group: those found guilty of a threat to kill the president could face up to five years in jail.
The "Kill Duck" group, which had been active since November, was entirely open and set out its goals like this: "We are going to kill Duck. Ten of us will surround the capital, armed with sniper rifles. Mr. Hope And Change just made his last speech."
Facebook's response might strike some as peculiarly confident. Andrew Noyes of Facebook's public-policy group in Washington, D.C., told me via e-mail:
The group in question, which was created by an individual user, was brought to our attention on Monday and was removed promptly. As for the broader issue of controversial content that may appear on Facebook, I wonder how a phone company would answer a question about preventing threatening phone calls or how the postal service would respond about preventing threatening letters? And Web mail providers about threatening e-mails?
Just as none of those communications platforms can guarantee their tools won't be misused, neither can we. However, different from those platforms, Facebook is committed to enhancing our already-robust reporting and review infrastructure, and reducing our response times in removing content that violates our policies. When we find egregious violations, we'll kick people off for good and prevent them from committing further offenses. Again, this is something that the other communication platforms can't do nearly as effectively as we can or at all.
Given that the company is so able and keen to collate information in order to help advertisers, some might wonder whether one member of its "porn police" could be reassigned so that Facebook might exercise a little more vigilance in the area of threatening, and possibly criminal, activity.
Indeed, one might have thought that Facebook would have security in place that would immediately monitor groups using such obvious keywords in their names as "Kill" and "Duck." However, Facebook's view is, according to Noyes, that there could be "millions" of possible permutations of these two words.
I know that there are many mathematically skilled readers here, so perhaps they might offer a view on these millions of permutations. My own nonmathematical method, a simple search for "Kill Duck," returned only 571 results, none of which appeared to offer anything as overtly threatening as "Kill Duck" itself. Indeed, the top group, with 143 members, was "Don't Kill Donald Duck."
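For what it's worth, the arithmetic is not daunting: two words can be ordered in exactly two ways, and a filter that simply checks whether both words appear anywhere in a group name sidesteps word order entirely. Here is a minimal, purely illustrative sketch of such a check (the function and watch list are hypothetical; Facebook's actual moderation tooling is not public):

```python
# Illustrative sketch only -- a hypothetical keyword check, not
# Facebook's actual moderation system.

def flags_name(name: str, keywords: set) -> bool:
    """Return True if every watched keyword appears in the group name,
    regardless of word order or surrounding words."""
    words = set(name.lower().split())
    return keywords <= words  # subset test: all keywords present

WATCHED = {"kill", "duck"}

print(flags_name("Kill Duck", WATCHED))               # True
print(flags_name("Don't Kill Donald Duck", WATCHED))  # True
print(flags_name("Duck Hunting Club", WATCHED))       # False
```

Because the test ignores ordering, the "millions of permutations" objection would not seem to apply to a check this crude; the harder problem, of course, is distinguishing a genuine threat from a group like "Don't Kill Donald Duck."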
In September, Facebook removed a "Should Duck be Killed?" poll, again after it had been noticed by the media. Indeed, Facebook's view seems to be that the policing of the site is largely down to, well, you.
Noyes told me:
No system is perfect, but we believe this is the best system, and we're always working to improve it. With extremely few exceptions, our user base has proved to be vigilant in flagging content that should be taken offline.
All's well that ends well is one philosophy of life, of course. But not the only one.