Talk:Spam blacklist - Meta


snippet for logging
{{sbl-log|3722799#{{subst:anchorencode:SectionNameHere}}}}
  This section is for proposing that a website be blacklisted; add new entries at the bottom of the section, using the basic URL so that there is no link (example.com, not http://www.example.com). Provide links demonstrating widespread spamming by multiple users on multiple wikis. Completed requests will be marked as {{added}} or {{declined}} and archived.

Google redirect spam

Note: This section won't be automatically archived by the bot



Specifically 'google.com/url?'

See http://en.wikipedia.org/w/index.php?title=Wikipedia_talk:External_links&oldid=456669797#Google_redirection_URLs

Explanation:

The first result reads:

[PDF]Public Law 105-298

www.copyright.gov/legislation/pl105-298.pdf

File Format: PDF/Adobe Acrobat - Quick View

PUBLIC LAW 105–298—OCT. 27, 1998. Public Law 105–298. 105th Congress. An

Act. To amend the provisions of title 17, United States Code, with respect to ...

If you right-click on the bolded name of the first result (on 'Public Law 105-298'), and copy the url, you get:

  • http:// www.google.com/url?sa=t&rct=j&q=public%20law%20105-298&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fwww.copyright.gov%2Flegislation%2Fpl105-298.pdf&ei=vmahTvikEoib-gadiZGuBQ&usg=AFQjCNH95AzJoEKz83KrtpLkLXENeJ3Njw&sig2=I_64kGBITluwmGNvw619Cg

Which is how these URLs end up here, and which can be used to circumvent the blacklist. --Dirk Beetstra T C (en: U, T) 13:02, 21 October 2011 (UTC)Reply
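The mechanics can be seen by pulling such a redirect apart: the real destination sits percent-encoded in the `url` query parameter. A minimal Python sketch (the redirect URL is the one above, trimmed for readability; only the parameter names are Google's, the helper name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def extract_target(redirect_url: str) -> str:
    """Return the destination hidden in a google.com/url? redirect."""
    query = parse_qs(urlparse(redirect_url).query)
    # parse_qs percent-decodes, so %3A%2F%2F comes back as ://
    return query.get("url", [""])[0]

redirect = ("http://www.google.com/url?sa=t&q=public%20law%20105-298"
            "&url=http%3A%2F%2Fwww.copyright.gov%2Flegislation%2Fpl105-298.pdf"
            "&usg=AFQjCNH95AzJoEKz83KrtpLkLXENeJ3Njw")
print(extract_target(redirect))
# http://www.copyright.gov/legislation/pl105-298.pdf
```

Because the blacklisted domain only ever appears percent-encoded inside the query string, a blacklist regex that matches raw wikitext never sees it.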

(which are all three meta blacklisted sites). --Dirk Beetstra T C (en: U, T) 13:08, 21 October 2011 (UTC)Reply

(unarchived)
I need some help here, please. This is apparently a problem for all TLDs of google. See en:WT:EL#Google_redirection_URLs.
'google.*?\/url\?' ??
--Dirk Beetstra T C (en: U, T) 10:04, 24 October 2011 (UTC)Reply
Maybe this is a bug in the extension? "redtube.com" was part of the url you tested here. -- 86.159.93.124 17:36, 24 October 2011 (UTC) (seth, not logged in) -- seth 20:17, 1 November 2011 (UTC)Reply
That is what also occurred to me .. anyway it needs blocking as redirect sites should not be used - even when the target is not blacklisted (yet). --Dirk Beetstra T C (en: U, T) 08:29, 25 October 2011 (UTC)Reply
Guys, calm down. This is blocking a very small number of links (a couple of hundred), not the whole of Google. Many regular editors are NOT going to include these links. Normal google links do NOT include the /url? part, there is no need to link there, and like with the other google loophole (which was abused), this is waiting to be abused (if it has not yet been abused). This is not 'making pages impossible to edit' - it makes it impossible to ADD a link; this is not 'screw[ing] with lots of pages' (as I said, just a couple of hundred); bots can't solve this (if it is used to circumvent blacklisting, then the bot can't repair the link anyway), etc. etc. Have a look at what I have been suggesting and what the problem actually is before making such sweeping comments. Thanks. --Dirk Beetstra T C (en: U, T) 07:06, 26 October 2011 (UTC)Reply
Yes, I already realized that you were not blocking all of Google. I made my above objections fully knowing the exact scope of this blacklisting, and still stand by the fact that this solution is overkill and causes more problems than it solves. I will concur that this eliminates the problem you note. It is not, however, a proper solution in that it also prevents good uses of the google.com/url linking. There are perfectly valid methods to stop this abuse; as noted above, someone is already working out a bot solution. The issue here, Beetstra, isn't that you have solved a problem, it's that you have refused to consider alternate solutions which could have far less collateral damage. Your attitude of "I have done this, and you all have to just live with the negative consequences because that's that" isn't terribly helpful. People here have suggested, and are working on, softer ways to fix this problem, and it would be beneficial to try these before merely deciding that your solution is final and cannot be reconsidered, merely because you decided to do it. --Jayron32 15:02, 26 October 2011 (UTC)Reply
What, exactly, would be an example of "good uses of the google.com/url linking"? Anomie 20:24, 26 October 2011 (UTC)Reply
As a quick note, there's really no "good uses" - any use of this link can be seamlessly replaced by a link to the target URL. I don't believe it's been used for any significant amount of use to avoid the blacklist, but that isn't my major concern - having these URLs as external links means that any time a reader follows them, we're handing off some amount of their reading history to Google, which is a definite contravention of the spirit of the privacy policy if not the letter of it. Shimgray 21:34, 26 October 2011 (UTC)Reply
Jayron32 - that is a pretty blunt statement you make. You blatantly say that I did not consider other methods of stopping this. First, there is not a single reason to link to a google/url? link. They are redirects; you can link to the real link. Your argument amounts to saying that there are also good reasons to link to bit.ly or any other redirect site - there are NONE.
Regarding other solutions, I considered:
  • The AbuseFilter - which clearly should be cross-wiki one, since this is a cross-wiki issue
    • Flagging only - as if a spammer would care, they just save (but well, at least people may notice)
    • Blocking - which is just the same as the blacklist.
  • XLinkBot - currently only activated for en.wikipedia.
But as I said elsewhere and here again - this simply should never be linked, there is never a reason. And what other solutions did you have in mind? --Dirk Beetstra T C (en: U, T) 09:11, 27 October 2011 (UTC)Reply
  • (EC) Concur with blacklist; my only suggestion, if it's a real problem for users, is to lift the block for a short time to give time for bots to be readied for all projects. I'm not sure, but it sounds like some people may be confused. For clarity: Google is not blacklisted. You can still link to google.com itself or google search results like [1]. What is blacklisted is www.google.com/url? . The reason is that this functions as a redirect. I can't see any reason why they should ever be on wikipedia (they are simple redirects; they don't allow you to view the cache or something if the page is down); they mostly happen by accident when people copy the links of Google search results. They add another point of failure (Google), may lead to confusion (people thinking the site they're going to is Google and so trustworthy, see for example the previously mentioned search results), and also mean people are forced to go through Google to visit the external link (allowing Google to collect their data). However, as made clear here, the primary reason they were blocked is because they can be abused, as anyone can use them to link to spam sites overriding the blacklist. Nil Einne 07:11, 26 October 2011 (UTC)Reply

Unarchived again. Still needs to be solved. --Dirk Beetstra T C (en: U, T) 09:52, 1 November 2011 (UTC)Reply

I am going to change the rule to 'google\.[^?#]*\/url\?'. --Dirk Beetstra T C (en: U, T) 11:12, 1 November 2011 (UTC)Reply

Needed to use '\bgoogle\..*?\/url\?' - '\bgoogle\.[^?#]*\/url\?' was not accepted by the blacklist. Testing if other Google links still work: http://www.google.com/search?hl=en&q=Google+Arbitrary+URL+Redirect+Vulnerability. --Dirk Beetstra T C (en: U, T) 11:18, 1 November 2011 (UTC)Reply

Try '\bgoogle\.[^?\x23]*\/url\?', it's choking on trying to interpret the literal "#" character as the start of a comment. But escaped it works fine on my local test installation of MediaWiki. Note that '\bgoogle\..*?\/url\?' will block a URL like http://www.google.com/search?q=Google+/url?+Redirect, as unlikely as that is to occur. Anomie 14:25, 1 November 2011 (UTC)Reply
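The difference between the two candidate rules can be checked directly. A Python sketch (test URLs are illustrative; note the `\x23` escape only matters inside the blacklist file itself, where a bare `#` starts a comment - in Python a plain `#` in a character class is fine):

```python
import re

lazy = re.compile(r'\bgoogle\..*?/url\?')         # rule as deployed
anchored = re.compile(r'\bgoogle\.[^?#]*/url\?')  # Anomie's stricter variant

search_url = 'http://www.google.com/search?q=Google+/url?+Redirect'
redirect_url = 'http://www.google.com/url?sa=t&url=http%3A%2F%2Fexample.org'

print(bool(lazy.search(search_url)))        # True  - the false positive
print(bool(anchored.search(search_url)))    # False - query strings excluded
print(bool(anchored.search(redirect_url)))  # True  - real redirects caught
```

The `[^?#]*` class stops at the first `?` or `#`, so `/url?` must appear in the path proper, not inside a query string.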
Hi!
what about \bgoogle\.[a-z]{2,4}/url\?? -- seth 16:01, 1 November 2011 (UTC)Reply
That wouldn't catch domains like google.com.au, or paths like http://www.google.com/m/url?.... Anomie 17:05, 1 November 2011 (UTC)Reply
hmm, ok. So which urls have to be blocked exactly? What is this google.com/m/-thing? If these were the only exceptions \bgoogle(?:\.com)?\.[a-z]{2,4}(?:/m)?/url\? would do.
The Abuse Filter could be a helping compromise, but it still can't be used globally, am I right? Did anybody open a ticket at bugzilla already? -- seth 20:17, 1 November 2011 (UTC)Reply
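As a quick sanity check of the pattern proposed just above (Python sketch; the test URLs are illustrative, not an exhaustive list of Google's URL shapes):

```python
import re

# seth's proposal, with the two exceptions Anomie raised folded in
pattern = re.compile(r'\bgoogle(?:\.com)?\.[a-z]{2,4}(?:/m)?/url\?')

assert pattern.search('http://www.google.com/url?sa=t')      # plain form
assert pattern.search('http://www.google.com.au/url?sa=t')   # google.com.au
assert pattern.search('http://www.google.com/m/url?sa=t')    # the /m/ path
# but other TLD and path shapes would still slip through, e.g.:
assert not pattern.search('http://www.google.co.uk/url?sa=t')
print('proposed pattern behaves as described')
```

Enumerating exceptions like this is fragile, which is one argument for the broader rule that was eventually used.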
Basically, what needs to be caught are all google urls (all TLDs) where the path ends in /url? - the normal form would hence be 'google.com/url?', but also 'google.com.au/url?' and 'google.at/url?' - and long forms are e.g. 'google.<tld>/archivesearch/url?'. For a full list of links that have been added (but it does not necessarily have to be exhaustive, there may be even more possible) see the post of Anomie in en:Wikipedia_talk:EL#Google_redirection_URLs.
A global filter may be an idea as an alternative, but if it is set to blocking it will have the same effect anyway (though could be more specific since the message could be made informative for specific redirects and how to avoid them) - if set to notify it is probably futile when people start to abuse it (except that we would then notice). There simply is no need to have it, just follow the link (which I hope one needs to do anyway since I hope that people read the document they want to link to), and copy it then from the address bar of your browser. --Dirk Beetstra T C (en: U, T) 08:56, 2 November 2011 (UTC)Reply
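For completeness, the rule that ended up in the list ('\bgoogle\..*?\/url\?', as added above) does catch all of the listed forms, at the cost of the rare search-URL false positive noted earlier. A short Python sketch (test URLs are illustrative):

```python
import re

rule = re.compile(r'\bgoogle\..*?/url\?')  # the rule as added to the list

for url in [
    'http://www.google.com/url?sa=t',
    'http://www.google.com.au/url?sa=t',
    'http://www.google.at/url?sa=t',
    'http://www.google.com/archivesearch/url?sa=t',
    'http://www.google.com/m/url?sa=t',
]:
    assert rule.search(url), url
print('all listed forms caught')
```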
Hi!
I see a big advantage in blocking urls with adapted messages, so that users can modify their link without being surprised by an alleged spamming accusation. However, there is still no global AF, is there?
I opened a ticket now: bugzilla:32159. -- seth 22:45, 2 November 2011 (UTC)Reply
(unarchived) -- seth 08:42, 5 November 2011 (UTC)Reply
The sbl extension searches for /https?:\/\/+[a-z0-9_\-.]*(\bexample\.com\b). That means our sbl entries always start with a domain part of a (full) url. That's ok because those google-links also include full urls. The problem is that those urls are encoded (see w:en:Percent-encoding) and the sbl extension does no decoding. So ...?url=http%3A%2F%2Fwww.example.com is not resolved as ...?url=http://www.example.com. Solutions could be
1. start the regexp pattern not with /https?:\/\/+[a-z0-9_\-.]*/ but with /https?(?i::|%3a)(?i:\/|%2f){2,}[a-z0-9_\-.]*/ or
2. decode urls before using the regexp matching. -- seth 11:35, 5 November 2011 (UTC)Reply
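The hole described above can be reproduced against the quoted pattern shape. A Python sketch of solution 2 (decode first), assuming the /https?:\/\/+[a-z0-9_\-.]*/ prefix quoted above and example.com as the blacklisted domain:

```python
import re
from urllib.parse import unquote

# The shape of an sbl rule for example.com, per the prefix quoted above
blacklist = re.compile(r'https?://+[a-z0-9_\-.]*(\bexample\.com\b)', re.I)

wikitext = 'http://www.google.com/url?url=http%3A%2F%2Fwww.example.com%2Fx'

print(bool(blacklist.search(wikitext)))           # False - encoded, slips by
print(bool(blacklist.search(unquote(wikitext))))  # True  - decoded, caught
```

Decoding before matching keeps existing rules unchanged, whereas solution 1 would require rewriting the prefix of every entry.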
don't archive this. -- seth 21:09, 7 November 2011 (UTC)Reply
Sorry for the problems with the archive bot. Now it should be resolved, please just remove the first template of this section when you will want this request to be archived. Regards, -- Quentinv57 (talk) 18:00, 10 November 2011 (UTC)Reply
thx! :-) -- seth 21:26, 10 November 2011 (UTC)Reply

Note that even when the blacklist would catch the links which redirect to blacklisted domains, this domain should still be blacklisted, as it is still inappropriate and can be used to avoid detection by our bots. Also, it unnecessarily involves Google in your linking, and not everyone may be interested in having their data analysed by Google. --Dirk Beetstra T C (en: U, T) 08:20, 11 November 2011 (UTC)Reply

  • If you say that these links can be restated to avoid blocking, you should EXPLAIN HOW THIS IS DONE, in VERY SIMPLE LANGUAGE in a box at the top here. Most users are not techies. I have no idea how to do it. Otherwise the block should be removed. Johnbod 15:30, 11 November 2011 (UTC)Reply
I wrote a small stupid tool tools:~seth/google_url_converter.cgi which can be used to recover the original urls from the google redirects. -- seth 15:45, 13 November 2011 (UTC)Reply
Johnbod - As goes for practically all redirect sites - follow the link, and copy/paste the url from the address bar of your browser. Don't copy/paste the url that Google is giving you.
To explain it further - the Google search gives you a set of google-redirects which point to the correct websites. You then click one of the redirects from Google, so Google knows that that is the result that is most interesting to you. Next time you search something similar, it will think that that is the result of interest to you, so it will get a higher ranking - worse, it may also show up higher in rankings on searches by other people, since you thought it was more interesting. Now, as such, that is not a big issue - but if you use that google-redirect on Wikipedia, the Google rankings of that page get improved through Wikipedia. That is a loophole waiting to be abused. It is the very, very essence of Search Engine Optimisation. It is even more efficient than having your website itself on Wikipedia. --Dirk Beetstra T C (en: U, T) 10:49, 15 November 2011 (UTC)Reply
I agree with Beetstra. But it's not always that easy to get the original url, if you want to link an excel-file for example (see w:de:WP:SBL). That's why I created the small tool. -- seth 22:24, 17 November 2011 (UTC)Reply

Also, if you want to avoid this problem and you use Firefox, you can install this extension. MER-C 09:52, 21 November 2011 (UTC)Reply

If I recall correctly, this kind of loophole can be detected by looking for "usg=" in the url, instead of "url=". es:Magister Mathematicae 15:29, 18 December 2011 (UTC)Reply

I see the point of blacklisting these addresses, but there is some kind of technical problem in this case. Normally you are allowed to keep a url that already exists on a page, but not in this case. I would like to edit the pages sv:Bengt Nordenskiöld and sv:Who Says, but I cannot without removing the already present link. Why? -- Lavallen (talk) 11:25, 4 March 2012 (UTC)Reply

That is not correct. Blacklisting prevents the page being saved; it is a yes/no test at the time of saving. Maybe you are confusing it with AbuseFilter behaviour. billinghurst sDrewth 11:29, 4 March 2012 (UTC)Reply

GreenWorld (BVI) WP:SOCK Spamming

Google Analytics ID: UA-23084407 - (Track - Report - reverseinternet.com)







See WikiProject Spam Item

Cross wiki spamming

. Thanks, --Hu12 (talk) 03:09, 4 May 2012 (UTC)Reply

  Added. --EdBever (talk) 06:21, 4 May 2012 (UTC)Reply

j.gs



Alias of URL shortener adf.ly, see e.g. [2]. MER-C (talk) 11:40, 4 May 2012 (UTC)Reply

  Added. --Pmlineditor (t · c · l) 11:45, 4 May 2012 (UTC)Reply

hat spam















  Added billinghurst sDrewth 14:25, 5 May 2012 (UTC)Reply

Vacationa.info



Persistent crosswiki spammer that keeps changing domains as we blacklist the old ones. See SRG#Global lock for Bravaograda. Jafeluv (talk) 07:24, 7 May 2012 (UTC)Reply

  Added Snowolf How can I help? 07:27, 7 May 2012 (UTC)Reply
  This section is for domains which have been added to multiple wikis as observed by a bot.

These are automated reports, please check the records and the link thoroughly, it may report good links! For some more info, see Spam blacklist/Help#COIBot_reports. Reports will automatically be archived by the bot when they get stale (less than 5 links reported, which have not been edited in the last 7 days, and where the last editor is COIBot).

Sysops
  • If the report contains links to less than 5 wikis, then only add it when it is really spam
  • Otherwise just revert the link-additions, and close the report; closed reports will be reopened when spamming continues
  • To close a report, change the LinkStatus template to closed ({{LinkStatus|closed}})
  • Please place any notes in the discussion section below the HTML comment

COIBot

The LinkWatchers report domains meeting the following criteria:

  • When a user mainly adds this link, and the link has not been used too much, and this user adds the link to more than 2 wikis
  • When a user mainly adds links on one server, and links on the server have not been used too much, and this user adds the links to more than 2 wikis
  • If ALL links are added by IPs, and the link is added to more than 1 wiki
  • If a small range of IPs have a preference for this link (but it may also have been added by other users), and the link is added to more than 1 wiki.

COIBot's currently open XWiki reports

List | Last update | By | Site IP | R | Last user | Last link addition | User | Link | User - Link | User - Link - Wikis | Link - Wikis
conferencelists.org 2024-10-04 02:30:18 COIBot 191.96.144.139 Riados2024 2024-10-03 23:36:58 8 8 0 0 4
cookingonabootstrap.com 2024-10-04 02:28:08 COIBot 192.0.78.179 R Adimora chidinma 2024-10-02 19:50:12 2546 157 0 0 8
der-rasende-reporter.info 2024-10-03 07:55:47 COIBot 81.19.159.67 178.115.77.98
178.115.78.174
178.115.79.208
178.165.197.12
178.165.203.91
178.165.206.250
77.119.220.231
91.141.45.11
2024-09-24 13:18:49 12 2
electronic-vignette.cz 2024-10-03 20:21:33 COIBot 104.21.42.204 Master210s
176.114.240.2
37.188.135.119
37.188.168.59
37.188.181.97
37.188.184.79
87.228.131.149
2024-10-03 15:33:46 19 9
fatimaabbadi.blogspot.ca 2024-10-03 13:19:36 COIBot 172.253.115.132 R Gulnozaxon 27 2024-10-03 10:04:19 166 10 0 0 6
gamesonlinefree34.blogspot.ca 2024-10-04 03:11:07 COIBot 172.253.115.132 R FabianoX7 2024-10-04 03:09:42 1290 3 0 0 2
gcontact.gcc.go.th 2024-10-03 04:44:38 COIBot 203.113.14.213 2405:9800:BA20:A5CF:C10A:3302:DA99:7D2F
223.24.185.51
49.237.10.51
2024-10-03 02:35:02 6 3
glaucoma-association.com 2024-10-03 09:21:41 COIBot 217.160.0.149 R Karrie look
NarminSafarova94
2024-10-02 13:54:23 185 12
kmp.im 2024-10-03 23:41:38 COIBot 18.136.150.244 R Baqotun0023
125.164.17.41
2024-10-03 16:26:26 12 2
menualbaik.com 2024-10-04 01:38:05 COIBot 172.67.185.165 182.184.242.68
182.184.243.164
182.184.244.77
2024-10-02 10:16:01 5 2
muonline.ai 2024-10-04 01:43:07 COIBot 172.67.186.18 2A02:A471:7131:1:989F:7B91:B4B5:3354
2A02:A471:7131:1:B1FF:2447:D9A7:6CCB
2A02:A471:7131:1:B5E2:EFFD:D864:8066
2A02:A471:7131:1:C5DA:FD01:A1A4:84EF
2024-10-03 08:52:42 23 10
mycanal.fr 2024-10-04 03:06:43 COIBot 81.92.95.55 R AGENT levrai
Antaj7co
BB 22385
Bertrand Labévue
Efilguht
Jeylopez987
Kan143
Pariswikiae
Pols12
S0undwave
Weshsalut
81.14.39.19
1970-01-01 05:00:00 411 0
nm-tekstovi.blogspot.rs 2024-10-03 16:16:36 COIBot 172.253.115.132 R Galicnik62
Vallromana16
2024-10-03 14:02:59 3 3
peoplekeys.com 2024-10-02 19:08:31 COIBot 172.105.103.204 24.154.8.253
72.23.112.2
2024-09-30 16:09:33 11 2
perfil.portaldaindustria.com.br 2024-10-03 11:43:59 COIBot 20.81.66.56 R 2804:14D:5C8F:832B:1D6B:5275:F40F:ED72
DAR7
Joaoroman23
Portuportu2
95.74.61.172
2024-05-18 19:48:57 17 4
planetradio.co.uk 2024-10-04 01:39:11 COIBot 54.154.73.247 R AGTabares
Andrew Gustavo
Octaviyanti Dwi Wahyurini
Sp1dey
UndergroundMan3000
1970-01-01 05:00:00 7569 0
poestate.ch 2024-10-03 08:48:18 COIBot 195.231.1.159 2A02:121E:4B87:0:9030:B7A0:7203:25C9
146.4.56.73
2.226.106.230
212.41.210.155
88.58.117.192
93.42.160.102
2024-10-01 09:19:34 8 3
prtraveller.blogspot.in 2024-10-04 02:33:33 COIBot 172.253.115.132 R Thiyagu Ganesh 2024-10-02 15:08:14 2828 15 0 0 3
tales.org.ua 2024-10-03 14:52:05 COIBot 185.68.16.90 Liubomyr Ch 2024-10-03 13:29:11 108 94 0 0 6
theafricanroyalfamilies.com 2024-10-04 03:16:44 COIBot 192.0.78.184 R Ilya Discuss
Theafricanroyalfamilies
2024-09-30 12:57:31 76 9
thehillsrehabchiangmai.com 2024-10-03 22:10:58 COIBot 15.235.165.95 110.164.241.236
49.228.238.8
2024-10-02 10:18:58 7 2
thespiceodyssey.com 2024-10-03 22:51:35 COIBot 198.185.159.145 2400:2412:44C1:7D00:5095:2535:5AFE:C89A
126.78.37.227
199.189.65.22
2024-10-02 16:25:23 7 2
  This section is for proposing that a website be unlisted; please add new entries at the bottom of the section.

Remember to provide the specific domain blacklisted, links to the articles they are used in or useful to, and arguments in favour of unlisting. Completed requests will be marked as {{removed}} or {{declined}} and archived.

See also /recurring requests for repeatedly proposed (and refused) removals.

The addition or removal of a domain from the blacklist is not a vote; please do not bold the first words in statements.

bet-at-home.com



Was added to the blacklist in 2007 because of this edit; today the company has articles on cs, de, en, hu and pt. I think the blacklisting could be removed... Greets --AleXXw 11:37, 21 November 2011 (UTC)Reply

Do note that all the articles were created by single-purpose accounts. Seen the way that that is done on many wikis, I would consider their goal still to 'promote their company'. --Dirk Beetstra T C (en: U, T) 11:48, 21 November 2011 (UTC)Reply
I noticed that, but at least on de.wp the entry is relevant (there was a deletion request in 2007, decided as keep) and it is edited by some other users... I think it's not useful to have an article for an internet company and not be able to link to their homepage ;) greets --AleXXw 12:03, 21 November 2011 (UTC)Reply
To that I agree, but that does not necessarily mean de-listing (there is always the whitelist to list something suitable). For en.wikipedia, I found the article pretty much primary sourced (and the secondary sources were more for statements like 'they sponsored this event'). I found the current entries on other Wikis similar (I'll have a read through the German article as well). --Dirk Beetstra T C (en: U, T) 12:25, 21 November 2011 (UTC)Reply
Note: the current version on en.wikipedia seems a straight translation of the current German version (which was rewritten not too long ago). Both versions have as a first secondary source a reference for 'they sponsored this' - overall that seems quite thin for notability. --Dirk Beetstra T C (en: U, T) 13:28, 21 November 2011 (UTC)Reply
I know it was written recently; I was "Mentor" (sth like "adopt a user") of the writer. I agree with your point, but I don't think notability should be discussed here. And I still not see why one added Link into a nearly matching article can create an alltime-blacklist-entry, but this shall not be my problem ;) greets --AleXXw 22:35, 21 November 2011 (UTC)Reply
"And I still not see why one added Link into a nearly matching article can create an alltime-blacklist-entry" .. You did not notice the large set of sockpuppets who have a similar modus operandi now? And that one edit was just an example of more, that link, and a set of others, was clearly spammed in the past. I am sorry, I see editors out of that sockfarm (with a large COI appearance) create articles of questionable notability on several wikis, and then we are asked to de-list to facilitate that?
And please note, I did not decline. --Dirk Beetstra T C (en: U, T) 12:47, 22 November 2011 (UTC)Reply
No, I had not noticed it until now; I just wanted to add the webpage of a web company to its article... It is notable, at least on de.wp :) What is COI? Sorry for my bad english... Greets --AleXXw 16:37, 23 November 2011 (UTC)Reply
w:Wikipedia:Conflict of Interest, I would think that there is a similar article at a WP site in a language that is familiar to you if you follow the interwiki links from that page.

That a local language article does not have the url of its site may be considered unfortunate, however, the language wiki can manage that through the whitelist to circumvent a global ban. billinghurst sDrewth 21:11, 23 November 2011 (UTC)Reply

Thx, I just didn't know the abbreviation. I'll try a whitelistentry on de.wp. Greets --AleXXw 22:57, 23 November 2011 (UTC)Reply
I saw over the past weeks several additions of links that redirect to bet-at-home.com. I have a feeling this company is actively spamming wikipedia with articles. I do feel this company lacks notability, but this is not the place for that discussion. I suggest we ask the wikipedia community if they see notability. We can then delist if this company is notable. EdBever 14:03, 26 November 2011 (UTC)Reply
Hi!
I whitelisted the domain at w:de temporarily, so that it could be linked in the article about itself. I removed the whitelisting afterwards, so that the meta-block is active again to prevent spamming. -- seth 12:20, 4 December 2011 (UTC)Reply
  Declined at this time as there has been no further support for removal of blacklist billinghurst sDrewth 15:54, 18 December 2011 (UTC)Reply
Nonetheless I guess temp unblocking could be useful to let authors use those links in articles about the domain, e.g. w:de:bet-at-home.com. -- seth 19:29, 18 December 2011 (UTC)Reply

I have written the articles on de and en. For pt, cs and hu I worked together with mother-tongue speakers. This was the reason why we opened a new account in each particular language Wikipedia, not to "promote the company". My adopter told me that the spamming was in 2006-2007 and maybe by a person from ex-Yugoslavia. I don't know who this person is, but I am writing the articles from Austria. Because my aim was to write an article which complies with all Wikipedia guidelines, I asked, in every language where an adopter program exists, an adopter to help us. Therefore I can guarantee that I am not willing to spam with the article. It makes no sense for me, because I only would like to have an up-to-date article for bet-at-home.com. Because the company is international, I would like to translate the same article from the German Wikipedia into other languages. The languages correspond to the markets the company is working in. Therefore I would be pleased if the link www.bet-at-home.com could be deleted from the global blacklist, so that it would be possible for us to have the url of the site in the articles. --Bah2011 06:18, 19 December 2011 (UTC)Reply

There is nothing currently prohibiting the writing of the articles, just the insertion of the url. billinghurst sDrewth 07:14, 19 December 2011 (UTC)Reply
Yes I know that I cannot use the url in the articles. And this is my problem. Is there some possibility to change this situation? What has to be done to delete the url from the blacklist?--Bah2011 07:43, 19 December 2011 (UTC)Reply
Hi!
user Bah2011 contacted me via e-mail a few days ago. And I'm quite sure, that this user is not going to spam.
Of course Bah2011 could go to every local sbl and ask for whitelisting (like at w:de), such that links to bet-at-home.com could be added to articles about bet-at-home.com. But that would be unnecessarily complicated. So a temporary global unblocking is the least thing we could and should do. -- seth 22:45, 19 December 2011 (UTC)Reply
I have concerns about an interest that seems somewhere between vested and conflicted, even indicated by the username. While the contributor may not spam, the blacklisting offers a level of control for individual wikis to watch and manage a previously problematic url, especially as I don't feel that there should be a perception of an imprimatur given: the notability discussion which is being relied upon (mentioned above) at enWP was a "no consensus" decision, not a definite decision for notability. Being involved in the discussion, I am not making any decision. billinghurst sDrewth 15:12, 20 December 2011 (UTC)Reply
Hi!
I agree with EdBever who said "We can then delist if this comapny is notable."
It's not us who have to decide what is notable and what is not. As we can see, all articles about bet-at-home.com (at cs, de, en, hu and pt) still exist. That means that bet-at-home.com is notable enough.
Now it's our (admins) duty to make it technically possible for the users to place links to the website the wiki articles are about. So at least the temp unblacklisting must be done.
The only thing we have to discuss about is whether it could be reasonable to even permanently remove the entry from the blacklist.
The domain is blacklisted for a couple of years now, so imho we could give it a try. -- seth 21:52, 20 December 2011 (UTC)Reply
unblocked bet-at-home.com (at least temporarily). After 7 days (or if Bah2011 tells us here that all needed links are placed, whichever comes first) we can decide here whether blacklisting is still necessary. -- seth 18:45, 28 December 2011 (UTC)Reply
Hi! All links are placed now. As mentioned before, the spamming was in 2006-2007 and maybe by a person from ex-Yugoslavia. The aim of these articles is not to spam Wikipedia! Therefore I would be grateful if you could remove bet-at-home.com from the blacklist. Thanks!--Bah2011 08:21, 30 December 2011 (UTC)Reply
The temp unblocking seemed to be a success. Now the remaining question is: what reasons are there to re-activate the blacklisting? -- seth 20:50, 3 January 2012 (UTC)Reply

I firmly disagree with how this is now progressing. For now there is maybe no reason to re-list it, but I do think that there is a promotional thought behind all of this - the (single purpose sock) accounts all too clearly have a conflict of interest; their interest is not solely to improve Wikipedia, they mainly focus on this site and its appearance on Wikipedia. Do note that I think that de-blacklisting - linking - re-blacklisting as a method is asking for problems. A specific link should be found that points to a homepage (e.g. an index.html), and for each wiki a whitelist rule should be added that enables solely that link (which still should only be on the page where it is intended), and then that link should be used on the pages (and that is what I did suggest above). Every time now that one of these pages on one of these wikis gets significantly vandalised (in a way that breaks the link) it would be impossible to revert (OK, here we maybe do not re-blacklist). This also is a way around local discussions on all wikis about whether a link and/or article is really needed on that wiki. Moreover, I think there was not a clear consensus for removal, and now a temporary removal is turned into a permanent removal. I am afraid that this is setting a bad precedent; next time it will be an SEO asking for de-listing so that they can spam the company, and when we decline they can point to this discussion. Please, get the whitelisting in place on all wikis, that is why we have whitelists, or get a proper consensus for de-listing (something that I would not necessarily be against, though I do have concerns, but do get proper consensus for de-listing). --Dirk Beetstra T C (en: U, T) 20:34, 8 January 2012 (UTC)Reply

I re-read the discussions above, and I see that sDrewth and EdBever have similar concerns as I have, while AleXXw and Lustiger seth seem to have an opposite view (which IMHO is a great reason to whitelist it locally, not to de-blacklist). Seeing also that the editor used a redirect (since the official place trips the blacklist) and has a conflict of interest makes me come to the conclusion that this needs a better discussion for de-blacklisting. I have hence undone the removal that Lustiger seth carried out a couple of days ago. --Dirk Beetstra T C (en: U, T) 20:44, 8 January 2012 (UTC)Reply

I can only say again that I have worked together with mother-tongue speakers. This was the reason why we opened new accounts in the particular language Wikipedias, not to "promote the company". The aim was to update the old article and to translate it into other languages, because the company is international. When I updated the article I noticed that the website is on the blacklist, and therefore I had problems when I prepared the article. This was the reason why I asked for de-listing. --Bah2011 06:41, 9 January 2012 (UTC)Reply
This is the perspective that I am seeing. We have an editor who is taking interest in a single company, across multiple languages, with no evident previous background, nor edit history anywhere; has a name that aligns with the product in which they are writing. The articles don't exist cross-wiki apart from where this editor has started, despite them having a reputed notability. The editor ignores or dismisses commentary about the surrounding aspects of their specific interest, and does not state their reason for focusing on the subject. The focus of the discussion is solely on writing the article and their working with those who have the language skills.

Call me a cynic, but I don't buy it. Part of the role at meta is to be on the lookout for people linking one url cross-wiki and exhibiting a conflict of interest. If it were a humanitarian organisation, I could see why someone could have the passion to do that; for a business in this business sector, I don't buy it. There are not multiple people/communities writing the articles nor expressing interest in the article. The statement was that the domain url has been spammed, and that is usually a pay-for-fee process, not a whimsical matter, and if the organisation was on the blacklist at that time, those are the consequences of that action. I believe that I see self-interest, not the interest of the projects. In my opinion, get a whitelisting at the wikis if you can, and ensure that you link to this discussion when you make the request, as I doubt that when the matter was previously raised you clearly expressed that you were single-article focused crosswiki. If I was investigating motive, I would suspect a paid professional writer, or a sock. That sounds like an opinion, and that clearly rules me out of assessing the balance of the argument. billinghurst sDrewth 10:50, 9 January 2012 (UTC)Reply

I agree fully with billinghurst so   Declined. No valid reason to remove and local whitelisting is available if the community require it. --Herby talk thyme 11:08, 9 January 2012 (UTC)Reply

@Bah2011. On en.wikipedia I have expressed concerns as to the notability of the subject (I nominated it for deletion), and having seen the article, I believe that it still lacks sufficient references to establish notability (most of the independent references state something like 'it was sponsored by bet-at-home.com' .. that is about as much as there is. So, start a company, sponsor something, people will write that you sponsored it, and you are notable? No, it does not work that way IMHO). Moreover, the domain originally got blacklisted because of promotion, and now these pages are created/edited, IMHO, still because of promotion. I do not buy anything else. If you get linked and found on the internet, it is because of good SEO, not because of proven notability (where are the reviews that compare bet-at-home.com with other online betting companies, etc. etc. - are they there? do they exist?). I am sorry, Bah2011, IMHO you are only here to promote bet-at-home.com. That was the case when it was originally blacklisted, and that is still the case. --Dirk Beetstra T C (en: U, T) 13:48, 9 January 2012 (UTC)Reply
I agree on the point that Bah2011 probably has self-interest. But I also see that this user's aim is to write articles that fully comply with our rules. And as we can see, this user doesn't do a bad job. At the RfD at w:en there was no consensus for deletion. Bah2011 wrote the article in five Wikipedias, and not a single one of those articles was deleted. So the subject is notable. (Or am I wrong?)
There had been some (not really much) spamming of this domain back in 2007. That's more than 4 years ago. How long shall a link be blacklisted? 100 years? Even if the article about the url exists?
One suggestion to user Bah2011 was get a whitelist at the wikis if you can. I already put the domain on the whitelist at w:de, temporarily, so that the link could be placed in the article. Of course that user can do that in every single Wikipedia where an article shall be created. But it's senseless to have a URL blacklisted globally and multi-whitelisted locally. AFAIR we unblocked a URL if it got whitelisted in two big Wikipedias. -- seth 17:06, 14 January 2012 (UTC)Reply
Seth, yes, there was a suggestion to whitelist, which IMHO should be a start - and that was done. That it happened on 2 wikis does already suggest that the link may be ripe for de-listing. And initially I did not decline; actually, I did not decline anywhere. Others were also not very positive, and some declined delisting - at that time there was certainly no consensus in favour of delisting.
Noting the whitelisting, I see you said that you whitelisted it on de.wikipedia, added the link, and then de-whitelisted it again. The common practice on en.wikipedia is to whitelist an index.htm, index.html, or even an about.htm specifically for use as the 'official homepage' - although that does not prohibit further spamming of the homepage on that wiki, it does prohibit the use of other pages on the same site (pages that IIRC were used in the original spamming). Someone who seriously vandalises the page will still make the original unsaveable, and an admin may have to go through the same process again. That is not the function of the whitelist.
And I agree, in 4 years a lot can change; companies can change into serious, notable companies. Serious requests are indeed often granted, but those arguments were not given at any stage in the delisting request. Do note that several editors here do think that the notability is thin, very thin (but notable nonetheless).
What I disagreed with, and why I did re-list, is that you went ahead with a temporary delisting, and then after a couple of days unilaterally decided that it was going to be kept off the list. I still think that that is setting a bad precedent, and it goes against the lack of consensus for delisting. Several editors have voiced their concerns, which means that we need to get to consensus before a permanent delisting should be performed. To enable that discussion, I have re-listed it while awaiting that.
Regarding delisting, seeing that the original spamming was 4 years ago and that the company does seem notable enough for articles, I will again not decline de-listing, but I would like to see additional arguments. I do still have concerns that this is clever SEO for a not-too-notable company. --Dirk Beetstra T C (en: U, T) 19:07, 14 January 2012 (UTC)Reply
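As an aside, the index-page whitelisting practice described a few comments above can be sketched as a pair of regex checks. This is an illustrative Python approximation of the extension's behaviour (the real lists are PCRE, and example.com stands in for the actual domain; neither pattern below is a real entry):

```python
import re

# Hypothetical local whitelist entry covering only the homepage file,
# in the style described above (example.com is a stand-in domain):
WHITELIST = [re.compile(r"\bexample\.com/index\.html?\b", re.IGNORECASE)]
# Hypothetical global blacklist entry covering the whole domain:
BLACKLIST = [re.compile(r"\bexample\.com\b", re.IGNORECASE)]

def is_blocked(url):
    """A link is blocked if a blacklist rule matches and no whitelist rule does."""
    if any(rule.search(url) for rule in WHITELIST):
        return False
    return any(rule.search(url) for rule in BLACKLIST)

# The whitelisted official homepage goes through...
assert not is_blocked("http://www.example.com/index.html")
# ...while deeper pages on the same site stay blocked:
assert is_blocked("http://www.example.com/promo/special-offer.html")
```

This is why whitelisting only the index page is narrower than de-blacklisting: every other path on the domain remains unusable.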

  Comment At English Wikipedia, the article-for-deletion process closed as no consensus, which should be considered differently from a keep and from having achieved notability. billinghurst sDrewth 23:26, 14 January 2012 (UTC)Reply

www.shanghairanking.com



This is the source corresponding to the values found in http://en.wikipedia.org/wiki/Template:Infobox_US_university_ranking. The source seems to be allowed in many, if not most, US university articles on en.wikipedia.org, but is apparently blocked in a few, including http://en.wikipedia.org/wiki/Carnegie_Mellon_University. --81.100.44.233 18:47, 15 January 2012 (UTC)Reply

It does look to be a somewhat problematic link, and enWP's use of tools to manage some of the linking is further indicative of its misuse. On the other hand, 263 links on 21 projects would indicate that it is acceptable, though the lack of an Xwiki report makes the analysis a little more difficult. It should probably be removed and watched, and it may reappear in the blacklist if it is again being abused. billinghurst sDrewth 00:08, 16 January 2012 (UTC)Reply
Just do it the old way, billinghurst. If you look at the editors mentioned in the LinkReport linked from the tracking template, you will see many IPs adding this to many wikis. That looks to me like en:WP:REFSPAM (I see occasions where there are two references for a statement, and then a third to 'shanghairanking.com' is added - shanghairanking was not used to write the statement, I will assume the other two were - but those are not the links under discussion in this thread at least). I will have a better look later. Thanks. --Dirk Beetstra T C (en: U, T) 03:31, 16 January 2012 (UTC)Reply
Why was this ever put on the blacklist? Academic Ranking of World Universities is probably the most influential world ranking. It is the one referred to regularly by The Economist, one of the most influential newsmagazines in the world. It is an impeccable source for university rankings, and as such will be linked from university templates and university articles regularly. I don't see how that qualifies as spam; it's not like people are trying to push the consultancy services, they are linking to university rankings. Alternatively, can we whitelist the site at en:wiki to override the Meta blacklisting? Franamax (talk) 04:31, 24 February 2012 (UTC)Reply
Looking at the files, it occurred in a December diff, though it is there as \bshanghairanking\b. There is no commentary around the issue. billinghurst sDrewth 04:53, 24 February 2012 (UTC)Reply
Yeah, good question. Why? I do see this, but that is just a little drop on the whole plate. Only if the editor went into excessive, uncontrollable socking might that be a bit of a reason to do this. I see it is also on XLinkBot's revertlist, which does suggest that someone had a vested interest in having this stuff linked. But I don't know, we will have to wait for Quentinv57 for an explanation.
Do note, it is not shanghairanking.com that is blacklisted, but 'shanghairanking'. Maybe something else was the basis for this?
I have removed the rule; it is likely too wide or mistaken. --Dirk Beetstra T C (en: U, T) 04:58, 24 February 2012 (UTC)Reply
Ah. More: Special:Contributions/Shanghairanking2011. Maybe the socks spamming the domain have triggered this. We'll need other methods to convince the socks. --Dirk Beetstra T C (en: U, T) 05:06, 24 February 2012 (UTC)Reply
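For readers unfamiliar with the list syntax: the entry quoted above, \bshanghairanking\b, is a regular expression matched against the whole URL, so it is wider than blocking one domain. A quick sketch (Python's re behaves like the extension's PCRE for this pattern; example.org is a made-up host, not a site from the reports):

```python
import re

# The entry as quoted from the December diff above:
rule = re.compile(r"\bshanghairanking\b")

# It catches the domain itself...
assert rule.search("http://www.shanghairanking.com/ARWU2011.html")
# ...but also any unrelated URL that merely contains the word,
# e.g. a hypothetical mirror or discussion page elsewhere:
assert rule.search("http://example.org/shanghairanking/archive.html")
```

That breadth is presumably what "likely too wide" refers to above.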
  Remove just formalising the previous removal undertaken billinghurst sDrewth 03:04, 5 March 2012 (UTC)Reply

dsbworldwide.com & webitemspro.com





I am not sure how to approach having dsbworldwide.com and webitemspro.com removed from the wiki blacklist. I have reviewed the report -1.79 dsbworldwide.com related spam- At that time I was trying to figure out how to use Wikipedia when my competitor kept removing my links and I kept adding them back in. I do not remember receiving any messages until after I was blocked. I am not sure if it was due to not receiving email or if the notifications went into my spam folder. Since then I have not made any efforts to edit/create a page on wiki, as I didn't want to cause any further issues. I would like to create a page for my design firm, dsbworldwide.com, and for my software, webitemspro.com. If possible it would be nice to create a page for our local community portal, Texomaland.com. It allows clients to post a profile, news and events. The other three, Planoland.com, Mycraigranch.com and Mylocallink.com, do not receive enough traffic for me to worry about them. Also, the report connected us to CarInsurance.com. I don't believe I have had any clients with that domain name. Plus the domain webitems.com is not mine, as much as I would like it to be. Please let me know what the next steps would be. Tony.dean (talk) 21:45, 18 April 2012 (UTC)Reply

The initial request in 2007 clearly seems to indicate that there have been problems with your domains xwiki. If you think that you have a valid reason for having the domains added, then maybe try to convince enWP to whitelist your domain(s). You are listed here due to the accusation of xwiki spamming, and IMNSHO you have not presented a case sufficient to get removal from the global list. Though I would like to see how a local wiki responds to your request and, if they whitelist, how well the scenario flows from there. billinghurst sDrewth 08:27, 20 April 2012 (UTC)Reply

I am not sure how to approach an enWP. What exactly is an enWP? Otherwise my intent is strictly to have a page for my design firm and for our software. I am really not sure what case to offer in getting my domains whitelisted, other than that it was a mistake I made years ago while trying to understand Wikipedia and how to use it. The gentleman who blacklisted our domains commented that he expected to see more spam from us. Not the case. As soon as I found I was blacklisted - which really shocked me, as I tried to do the right thing - I didn't try to work with wiki again in any real way. Now I am just trying to promote my business and software, no different than other firms. I never had any intention of spamming Wikipedia. Tony.dean (talk) 20:05, 20 April 2012 (UTC)Reply

You can request whitelisting at en:MediaWiki talk:Spam-whitelist. EdBever (talk) 12:20, 21 April 2012 (UTC)Reply
I am concerned about the general approach being taken by the user, especially their basic premise. Wikipedia is not a web listing nor a site for promotion of websites; it is solely an encyclopaedia where there are notability requirements for addition. Having your url blacklisted is irrelevant to articles and is purely supplementary. Based on your basic premise, I wouldn't be inclined to recommend the removal for your purposes. billinghurst sDrewth

Thank you for the information. I will go to the whitelist page. My skills with Wikipedia are limited, as I have not had a chance to use the system, having been blacklisted so quickly. I was not looking at it as a platform to promote our portfolio with links, but would like the fundamental links to our business websites. I feel I am approaching Wikipedia no differently than my competitors would. E.g. The Richards Group in Dallas has a page discussing their firm. Microsoft Office has a page discussing their software application, as does WordPress, etc. Tony.dean (talk) 16:38, 23 April 2012 (UTC)Reply

kontaktlinsen



kontaktlinsen-guide.com cannot be linked in the German Wikipedia, because the word "kontaktlinsen" is on the meta blacklist. Request for release. Solvin (talk) 06:49, 20 April 2012 (UTC)Reply

I would recommend that you apply to deWP to get an exemption by having the domain name added to their local whitelist. The keyword is problematic for spam, and we would need to do some research on why it was added and when, and seek broad opinion about removing it. Quickest and probably most relevant is the clearance at deWP. billinghurst sDrewth 08:17, 20 April 2012 (UTC)Reply
I agree. I have already asked at w:de for some additional opinions from our medical editorial staff (kontaktlinsen = contact lenses). If they say that that website is ok, I'll whitelist the deeplink locally. I don't think there's reason for modifying the global sbl entry.
Btw. that pattern has been on the sbl since 2004-12. -- seth (talk) 01:06, 21 April 2012 (UTC)Reply
  Comment We could delist that specific word seeing it has been on the list for 8 years now. With COIbot I am sure most spam will be reported. On the other hand there is no real reason why many sites with that word should be used on wikipedia. EdBever (talk) 12:03, 21 April 2012 (UTC)Reply
Hi!
Indeed.
The English term "contactlenses" is not blocked; just the domain "contactlensesprice.com" is. So this could be a hint that there wouldn't be much spamming with the German term if it were unblocked.
I don't think unblocking is necessary. But I also think that blocking doesn't seem to be necessary either. -- seth (talk) 14:15, 21 April 2012 (UTC)Reply

glutenfreehotelsguide.com



I have no idea why this website is blocked. The website lists hotels all over the world that can serve gluten-free food, and it could be an external link on the Gluten Free Diet page. Thank you. — The preceding unsigned comment was added by 93.172.136.24 (talk)

Presumably because it was added xwiki, and the additions were considered outside of the guidelines for link addition. See report within the template. I wouldn't be inclined at this time to remove the block. — billinghurst sDrewth 13:39, 23 April 2012 (UTC)Reply

gangland.viviti.com





Is there anyone who will help me to remove gangland.viviti.com from the list? It is used as source number 6 in the article w:en:East_Nashville_Crips, and because of the filter I cannot even translate the article from the English! Thanks in advance --80.161.143.239 19:37, 26 April 2012 (UTC)Reply

A couple of things. (enWP hat) The link itself is a blog article and may not meet the citation standard of enWP, so you should consider whether the link may be better removed, though that is a local decision. (meta hat) The whole domain seems to have been renamed, which makes this block in this form redundant, and I suspect that we will be looking at how we manage jigsy.com and its use, misuse and abuse at a future point. — billinghurst sDrewth 23:08, 26 April 2012 (UTC)Reply
  Declined after no further discussion from applicant — billinghurst sDrewth 14:55, 5 May 2012 (UTC)Reply

mi-aime-a-ou.com



I would like to have the site www.mi-aime-a-ou.com removed from the banned list, because it gives excellent information about our island Réunion: the history of most cities/places, sights, nature, geography, wildlife, endemic species, etc. Actually I don't even understand why it was blacklisted - I have known that site for years for trustworthy information - it must have been a jealous competitor who put that site on the list. User:Tonton Bernardo 05/05/2012 user talk:Tonton Bernardo

It is an old, undocumented addition, which at this point means that presumably it was spammed across sites a while ago. Looking at the site shows a non-authoritative, compiled site with advertising. For the site to be linked from Wikipedia sites, it should provide authoritative, referenced material. — billinghurst sDrewth 15:09, 5 May 2012 (UTC)Reply
  This section is for comments related to problems with the blacklist (such as incorrect syntax or entries not being blocked), or problems saving a page because of a blacklisted link. This is not the section to request that an entry be unlisted (see Proposed removals above).

x.co



The current filter entry is too strict, as it even blocks URLs merely containing this string, which is a frequent one. For example, www.san-x.co.jp is blocked, which doesn't make any sense. --Mps 21:07, 22 January 2012 (UTC)Reply

  Done fixed as per seth's previous lookbehind regex. Thanks for taking the time to post here and to tell us about this matter. billinghurst sDrewth 00:39, 23 January 2012 (UTC)Reply
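To illustrate the problem and the fix: the \b in a rule like \bx\.co\b matches between the hyphen and the x of san-x, so the rule catches san-x.co.jp. A negative look-behind can instead require that the x not be part of a longer hostname. The exact pattern seth used is not quoted in this thread, so the look-behind below is a plausible reconstruction, not the actual entry (Python re shown; the extension uses PCRE, which behaves the same here):

```python
import re

# The over-broad rule: \b matches between "-" and "x", so san-x.co.jp is caught.
old_rule = re.compile(r"\bx\.co\b")
assert old_rule.search("http://www.san-x.co.jp/")

# A look-behind variant (illustrative; not necessarily the installed pattern):
# no match when the "x" is preceded by a word character or a hyphen.
new_rule = re.compile(r"(?<![\w-])x\.co\b")
assert not new_rule.search("http://www.san-x.co.jp/")
assert new_rule.search("http://x.co/abc123")
```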

cjb





For some reason every cjb.net website is blocked. Somebody wanted to add the site http://hateplow.cjb.net/ and failed. Maybe it's because of the \bcjb\.net\b entry, but I'm no expert. --Gripweed 09:25, 27 January 2012 (UTC)Reply

It is a url shortener/redirect. Look at http://do73i.cjb.net billinghurst sDrewth 09:38, 27 January 2012 (UTC)Reply
There are a couple of possible solutions:
  • You can just use another (not blacklisted) link to the same page: http://www.arcticmusicgroup.com/hateplow.
  • If hateplow.cjb.net is mentioned much more often than www.arcticmusicgroup.com/hateplow, it's possible to unblacklist this special domain
    • locally at w:de or
    • globally here at meta, by using a special syntax (zero-width negative look-behind assertions)
I suggest that using the www.arcticmusicgroup.com link would be the best solution. -- seth 16:25, 27 January 2012 (UTC)Reply
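For reference, the "special syntax" in the global option above is a zero-width negative look-behind assertion. A sketch of how such an entry could exempt one subdomain while keeping the shortener blocked (purely illustrative - no such entry was actually added, and the real list is PCRE; Python re behaves the same for these patterns):

```python
import re

# The current blanket entry blocks every *.cjb.net host:
blanket = re.compile(r"\bcjb\.net\b")
assert blanket.search("http://hateplow.cjb.net/")
assert blanket.search("http://do73i.cjb.net/")

# A negative look-behind exempting one named subdomain (hypothetical entry;
# note it requires a subdomain, so it would no longer catch bare cjb.net):
exempting = re.compile(r"(?<!hateplow)\.cjb\.net\b")
assert not exempting.search("http://hateplow.cjb.net/")
assert exempting.search("http://do73i.cjb.net/")
```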
Thanks... Didn't know the url-shortener-thing --Gripweed 20:28, 27 January 2012 (UTC)Reply
  Not done then. Trijnstel 14:06, 28 January 2012 (UTC)Reply

pump.pp4l.me



The site was added by Vituzzu (talk · contribs) but it still displays on en.wp. It Is Me Here t / c 13:24, 19 February 2012 (UTC)Reply

The blacklist will not remove a link already in place, though it should prevent the continued addition. Remove it from the article and all should be good. billinghurst sDrewth 13:38, 19 February 2012 (UTC)Reply
  Declined   Not done nothing to do billinghurst sDrewth 22:47, 25 February 2012 (UTC)Reply
  This section is for discussion of Spam blacklist issues among other users.

Thinking aloud really

I am well aware of policy regarding this page; however, I do wonder whether making a point of listing all (or almost all) spambot-placed links might not be such a bad idea? The worst that is likely to happen is a howl of protest from site owners, who presumably are directly or indirectly behind the placement of the links. I'm sure there will be other views but... I'll spam a few folks' pages in case they miss this :) --Herby talk thyme 15:31, 7 March 2012 (UTC)Reply

That's a practical suggestion; however, the current limitations of the spam blacklist make this less of a good idea. Legitimate links to the spammed sites could be blocked, and the extension gives no record of the actions it takes. Perhaps this could use the global AbuseFilter if/when it is enabled? Ajraddatz (Talk) 15:43, 7 March 2012 (UTC)Reply
AFAIK Google already takes care of our spam blacklist; publicizing this side effect of SEO on wiki would be a good idea, but I have two concerns:
  • Spam is a war: an SEO expert (actually a bit smarter than those who are attacking WMF's wikis) could use us to destroy the rank of a competitor
  • A big part of spamming consists of Google bombs: apparently meaningless texts which can influence Google search results.
So, to me, a wall of shame for spammers could be a good idea, but the main point is making MediaWiki less XRumer-friendly, e.g. enforcing captchas and implementing a global abusefilter or many filters on several wikis. --Vituzzu (talk) 21:37, 7 March 2012 (UTC)Reply
Fair points both and thanks for contributing. I guess we will go on the hard way for now :) --Herby talk thyme 16:33, 9 March 2012 (UTC)Reply
Hi!
Vituzzu said: Afaik google already takes care of our spamblacklist. Please give some evidence for this (iow: citation needed!). -- seth (talk) 15:40, 24 March 2012 (UTC)Reply
I can only find one source, and it seems old: "Wikipedia Spam Resulting in Google & Yahoo Penalties" Search Engine Journal πr2 (tc) 17:39, 14 April 2012 (UTC)Reply
Hi!
The "source" is a blog in which a Forbes article is cited. That article, written in 2007, can be reached via archive.org.[3]
But if you do further research, you will find out that the Forbes reporter just wrote crap. He cited Jonathan Hochman (a WP administrator) incorrectly.
The author of that blog is not confirming the rubbish that Forbes wrote; no, the author is challenging it. Unfortunately all the comments (and back in 2010 there were a lot of comments on that post) have been deleted. You can still reach them by using archive.org.[4]
One of the commenters was Jonathan Hochman himself, and he confirmed that the Forbes reporter made a mistake. He wrote:
I was there. :-) That Hochman guy actually said, “The blacklist is public, so search engines can read it. You don’t want to get on that list.”
This is one reason why it’s great to attend the conferences, because you can hear what’s actually said, rather than read about it second hand.
In fairness to the Forbes reporter, he tried to call me last night to confirm the quote, but I was putting kids to bed and didn’t get back to him in time.([5])
So there is still no source for that google-uses-sbl-hypothesis. -- seth (talk) 01:41, 21 April 2012 (UTC)Reply
Several spammers have requested delisting, claiming that their Google ranking went down considerably after getting on the sbl. There is no real evidence, though. EdBever (talk) 12:09, 21 April 2012 (UTC)Reply
If spammers tried to misuse Wikipedia for their spamming, they probably also did some other blackhat SEO. And many of those tricks can be detected automatically by Google. In those detected cases the rank can decrease rapidly, and all of that still doesn't need to have anything to do with our sbl. -- seth (talk) 14:22, 21 April 2012 (UTC)Reply