Wikipedia:Bot requests/Archive 82


Cleaning up WantedPages by putting <nowiki/> in red links on talk pages

WantedPages is pretty useless as it is since it considers links from and to talk pages. Does the requested action above help at all? JsfasdF252 (talk) 03:06, 5 January 2021 (UTC)

A better solution has been discussed and requested in Phabricator. Until then, WP:Most-wanted articles may be a more helpful alternative. Certes (talk) 10:58, 5 January 2021 (UTC)

Moving old WP:FFD pages

Some older WP:FFD pages are titled WP:Files for deletion instead of WP:Files for discussion. Should a bot be created to move them to the newer title just like how WP:Votes for deletion pages were moved to WP:Articles for deletion for the sake of consistency? P,TO 19104 (talk) (contribs) 15:54, 23 January 2021 (UTC)

Try seeing if WT:FFD is interested first. --Izno (talk) 17:55, 23 January 2021 (UTC)
@Izno: I just posted there - see Wikipedia talk:Files for discussion#Discussion at Wikipedia:Bot requests § Moving old WP:FFD pages (contrary to the title it also invites response there). P,TO 19104 (talk) (contribs) 23:23, 23 January 2021 (UTC)
Happy to code this if there’s consensus. ProcrastinatingReader (talk) 18:51, 24 January 2021 (UTC)
I don't expect a lot of input at WT:FFD, so I have started an RFC on this. P,TO 19104 (talk) (contribs) 13:13, 25 January 2021 (UTC)

StarWars.com

Anything with http://www.starwars.com should be changed to https. JediMasterMacaroni (Talk) 17:03, 25 February 2021 (UTC)

Please post to WP:URLREQ. – Jonesey95 (talk) 18:18, 25 February 2021 (UTC)

FANDOM

FANDOM to Fandom (website), please. JediMasterMacaroni (Talk) 00:57, 24 February 2021 (UTC)

There is no reason to change redirects. Primefac (talk) 01:06, 24 February 2021 (UTC)

Replace Template:IPC profile with Template:IPC athlete

There are some 800+ transclusions of Template:IPC profile. They go to an archive page, because the original link doesn't work, but for the first five I checked at random, the archive page doesn't work either: Scot Hollonbeck, Stephen Eaton, Jonas Jacobsson, Sirly Tiik, Konstantin Lisenkov.

It seems possible to replace the template with Template:IPC athlete: {{IPC profile|surname=Tretheway|givenname=Sean}} becomes {{IPC athlete|sean-tretheway}}. It is safer to take the parameter from the article title than from the IPC profile template though: at Jacob Ben-Arie, {{IPC profile|surname=Ben-Arie|givenname=<!--leave blank in this case, given name not listed-->}} should become {{IPC athlete|jacob-ben-arie}}[1].

If the replacement is too complicated, then simply removing the IPC profile one is also an option, as it makes no sense to keep templates around which produce no useful results. Fram (talk) 11:34, 5 January 2021 (UTC)
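
For anyone picking this up, the core substitution is mechanical. Below is a minimal sketch (not any existing bot's code) using the mwparserfromhell library, assuming the slug is simply the article title lower-cased and hyphenated, as in the examples above; titles with disambiguators or unusual characters would need manual review.

    import re
    import mwparserfromhell

    def replace_ipc_profile(wikitext, article_title):
        """Swap {{IPC profile}} for {{IPC athlete}}, deriving the slug from
        the article title rather than from the old template's parameters."""
        base = re.sub(r"\s*\(.*?\)$", "", article_title)   # drop "(athlete)" etc.
        slug = re.sub(r"[\s_]+", "-", base.strip()).lower()
        code = mwparserfromhell.parse(wikitext)
        for tpl in code.filter_templates():
            if tpl.name.matches("IPC profile"):
                code.replace(tpl, "{{IPC athlete|%s}}" % slug)
        return str(code)

    # replace_ipc_profile("... {{IPC profile|surname=Ben-Arie|givenname=}} ...",
    #                     "Jacob Ben-Arie")  ->  "... {{IPC athlete|jacob-ben-arie}} ..."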

@Fram: if I’m understanding you right, is the template totally redundant and should all transclusions be replaced with IPC athlete? If so, you can just TfD the template, then an existing bot with a general TfD authorisation can easily do this task. It’s also probably faster (it’ll probably take at least 7 days for community input + BRFA for the task alone otherwise). ProcrastinatingReader (talk) 00:33, 6 January 2021 (UTC)
Thanks, I'll bring it up at TfD then, didn't know that their "power" went that far (but it is a good thing). Fram (talk) 08:23, 6 January 2021 (UTC)
Primefac given TfD is closed, can your bot action this? ProcrastinatingReader (talk) 16:05, 2 February 2021 (UTC)
If it's at WP:TFDH, it will be actioned. Primefac (talk) 17:30, 2 February 2021 (UTC)
I've done 300 of these and can safely say that it's too complex for a bot, but quite quick with AWB. --Trialpears (talk) 17:28, 1 April 2021 (UTC)
Somehow I managed to do the other 500 as well. I guess it's done. --Trialpears (talk) 22:33, 1 April 2021 (UTC)

Convert to cite Twitter

I would love for a bot or script which could allow users to turn a page's citations which include a URL to Twitter into instances of {{cite tweet}}. –MJLTalk 20:17, 3 February 2021 (UTC)

  • A script would likely be better, unless it can safely be run on all Twitter URLs in ref tags (in which case a bot may be acceptable). ProcrastinatingReader (talk) 21:10, 3 February 2021 (UTC)
  • Why do we cite Twitter again? :) --Izno (talk) 00:42, 4 February 2021 (UTC)
    • This is not an appropriate task for a bot, given CITEVAR. – Jonesey95 (talk) 16:22, 4 February 2021 (UTC)
      Wouldn't this just be an extension of the work already done by Citation bot? — The Earwig ⟨talk⟩ 16:54, 4 February 2021 (UTC)

Fixing proper linking for every Marvel-related page and grammatical corrections

This bot is needed for fixing grammatical errors. I noticed there were a number of grammatical errors in pages which were not attended to by any users or administrators. — Preceding unsigned comment added by Kohcohf (talkcontribs)

You will have to be much more specific in what you are requesting. See also WP:CONTEXTBOT. —  HELLKNOWZ   ▎TALK 15:22, 18 February 2021 (UTC)

Create and maintain a category of pages in draftspace that are not redirects

Trying to browse through and improve the drafts at Special:AllPages/Draft: is hard because there are so many redirects. Is it possible to create and maintain a category of drafts that are not redirects for easier navigation?

I guess you could use Special:PrefixIndex/Draft: which will give you all pages in the draftspace, and you can filter and remove redirects. Primefac (talk) 01:42, 19 February 2021 (UTC)

Edmonds Community College

Edmonds Community College is now known as Edmonds College. I request a bot that changes all articles saying Edmonds Community College so that the wikilink points to Edmonds College. This is a college in Washington state, USA. 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 (talk) 09:07, 2 April 2021 (UTC)

Not a good bot task. For example, here the text should remain Edmonds Community College, as that was the name of the college at that time. Manually changing those cases where the text should be updated is the way to go. Fram (talk) 09:12, 2 April 2021 (UTC)

OK 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 (talk) 09:15, 2 April 2021 (UTC)

Internet Archive

Links die on the internet. I request a bot that checks whether a citation reference to the Internet is mirrored on the Internet Archive and then rewrites the link to its Archive copy, as that won't suffer from link rot. This preserves references on Wikipedia which may otherwise vanish over time. Big job! I know! Smart coder needed. 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 (talk) 09:13, 2 April 2021 (UTC) --- This could be used to really fix the dead links backlog by hunting to see if the link was mirrored and then rewriting it to the archive to show the link. --- A second idea is to check if the link is mirrored and, if so, to add language giving a backup link on the Archive in case the first reference link has died from being removed from the Internet.
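
As the reply below notes, this already exists (InternetArchiveBot). Purely for illustration, the core lookup against the Wayback Machine's public availability endpoint is small; a sketch, with error handling omitted:

    import requests

    def closest_snapshot(url):
        """Return the Wayback Machine's closest archived copy of `url`,
        or None if nothing has been archived."""
        r = requests.get("https://archive.org/wayback/available",
                         params={"url": url}, timeout=30)
        closest = r.json().get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            return closest["url"]
        return None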

This bot already exists * Pppery * it has begun... 01:04, 4 April 2021 (UTC)
Nice!!! Can somebody tell the bot to go deal with dead links? OK, I see cyberpower runs it, I'll go ask them. 04:34, 4 April 2021 (UTC)

Request for bot called "EnergyBot"

I want to request this bot called "EnergyBot". EditJuice (talk) 16:16, 7 April 2021 (UTC)

EditJuice, and what exactly would this bot do? GeneralNotability (talk) 16:16, 7 April 2021 (UTC)

The bot would make talk page archives. EditJuice (talk) 16:19, 7 April 2021 (UTC)

@EditJuice: Do you have particular talk page archiving needs that can't be handled by one of the existing archive bots (see also instructions for setting them up)? Vahurzpu (talk) 16:58, 7 April 2021 (UTC)

No, I don't even have an archive. I request the bot for later, when I will have an archive already. EditJuice (talk) 17:15, 7 April 2021 (UTC)

Then just use one of the existing ones. Headbomb {t · c · p · b} 19:14, 7 April 2021 (UTC)

Clean up links to hijacked sentragoal.gr

The site sentragoal.gr has been hijacked by a gambling site and we should be looking to deactivate active reference links to that source. If someone is able to manage that easily, that would be fantastic. — billinghurst sDrewth 23:55, 13 February 2021 (UTC)

By which you mean that links in CS1/2 templates should get a parameter |url-status=usurped and other links should... have what happen to them? --Izno (talk) 00:44, 14 February 2021 (UTC)
There are only 141 instances of the text "sentragoal.gr" in articles. If you look at Olympiacos F.C. for example, you see that the reference is through a {{webarchive}} link. I suggest that the best outcome would be to supply archived urls for as many as possible. Isn't there a bot that tries to recover dead links by that technique? Maybe it could do this job with minor modifications? --RexxS (talk) 01:23, 14 February 2021 (UTC)

Add ELP links to language info boxes

Notice of this request has been posted at WT:LANG, and has received only positive comments (thanks or text).

I formatted an example by hand at Dâw language. There are a bit over 3000 URLs to link to. They provide demographic data and reliable sources for the languages, and are an alternative to Ethnologue, which is now behind a very expensive paywall. (And in some cases ELP is a check on Ethnologue, as the two sites often rely on different primary sources and often give very different numbers.)

Last time I did something like this it was handled by PotatoBot, but Anypodetos tells me that's no longer working.

Goal

Add links to the Endangered Languages Project (ELP) from our language articles through {{Infobox language}}, parallel to the existing links to other online linguistic resources (ISO, Glottologue, AIATSIS, etc.)

Data

The list of ELP language names and associated ISO codes and URLs is here. I would be happy if the entries in the table with single ISO codes were handled by bot. I can do the rest by hand, but see below.

There are three columns in the table. Two contain values for the bot to add to the infobox. The third is for navigation, an address for the bot to find the correct WP article to edit.

Action

The bot should add params "ELP" and "ELPname" to the infobox, using the values in the columns 'ELP URL' and 'ELP name' in the data table.

The value in the column 'ISO code' is to verify that the bot is editing the correct WP article. The bot should follow the WP redirect for that ISO code and verify that the ISO code does indeed occur in the infobox on the target page.

Example

For example, say one of the entries in the data table has the ISO code [abc]. The WP redirect for that code is ISO 639:abc. That should take the bot to a language article, and the bot should verify that the infobox on that article does indeed have a param ISO3 = abc or lc[n] = abc (where [n] is a digit).

If there isn't a match (and it's been years since we've run a maintenance bot to verify them all), then that ELP entry should be tagged as having a bad WP redirect for the ISO.

Complications

There is sometimes more than one ISO code per language infobox, because we don't have separate articles for every ISO code. (This is where the params lc[n] come in.) If the bot finds that there's already an ELP link in the box from a previous pass, then it should add the new codes as ELP[n] and ELPname[n], and keep a list so we can later code the template to support the article with the largest number [n] of links.

There is occasionally more than one infobox in a WP language article. It would probably be easiest if I did such cases by hand, since there are probably very few of them (if any), unless the bot can determine which infobox on the page contains the desired ISO code.

The bot should test that the external URL lands on an actual page. For instance, a language in the data table is listed as having URL 8117, but following 8117 gets the error message "Page not found :(". Such bad URLs should be tagged both for this project and for submission to the ELP.

ELP entries with multiple ISO codes (optional)

If the programmer of the bot wishes to, it would be nice if they could do a run for the 40+ ELP entries that each have 2 ISO codes. (Or have three, if the coding is easy enough, but there are only 16 of those. Anything more than that I should probably do by hand.) If the rd's for those two ISO codes both link to the same Wikipedia article, then the ELP params should be added as above. If they link to different articles, they should be tagged and I'll do them by hand.

Please ping me if you respond. — kwami (talk) 11:14, 16 January 2021 (UTC)
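
To make the above concrete for a prospective coder, here is a rough sketch of the single-ISO-code pass (pywikibot and mwparserfromhell; the data-table parsing is omitted, the ELP/ELPname parameter names follow the request above, and it assumes the 'ELP URL' column holds a full page address and that ELP's "Page not found" page returns a non-200 status):

    import re
    import mwparserfromhell
    import pywikibot
    import requests

    SITE = pywikibot.Site("en", "wikipedia")

    def add_elp_link(iso_code, elp_url, elp_name):
        """One data-table row: follow the ISO 639 redirect, verify the code
        is in {{Infobox language}}, then add ELP/ELPname if not present."""
        # 1. The external URL must land on an actual page.
        if requests.get(elp_url, timeout=30).status_code != 200:
            return "bad ELP URL - tag for this project and for ELP"

        # 2. Follow the WP redirect for the ISO code.
        rd = pywikibot.Page(SITE, "ISO 639:" + iso_code)
        if not rd.exists() or not rd.isRedirectPage():
            return "bad WP redirect for the ISO code - tag the ELP entry"
        article = rd.getRedirectTarget()

        code = mwparserfromhell.parse(article.text)
        for tpl in code.filter_templates():
            if not tpl.name.matches("Infobox language"):
                continue
            # 3. Verify iso3 = abc or lc[n] = abc in this infobox.
            codes = [str(p.value).strip() for p in tpl.params
                     if re.fullmatch(r"iso3|lc\d+", str(p.name).strip(), re.I)]
            if iso_code not in codes:
                continue
            if tpl.has("ELP"):
                return "already linked - would need ELP2/ELPname2 handling"
            tpl.add("ELP", elp_url)
            tpl.add("ELPname", elp_name)
            article.text = str(code)
            article.save(summary="Adding Endangered Languages Project link")
            return "done"
        return "ISO code not found in any infobox - tag the ELP entry"

Because the loop checks every infobox on the page for the code, articles with more than one infobox are handled implicitly, per the note above.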

Consider adding this info to Wikidata instead and then pulling it from there. There are multiple systems there that can probably make this a fairly quick job. --Izno (talk) 08:31, 27 January 2021 (UTC)
@Kwamikagami: To elaborate a bit more on Izno's comment: most of the unambiguous ELP IDs are already in Wikidata. I'm currently in the process of importing the corresponding ELP names. The IDs are on the statements as endangeredlanguages.com ID (P2192) and the names as subject named as (P1810) qualifiers. The next step would be to modify {{Infobox language}} to use these, but I'm not familiar enough with Module:WikidataIB to do this myself. Vahurzpu (talk) 19:44, 27 January 2021 (UTC)
We can summon RexxS and he will appear as if by magic to fix all things. --Izno (talk) 20:00, 27 January 2021 (UTC)
I'm not familiar with Wikidata or how to access it through an info box. Probably not a bad thing to learn. It would be nice to have a central repository to make updates easier. BTW, I got the list from ELP. Some of the URLs haven't been created yet. That includes all of the higher numbers and a few scattered lower numbers. I think I've weeded those out, though. — kwami (talk) 21:57, 27 January 2021 (UTC)
@Kwamikagami and Vahurzpu: I'm always happy to help anyone learn, and I can give you an example of fetching the data you want from Wikidata, if you'd like.
You can get the value of endangeredlanguages.com ID (P2192) from Dâw (Q3042278) like this:
{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278}} → 2547
and the qualifier subject named as (P1810) like this:
{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278 |qual=P1810 |qo=y}} → Dâw
You would normally place those calls in the infobox definition, but that will become complicated if there are multiple values for the language's ELP identifier. I can't find one right now. Are there any? If so, I'll write a custom function call for you tomorrow, when I've found an article to test it on.
Otherwise, I have modified Template:Infobox language/sandbox to show you how it would work in the Dâw language infobox. See if that does what you want and let me know. --RexxS (talk) 01:09, 28 January 2021 (UTC)
That looks good, thanks. But what decides whether an ELP code appears, and which one appears? (Not counting the manual override.) — kwami (talk) 07:06, 28 January 2021 (UTC)
@Kwamikagami: Whether an ELP code appears or not depends on whether or not it has the requisite data on its associated Wikidata item (these are linked on the sidebar; for an example, see Dâw (Q3042278)). About 2900 pages currently have ELP IDs on their Wikidata items, and assuming none of that has changed in the last 12 hours, all those have names.
In the case where there are multiple ELP IDs for a single page: it isn't handled cleanly (to see exactly what it looks like, go to Bonan language, switch {{Infobox language}} to {{Infobox language/sandbox}} and preview). However, there are only 7 pages where this would apply currently, and those probably need a manual override anyway. Vahurzpu (talk) 07:50, 28 January 2021 (UTC)

Sorry, I didn't follow any of that. I don't see any of the data at Wikidata. E.g., I can't tell which are the 7 pages with multiple IDs, or how it was determined which page gets which ELP ID. — kwami (talk) 08:13, 28 January 2021 (UTC)

@Vahurzpu: as promised, I've made a custom module Module:Endangered Languages Project to deal with fetching the ELP data. It handles multiple values and allows a local value to override the Wikidata value. If you now look at Bonan language, you'll see the format I've used for multiple ELP values. Let me know if you want something different.
@Kwamikagami: You don't need to know how many ELP values are available in the Wikidata entry as the code now takes care of that. If you update Template:Infobox language from its sandbox, every article which already has ELP and ELPname parameters will remain unchanged, and every article that doesn't have those parameters set will try to fetch them from the corresponding Wikidata entry and use those. Please let me know if you need more explanation. --RexxS (talk) 13:30, 28 January 2021 (UTC)

Thanks, @RexxS:! That looks great!

Where would we go to update the ELP values?

Could you generate a list of ELP ID's with single ISO codes that are not being triggered, so I could fix them manually? I've noticed several, but would rather not search all 3000 to check.

Could you add a name to the refs so we could call them with <ref name=ELP/>, <ref name=ELP2/>? And could you add a link to Category:Language articles with manual ELP links for articles that have a value in ELP? (I've done it for ELP2 in the template.)

A slight hiccup, when ELP is entered manually without ELPname, nothing displays. Something should show, if only to alert editors that the infobox needs to be fixed.

BTW, see Yauyos–Chincha Quechua, where there is a second, partial match. (The only ELP article said to be a subset match to an ISO code.) I used ELP2 to add that to the automated link.

Gelao language has up to ELP4. — kwami (talk) 22:08, 28 January 2021 (UTC)

@Kwamikagami: I think we're talking at cross purposes. Izno and Vahurzpu suggested using Wikidata to store the ELP code and ELP name, and I created a way to fetch the information from Wikidata. You seem to want to add the information manually to each article, or have a bot do that for you. Either way will work, but obviously not both at the same time. Personally, I'd recommend storing the ELP identifiers on Wikidata because that makes them available to all 300+ language Wikipedias, but you may prefer not to. If there is a list of these ELP identifiers, then a bot could add them to Wikidata, although you'd need someone on Wikidata to do the request for you.
I've just added the four ELP values for Gelao language to Gelao (Q56401) on Wikidata and removed the manual parameters from the article. As you can see, the information is now fetched from Wikidata. I've added code to generate a name for each reference, ELP1, ELP2, etc.
Previously, when ELPname was added without ELP, nothing was displayed. I coded the module so that there is nothing displayed in either case, which is preferable for readers. But I understand that you need something for editors to see where problems may occur, so I've amended it to show the parameter (unlinked) and added a tracking category Category:Language articles with missing ELP parameters to catch cases where one ELP parameter is missing and can't be supplied from Wikidata. --RexxS (talk) 15:03, 29 January 2021 (UTC)

@RexxS: Actually, I do prefer Wikidata, but I didn't know how & where to go about modifying it.

I think there will still be some need to augment it manually, though. In other language WP's, they may decide to follow ISO divisions where we do not, or have other differences in scope that would not be appropriate in WD. So, unless there's a work-around (I'm not familiar with WD), we should probably have the universal elements in WD for every WP to access, and then manual overrides when some particular WP wishes to diverge from that, for whichever reason. (E.g. deciding that ISO or ELP is inaccurate, based on the sources used for an article.) Wouldn't putting everything in Wikidata cause conflicts between different-language WPs?

Also, how can we generate a list of the ELP ID's that are called in WP-en, so I can fix the ones that aren't? — kwami (talk) 01:17, 30 January 2021 (UTC)

Hi Kwamikagami: I'm no Wikidata expert, but I hope this query may help (use the blue run button). It should show you a table of Wikidata item IDs, ELP IDs, ELP names, and enwiki article titles where there is a connection. (There are multiple rows for cases where an item has multiple ELP IDs.) — The Earwig talk 02:58, 30 January 2021 (UTC)
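
For later reference, the report is reproducible with a query of roughly this shape against the public Wikidata Query Service (a sketch; the exact query linked above may differ):

    import requests

    QUERY = """
    SELECT ?item ?elpId ?elpName ?article WHERE {
      ?item p:P2192 ?st .
      ?st ps:P2192 ?elpId .
      OPTIONAL { ?st pq:P1810 ?elpName . }
      ?article schema:about ?item ;
               schema:isPartOf <https://en.wikipedia.org/> .
    }
    """

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"}, timeout=60)
    for row in r.json()["results"]["bindings"]:
        print(row["elpId"]["value"],
              row.get("elpName", {}).get("value", ""),
              row["article"]["value"])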
Thanks, Earwig!
And I must say, that's a user name I'm not going to forget soon! :-) — kwami (talk) 03:11, 30 January 2021 (UTC)

That reduces it to about 500 articles I need to check by hand or manually add to WikiData. — kwami (talk) 09:29, 2 February 2021 (UTC)

@RexxS, Vahurzpu, and The Earwig: In the bottom section at Wikipedia talk:WikiProject Languages/List of ELP language names (#Names in the 'Languages with single ISO codes' ...) are the 500+ ELP names that should be linked from WP articles but aren't. Sometimes that's because the WP article covers more than one ELP language, but other times I don't see why there's no link. Maybe just a mismatch in names?

Would it be possible to add those ELP names & links to the WP articles through Wikidata? (To the WP articles that those blue-linked ELP names redirect to.) I've done a few manually, and can revert that once they're in Wikidata. — kwami (talk) 04:20, 3 February 2021 (UTC)

kwami: I looked through a few examples on that page and in many cases it's not obvious what to do. For Nenets, your list gives an ELP ID of 5847, which points to an invalid page on ELP's website, so I don't think this should be added. For Tujia, the article covers both Northern and Southern dialects, but there are separate Wikidata items for Northern Tujia (Q12953229) and Southern Tujia (Q12633994), and the ELP IDs are (rightfully) located on those items instead of the general Tujia item. For this general problem, we may consider borrowing an approach used by {{Taxonbar}}. Rather than manually add the ELP IDs for the dialects to the main language infobox, we add the Wikidata items of the constituent dialects and the template automatically pulls the ELP IDs from those items instead of the page's item. That is, rather than adding |ELP=4225|ELP2=1744 to the template, we add |from=Q12953229|from2=Q12633994, and the template pulls the ELP IDs from there. This has an advantage of making it easy to maintain other dialect identifiers if we choose to move more identifiers to Wikidata. I am not sold on this approach, but wanted to propose it. — The Earwig ⟨talk⟩ 07:17, 14 February 2021 (UTC)
Thanks, Earwig. Is there a way to automate that? — kwami (talk) 07:33, 14 February 2021 (UTC)
It's possible to automate either that or adding the ELP IDs directly. I looked around and I don't see any infoboxes doing what I described, so it might be too esoteric of a proposal. I'd appreciate input from someone more familiar with infobox design. The situation we have is rather hairy, with many ELP IDs invalid or attached to Wikidata items that are different from the articles that discuss them. — The Earwig ⟨talk⟩ 00:30, 15 February 2021 (UTC)

A bot for adding missing date format tags

There appear to be many many thousands of articles that are missing {{use mdy dates}} or {{use dmy dates}}, but which are linked to information that is sufficient to determine which tag should be used. For instance, I think we can safely assume that an untagged page for a high school in a subcategory of Category:High schools in the United States (or with country (P17) = United States of America (Q30) on Wikidata) ought to be using MDY, or that an untagged British biography page not in any categories for expatriates or dual nationality ought to be using DMY. The 3500 pages that use {{American English}} but have no tag seem like an even easier call.

I'd like to see a bot that goes through old pages and adds the appropriate tags where it can make a firm determination. It would then operate periodically to add DMY or MDY tags to new pages as they are created (but would not override any pages tagged manually). This would help reduce the incidence of the ugly 2021-02-15 dates, and save some amount of editor work. It would be very low-risk, as even if there's some unforeseen circumstance that causes the bot to occasionally mess up, there's very little damage done (e.g. Americans can still understand DMY fine, likewise for Brits with MDY, and most would probably prefer either to YYYY-MM-DD) and correction would be easy.

Does anyone want to take this on? {{u|Sdkb}}talk 23:27, 15 February 2021 (UTC)
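
A rough sketch of the category-based half of the idea, to show how small the task is (pywikibot; the category, tag wording and skip rules here are placeholders that a BRFA would need to pin down, and it deliberately never touches pages that already carry either tag):

    import re
    from datetime import date
    import pywikibot
    from pywikibot import pagegenerators

    SITE = pywikibot.Site("en", "wikipedia")
    DATE_TAG = re.compile(r"\{\{\s*use (mdy|dmy) dates", re.I)

    def tag_category(cat_title, template):
        """Prepend {{Use mdy dates}} or {{Use dmy dates}} to untagged
        articles in a category tree with unambiguous national ties."""
        cat = pywikibot.Category(SITE, cat_title)
        for page in pagegenerators.CategorizedPageGenerator(cat, recurse=True):
            if page.namespace() != 0 or page.isRedirectPage():
                continue
            if DATE_TAG.search(page.text):
                continue  # never override an existing (possibly manual) tag
            stamp = date.today().strftime("%B %Y")
            page.text = "{{%s|date=%s}}\n%s" % (template, stamp, page.text)
            page.save(summary="Adding {{%s}} per strong national ties" % template)

    # e.g. tag_category("Category:High schools in the United States", "Use mdy dates")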

@Sdkb: How would a bot be able to determine when the current format (even if it's "the ugly 2021-02-15 dates") should be retained per MOS:DATERET? GoingBatty (talk) 04:11, 16 February 2021 (UTC)
GoingBatty, MOS:DATERET carves out an exception for switches based on strong national ties to the topic, which would be the case for the categories here. {{u|Sdkb}}talk 05:43, 16 February 2021 (UTC)

Bot to update from imdb

Hi there,

I am looking for a bot to update a company page [1] with shows that are released, as per the corresponding IMDb page [2]. Is this possible? I apologise if this is not the place for this kind of request, I am new to using bots.

Many thanks

MonkeyProdCo (talk) 15:48, 16 April 2021 (UTC)

IMDb isn't considered a reliable source on here, so this probably won't be possible. ProcrastinatingReader (talk) 15:59, 16 April 2021 (UTC)


Request for bot called "BCuzwhynot bot"

I'm very busy in real life, and I need a bot to do the editing for me; he will use AutoWikiBrowser and help with vandalism, edit warring, and other problems and situations. This will not be my only bot, I am also requesting to have 3 bots. He will have a user page, although I do not have one. This will be a good faith bot. If he malfunctions, press the emergency shutoff button. — Preceding unsigned comment added by BCuzwhynot (talkcontribs)

@BCuzwhynot: If you have a specific bot, with well-defined logic, written, you can request approval through WP:BRFA. But given you are a relatively new editor, your first step is to gain familiarity with Wikipedia and how it works, because edit warring, vandalism, etc... are not suitable for bot tasks or AWB bots. Headbomb {t · c · p · b} 15:26, 10 April 2021 (UTC)

Bot for 5 month notices to draft page creators

Hello, I was hoping that a new bot could be created to do what Hasteur Bot used to do which was to notify editors that their drafts were coming up on their 6 month period of no activity when they could be deleted as stale drafts (CSD G13). These notices were sent out after a draft had been unedited for 5 months. We have been missing this since the summer which has resulted in what I think is a higher number of draft deletions and a high volume of requests for restoration at WP:REFUND. I think oftentimes, editors forget that they have started a draft (especially those editors who start a lot of drafts simultaneously), and these reminder notices are very useful for page creators as well as for editors and admins who regularly patrol stale draft reports.

Would it be possible for a bot creator to just reuse code from Hasteur Bot? But I'm just looking for a bot that will do exactly what it used to before it was disabled due to the bot creator's passing. See Special:Contributions/HasteurBot for examples of what I'm looking for. Thank you. Liz Read! Talk! 00:18, 7 February 2021 (UTC)
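
For whoever takes this up, the logic is compact even without reusing HasteurBot's code. A sketch (pywikibot; the five-month cut-off, message wording and section heading are placeholders, and a production task would respect opt-outs, skip already-notified creators, and work from a database report rather than walking every draft):

    from datetime import datetime, timedelta
    import pywikibot

    SITE = pywikibot.Site("en", "wikipedia")
    CUTOFF = datetime.utcnow() - timedelta(days=150)   # roughly five months

    def notify_stale_drafts():
        """Warn draft creators about a month before G13 eligibility."""
        for draft in SITE.allpages(namespace=118, filterredir=False):
            last = draft.latest_revision.timestamp.replace(tzinfo=None)
            if last > CUTOFF:
                continue                               # still being edited
            creator = draft.oldest_revision.user
            talk = pywikibot.User(SITE, creator).getUserTalkPage()
            talk.text += ("\n\n== Your draft may soon be eligible for deletion ==\n"
                          "[[%s]] has not been edited in five months and will "
                          "become eligible for deletion under [[WP:G13]]. ~~~~"
                          % draft.title())
            # Not a minor edit: minor + bot edits suppress the "new messages"
            # notification, as noted later in this thread.
            talk.save(summary="Notifying creator of draft approaching G13",
                      minor=False)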

I see that MDanielsBot 7 was approved to take over from HasteurBot, but it seems to be disabled at the moment. Pinging operator Mdaniels5757. — The Earwig ⟨talk⟩ 00:22, 7 February 2021 (UTC)
They're AFK for about a fortnight, and have already stated that they do not plan on continuing this task. Primefac (talk) 00:25, 7 February 2021 (UTC)
In that case, I don't mind taking a look into this, though it might be a little while before I can get it done and someone else is free to grab it from me. — The Earwig ⟨talk⟩ 00:31, 7 February 2021 (UTC)
If it makes it easier, the code is available (it's what Mdaniels was using). Primefac (talk) 00:42, 7 February 2021 (UTC)
Yep, I'm actually part of the Toolforge project, and apparently have been for several years, though I mostly forgot about it. Will probably move it elsewhere though. — The Earwig ⟨talk⟩ 00:57, 7 February 2021 (UTC)
Hah! I kinda figured, just meant that it's not always obvious what is and isn't available (for example, I'm reminded that I told someone I'd upload my Task 30 AWB module...). Primefac (talk) 00:59, 7 February 2021 (UTC)
I believe that MDanielsBot had some strange opt-in aspect of it which would not make it effective since many draft creators are not regular editors. They do often have settings to receive email messages when someone posts a notice on their talk page so that's why a talk page notice is very useful. Hasteur Bot has been inactive since July 2020 so while this new bot is greatly needed, it is not urgent. If this task could just make it on someone's To Do list, I would be happy! Liz Read! Talk! 00:20, 8 February 2021 (UTC)
It's on my to-do list now. I've run bots in the past, and given the source is available it's like playing on easy(ish) mode! Heh. If The Earwig gets there first, no biggie, at least then we'd have two people willing to run the thing. ƒirefly ( t · c ) 11:13, 8 February 2021 (UTC)
Turns out my bot still has an approved BRFA for doing this, so I can restart the task immediately. I'd completely forgotten about that! ƒirefly ( t · c )
@Liz: - now running. I'm also going to revive the BRFA for the actual CSD G13 tagging, as that is 100% a job for a bot and not humans. ƒirefly ( t · c ) 17:26, 11 February 2021 (UTC)
@Firefly: Thanks for taking this on, but the bot is sending messages with edits marked as minor edit + bot edit, which means users will NOT get the "you have new messages" or email notifications. Please turn off the minor edit flag as people may miss the message otherwise. – SD0001 (talk) 19:30, 11 February 2021 (UTC)
Strange, it’s running the exact same code as it was before. I’ll fix it. ƒirefly ( t · c ) 19:35, 11 February 2021 (UTC)
Now fixed! Thanks for the bug report! ƒirefly ( t · c ) 22:16, 11 February 2021 (UTC)
I don't have the knowhow to take this on myself, but throwing out some quick support, it should definitely be done.
I wish that tasks like this alerted us when they stopped working, as there is a lot of damage being done while they are inoperable. How many notable pages have we lost because the creator only got notified of the deletion and gave up rather than go through the hurdle of a refund? {{u|Sdkb}}talk 04:19, 17 February 2021 (UTC)
Per its BRFA, SDZeroBot 9 will eventually be able to monitor the activity of other bots, but it's currently stalled. Vahurzpu (talk) 04:26, 17 February 2021 (UTC)
For what it's worth, the relevant portion of the task (keeping the list updated) has actually been approved. Primefac (talk) 11:35, 17 February 2021 (UTC)

Maintain a list of articles that are possibly wrongly marked stub

I'd say more than a few articles are marked as stubs but assessed differently by WikiProjects. Would it be a good idea to maintain a (possibly cached) list of these for maintenance purposes? — Preceding unsigned comment added by 5a5ha seven (talkcontribs) 23:17, 19 February 2021 (UTC)

@5a5ha seven: Seems like a reasonable report if a WikiProject was willing to resolve the items on the report. Maybe a WikiProject member could request this on Wikipedia:Database reports? (Please remember to sign your posts on talk pages by typing four keyboard tildes like this: ~~~~. Or, you can use the [ reply ] button, which automatically signs posts.) GoingBatty (talk) 23:45, 19 February 2021 (UTC)

Make section headings unique in year articles

Sections with the names of the months of the year are repeated twice or thrice, in "Events", "Births", and "Deaths". To make section headings unique, I propose the following changes be made:

replace regex

(\w)( ?)===

with

$1 {{none|births}}$2===

or

$1 {{none|deaths}}$2===

depending on the section. JsfasdF252 (talk) 17:31, 5 February 2021 (UTC); updated 17:38, 5 February 2021 (UTC)
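
Should the WikiProject agree, the substitution itself is a one-liner to script; a Python rendering of the proposed regex (with $1/$2 written as \1/\2) would be roughly:

    import re

    HEADING = re.compile(r"(\w)( ?)===")

    def disambiguate_headings(section_text, label):
        """Apply the proposed rewrite to the month headings of one section,
        where label is "births" or "deaths" depending on the parent section."""
        return HEADING.sub(r"\1 {{none|%s}}\2===" % label, section_text)

    # disambiguate_headings("=== January ===", "births")
    #   -> "=== January {{none|births}} ==="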

Before writing a bot, WT:WikiProject Years might be interested in discussing that idea. Certes (talk) 17:44, 5 February 2021 (UTC)
Per my usual request - how wide-spread of an issue are we talking about? Tens, hundreds, or thousands of potential pages? Primefac (talk) 17:47, 5 February 2021 (UTC)
There are 2,700+ years in Category:Years, but doing a bit of sampling indicates that the individual month repetitions only start around 1900, so that precise problem only arises on fewer than a couple of hundred pages. However, looking at the span from around 1500 onwards, the sections are usually headed "January–June" and "July–December", with a gradual shift to four subsections, "January–March", etc. Replacing those when they are third level with "... births" or "... deaths" respectively wouldn't cause any problem, though, apart from the pedants who will quote MOS:SECTIONSTYLE ("should not refer to a higher-level heading"). Including those from 1500 on would obviously involve a few hundred more year articles. --RexxS (talk) 21:33, 5 February 2021 (UTC)
What's the purpose? 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 (𝗍𝗮𝘭𝙠) 22:03, 21 February 2021 (UTC)

Double bolding

Per MOS:BOLD, "boldface is applied automatically [in] [t]able headers. Manually added boldface markup in such cases would be redundant and is to be avoided.". Special:Search/insource:/\|\+ *'''/ currently returns over 17,000 results. I suggest replacing

\|\+( *)'''([^'\|]+)'''\|

by

|+$1$2|

in all of the occurrences. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 (𝗍𝗮𝘭𝙠) 22:13, 21 February 2021 (UTC)
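
If this does get folded into genfixes or a COSDAY run, the proposed substitution translates directly into a script; the pattern below is kept exactly as proposed above (including the trailing pipe, which is worth double-checking against real captions):

    import re

    DOUBLE_BOLD = re.compile(r"\|\+( *)'''([^'\|]+)'''\|")

    def unbold_caption(wikitext):
        """Strip redundant ''' markup from table captions per MOS:BOLD."""
        return DOUBLE_BOLD.sub(r"|+\1\2|", wikitext)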

Declined Not a good task for a bot. Has no substantive impact so fails WP:COSMETICBOT. ƒirefly ( t · c ) 22:21, 21 February 2021 (UTC)
Good application for WP:COSDAY. -- GreenC 22:28, 21 February 2021 (UTC)
Could also be made part of WP:GENFIXes. Headbomb {t · c · p · b} 23:28, 21 February 2021 (UTC)
"It will end up making double-bold (900 weight) fonts that are excessive." This really does look ugly. Given that CBD is supposed to be pretty soon though, I'll be okay with it being done then. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 (𝗍𝗮𝘭𝙠) 23:40, 21 February 2021 (UTC)
I haven't seen double bold fonts in an eternity on Firefox, which was one of the primary affected browsers. A gen fix for this might be a good idea. --Izno (talk) 00:11, 22 February 2021 (UTC)
Doing this either on WP:COSDAY or through GENFIXes sounds sensible to me. ƒirefly ( t · c ) 09:35, 22 February 2021 (UTC)
Adding it to the AWB genfixes is definitely a good way to deal with this. Primefac (talk) 13:57, 22 February 2021 (UTC)

Add {{R to section}} to redirects

I am proposing a bot that will do the following task: It will add the template {{R to section}} to all the redirects on Wikipedia that link to a section. Yesterday I made almost 200 edits trying to add the template, and I was later inspired to add this request by this diff. 🐔 Chicdat  Bawk to me! 11:13, 23 April 2021 (UTC)

What if it encounters a broken section anchor? JsfasdF252 (talk) 00:53, 26 April 2021 (UTC)
Another bot is adding anchors to mend such cases. Certes (talk) 09:38, 26 April 2021 (UTC)
I do not mean that; what I mean is edits like this. 🐔 Chicdat  Bawk to me! 12:49, 26 April 2021 (UTC)
In reply to JsfasdF252: if the proposed bot encounters a broken section anchor and decides to skip the redirect, it may be able to return to that redirect later after a bot has mended the link. Cewbot 6 fixes the links (request). Dexbot adds anchors (request). There is also a report (request). Certes (talk) 13:28, 26 April 2021 (UTC)
@Chicdat: no way, this is a bad idea. There is a distinction between {{R to section}} and {{R to embedded anchor}} - and if there was not, then the rcat could be detected automatically (I am planning on introducing this for many other rcats soon). Elli (talk | contribs) 17:44, 26 April 2021 (UTC)
A bot could easily distinguish between the two. Headbomb {t · c · p · b} 19:56, 26 April 2021 (UTC)
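
For what it's worth, the check Headbomb describes is easy to sketch: if the redirect's #fragment matches a real heading on the target page it is an {{R to section}}, and if it only matches an {{anchor}} or id= somewhere in the text it is likely an {{R to embedded anchor}}. A rough, exact-match-only illustration (a real bot would normalise markup first):

    import re
    import pywikibot

    SITE = pywikibot.Site("en", "wikipedia")

    def classify_redirect(title):
        """Return 'section', 'anchor' or None for a redirect to a #fragment."""
        rd = pywikibot.Page(SITE, title)
        if not rd.isRedirectPage():
            return None
        m = re.search(r"\[\[[^\]#|]*#([^\]|]+)", rd.get(get_redirect=True))
        if not m:
            return None                     # plain redirect, no fragment
        fragment = m.group(1).strip()
        text = rd.getRedirectTarget().text
        headings = re.findall(r"^=+\s*(.*?)\s*=+\s*$", text, re.M)
        if fragment in headings:
            return "section"                # {{R to section}}
        if fragment in text:
            return "anchor"                 # likely {{R to embedded anchor}}
        return None                         # broken anchor: skip, revisit later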

Minor fix in Persian village articles

Based on this search, there appear to be around 50,000+ articles (mostly on populated places in Iran) created by User:Carlossuarez46 that use the incorrect capitalization of "romanized".

The word is capitalized when it means "to make something Roman in character", and lowercase when it means "convert to Latin script", as it does in these cases. This distinction is reflected across all of our articles (Category:Romanization), and is supported by dictionaries [2], encyclopedias [3][4][5], etc.

If a bot task were made to fix this, it would also probably be prudent to retarget the wikilinks like so: [[Romanization of Persian|romanized]], which is a better target. — Goszei (talk) 23:56, 17 February 2021 (UTC)

  • Whether to capitalize "Romanize" seems to not be uniformly agreed: The Chicago Manual of Style uses both capitalized and uncapitalized versions. (see, e.g., section 11.95 (16th edition): Italics versus roman for transliterated terms: "Usually Transliterated (or Romanized), Italics versus roman for transliterated term" and elsewhere it is used uncapitalized (e.g., section 11.106 discussing transliteration of Japanese). If that's something you think is best, a bot should be able to do that. Carlossuarez46 (talk) 00:08, 18 February 2021 (UTC)
I looked at that part of the CMS 16th, and "Romanized" is only capitalized in that instance because it is in the title of a subsection (e.g. Languages Usually Transliterated (or Romanized)). The title of the previous section, for example, is written as Languages Using the Latin Alphabet.
The first sentence inside the section (and all other uses in the manual) have the lowercase usage: In nonspecialized works it is customary to transliterate—that is, convert to the Latin alphabet, or romanize—words or phrases from languages that do not use the Latin alphabet.Goszei (talk) 00:31, 18 February 2021 (UTC)
I believe that this is a good change. Problem, however — are we sure that this never appears at the start of a sentence, or anywhere else that it should be capitalised? I'm afraid that some fringe cases will be a CONTEXTBOT issue. Maybe it could be required to search for "Romanized" only when it's in a parenthetical phrase consisting of (Romanized as [one or more words in Farsi script]) to reduce the risk of false positives. Nyttend (talk) 15:21, 18 February 2021 (UTC)
@Nyttend: I don't understand your point. Goszei's search query was looking for instances of "also Romanized as" – in which case would this need to be kept capitalised? 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 (𝗍𝗮𝘭𝙠) 22:06, 21 February 2021 (UTC)
Oh, so the request is that we ignore anything in which "also" doesn't precede "Romanized"? I interpreted it as "hey look, this search shows that there are lots of Romanized articles, so let's fix all appearances of Romanized". If the request really means "let's fix everything that reads 'Also Romanized'", sure, go ahead. Nyttend (talk) 15:42, 22 February 2021 (UTC)
Regarding my "romanization of Persian" suggestion above, it occurred to me that the {{lang-fa|بابصفحه}} part of the article text could be detected to ensure that the context is correct. I'm no good at regex, but the string {{lang-fa|anything goes here}}, also Romanized as" would be the target for making the two changes (decapitalization of romanization, and changing the link target). There may be some cases left over, but likely a reasonable amount that is fit for an AWB pass (i.e. with human review) instead of a bot run. — Goszei (talk) 17:58, 1 March 2021 (UTC)
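
Building on that, a sketch of the narrow pattern (it assumes "Romanized" appears as plain text after the {{lang-fa}} parenthetical; linked variants and the retargeting to Romanization of Persian would each need their own, similar rule):

    import re

    # Only "{{lang-fa|...}}, also Romanized as ..." is touched, so sentence-
    # initial or "make Roman in character" uses can never match (CONTEXTBOT).
    ROMANIZED = re.compile(r"(\{\{lang-fa\|[^{}]*\}\},\s*also\s+)Romanized(\s+as)")

    def decapitalize_romanized(wikitext):
        return ROMANIZED.sub(r"\1romanized\2", wikitext)

    # "({{lang-fa|بابصفحه}}, also Romanized as ...)"
    #   -> "({{lang-fa|بابصفحه}}, also romanized as ...)"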

Source citations update

Wikipedia currently contains source references in several languages to the websites TracesOfWar.com and .nl (EN-NL bilingual), but also to the former websites ww2awards.com, go2war2.nl and oorlogsmusea.nl. However, these websites have been integrated into TracesOfWar in recent years, so the source references are now incorrect on approximately 1,200 pages (and in a multiple of that number of individual references). Fortunately, ww2awards and go2war2 currently still redirect to the correct page on TracesOfWar, but this is no longer the case for oorlogsmusea.nl. I have been able to correct all the sources for oorlogsmusea.nl manually.

For ww2awards and go2war2 the redirects will stop working in the short term, which will result in thousands of dead links, even though they could properly be pointed at the same source. I have started to make some changes myself by converting sources for these two sites as well, but after about a dozen changes I am somewhat losing hope of doing this manually at least 1,150 more times.

Is there any way this could possibly be done in bulk? A short example: person Llewellyn Chilson (at Tracesofwar persons id 35010) now has a source reference to http://en.ww2awards.com/person/35010, but this must be https://www.tracesofwar.com/persons/35010/. In short, old format to new format in terms of url, but same ID.

In my opinion, that should make it possible to convert everything with the format 'http://en.ww2awards.com/person/[id]' (old English) or 'http://nl.ww2awards.com/person/[id]' (old Dutch) to 'https://www.tracesofwar.com/persons/[id]' (new English) or 'https://www.tracesofwar.nl/persons/[id]' (new Dutch) respectively. The same applies to go2war2.nl, but with a slightly different format. The same has already been done on the Dutch Wikipedia, via a similar bot request. Is this possible? Lennard87 (talk) 10:57, 29 April 2021 (UTC)
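
For reference, the ww2awards half of the mapping is purely mechanical (a sketch; go2war2.nl would get an analogous rule once its URL pattern is written down):

    import re

    # en.ww2awards.com -> tracesofwar.com, nl.ww2awards.com -> tracesofwar.nl,
    # keeping the numeric person ID.
    WW2AWARDS = re.compile(r"https?://(en|nl)\.ww2awards\.com/person/(\d+)")
    TLD = {"en": "com", "nl": "nl"}

    def retarget(wikitext):
        return WW2AWARDS.sub(
            lambda m: "https://www.tracesofwar.%s/persons/%s/"
                      % (TLD[m.group(1)], m.group(2)),
            wikitext)

    # retarget("http://en.ww2awards.com/person/35010")
    #   -> "https://www.tracesofwar.com/persons/35010/"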

Pretty sure WP:URLREQ is the place for this request. Primefac (talk) 12:29, 29 April 2021 (UTC)
Alright, will take it there then, thank you. It can be marked as resolved here. Lennard87 (talk) 18:29, 29 April 2021 (UTC)

Changing talk page project templates

Is it possible for a bot to change the following project talkpage templates?:

It would be rather time-consuming to change all of these by hand and the templates don't exactly agree with the changes I made to avoid changing the talkpage templates. NoahTalk 22:48, 12 April 2021 (UTC)

Hurricane Noah, I have a bot task that can do this kind of thing following consensus at WP:TfD if you want to go that route. --Trialpears (talk) 23:00, 12 April 2021 (UTC)
@Trialpears: Okay... I put up a request and explicitly state they need to be corrected by bot. NoahTalk 23:26, 12 April 2021 (UTC)
@Trialpears and Hurricane Noah: Would this also cover redirects such as {{Flood}} and {{Weather-data}}? Thanks! GoingBatty (talk) 05:09, 16 April 2021 (UTC)
@Hurricane Noah: could you change them to Wikipedia:Wrapper templates, similar to {{WikiProject Angola}}, and place them into Category:Wikipedia templates to be automatically substituted?
SSSB (talk) 11:28, 23 April 2021 (UTC)
The problem with that is that it would result in many duplicate banners. I will run the bot soon, probably this weekend. --Trialpears (talk) 11:34, 23 April 2021 (UTC)

Replace dead links

Please could someone replace ELs of the form

with

which produces

  • Rowlett, Russ. "Lighthouses of the Bahamas". The Lighthouse Directory. University of North Carolina at Chapel Hill.

Thanks — Martin (MSGJ · talk) 05:38, 19 March 2021 (UTC)

What sort of scale of edits are we talking (tens, hundreds, thousands)? Primefac (talk) 14:37, 19 March 2021 (UTC)
Special:LinkSearch says 1054 for "https://www.unc.edu/~rowlett/lighthouse" and 483 for the "http://" variant. DMacks (talk) 14:43, 19 March 2021 (UTC)
But spot-checking, it's a mix of {{cite web}}, plain links, and links with piped text, and with/without additional plain bibliographic notes. For example, 165 of the https:// form are in a "url=..." context. I think there are too many variations to do automatically. DMacks (talk) 15:06, 19 March 2021 (UTC)
Thread copied to Wikipedia:Link_rot/URL_change_requests#unc.edu please follow up there thanks. -- GreenC 17:56, 19 March 2021 (UTC)
The correct venue for this sort of request is WP:URLREQ * Pppery * it has begun... 17:26, 19 March 2021 (UTC)
Hey guys, see my comments about the Internet Archive elsewhere in this forum about bots: there is a bot, mentioned by Pppery, that is run by cyberpower and can clean up many dead links. 04:36, 4 April 2021 (UTC) (I am an IP)

See Wikipedia:Bot requests#Internet Archive above. Pppery replied there about the bot. Tell that bot owner to come here and look at this conversation.

Add Template:Documentation to templates

Is there an existing bot that could add the missing Template:Documentation to other templates' pages? Are there any prior discussions? -- DaxServer (talk) 12:20, 2 May 2021 (UTC)

One example would be Template:Hyderabad district. The bot would create the doc subpage, move the documentation and the categories from the template to the new subpage. -- DaxServer (talk) 13:21, 2 May 2021 (UTC)
Template:Hyderabad district has a valid use of in-page | content = of Template:Documentation#Usage. Before considering a bot you would need consensus to stop that practice. It's common for simple documentation. PrimeHunter (talk) 15:10, 2 May 2021 (UTC)
I have certainly forgotten that part while reading the documentation for Documentation. It seems like a well-established practice. Achieving consensus to stop it would be very difficult and time-consuming. Rather, I would fall back to creating the subpage myself if and when I change anything in a template. -- DaxServer (talk) 15:17, 2 May 2021 (UTC)

Auto add bare URL inline template behind bare URL

I want a robot to be able to add {{Bare URL inline|{{subst:DATE}}}} to the end of all bare URLs.--Alcremie (talk) 09:54, 11 March 2021 (UTC)
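
The detection itself is simple, and the scale is the real obstacle, as the reply below notes. A sketch of the per-page edit (the definition of a "bare URL ref" here, a <ref> containing only an optionally bracketed URL, is an assumption of this sketch, not any existing bot's rule; the request's {{subst:DATE}} should expand to an equivalent date parameter):

    import re

    BARE_REF = re.compile(
        r"(<ref[^>/]*>\s*\[?\s*https?://[^\s<\]]+\s*\]?\s*)(</ref>)", re.I)

    def tag_bare_urls(wikitext, month):
        """Append {{Bare URL inline}} after the URL inside each bare-URL ref;
        a production bot would also skip refs that are already tagged."""
        return BARE_REF.sub(
            r"\1 {{Bare URL inline|date=%s}}\2" % month, wikitext)

    # tag_bare_urls(text, "March 2021") turns <ref>https://example.com/page</ref>
    # into <ref>https://example.com/page {{Bare URL inline|date=March 2021}}</ref>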

There are so many bare URLs it would be millions of articles, it would require significant consensus. Suggest find and work on bare URLs with the bare URL bot at Template:Cleanup bare URLs/bot. -- GreenC 16:58, 11 March 2021 (UTC)

Cleveland Clinic Journal of Medicine

The Cleveland Clinic Journal of Medicine used to be free and open, but now it is free with registration. Can a bot be modified to make edits like this, where I removed things like the | doi-access = free parameter and I added the | url-access= registration parameter? I imagine a complicating factor might be having the bot generate urls. This is my first bot request from memory so my apologies if you feel I'm wasting your time by making this request. Thank you. Biosthmors (talk) 06:58, 16 April 2021 (UTC)
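
If the citations turn out to be uniform enough, the per-citation edit is small; a sketch with mwparserfromhell (the |journal= matching and the decision to add |url-access= only when a |url= is already present are assumptions, and generating missing URLs, the harder part mentioned above, is not attempted):

    import mwparserfromhell

    def mark_ccjm_registration(wikitext):
        """Drop |doi-access=free and add |url-access=registration on
        Cleveland Clinic Journal of Medicine citations."""
        code = mwparserfromhell.parse(wikitext)
        for tpl in code.filter_templates():
            if not tpl.name.matches("cite journal"):
                continue
            journal = str(tpl.get("journal").value) if tpl.has("journal") else ""
            if "Cleveland Clinic Journal of Medicine" not in journal:
                continue
            if tpl.has("doi-access"):
                tpl.remove("doi-access")
            if tpl.has("url"):
                tpl.add("url-access", "registration")
        return str(code)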

The issue mostly is that this will likely be a case-by-case situation. Headbomb {t · c · p · b} 14:04, 16 April 2021 (UTC)
I was afraid of that but thank you for taking a look Headbomb. Biosthmors (talk) 16:24, 16 April 2021 (UTC)

Need categories and subcategories renamed en masse

I've recently gone through all ACW unit pages and manually adjusted pagenames in order to standardize usage according to this discussion. After I performed these page moves, this discussion made clear I needed to repetitively replace a vast number of category entries in the form "X Y Civil War regiments" toward "X Y Civil War units" on appropriate unit articles, including their container categories. The top-most container categories for these changes are located at Category:Regiments of the Union Army and Category:Regiments of the Confederate States Army, which themselves need to be changed to "Units of the Y Army". There are some current state ACW regiment categories which omit "Confederate States" and "Union" merely because no units raised in the state were part of that army (for example: "Vermont Civil War regiments" and "Mississippi Civil War regiments"). In all cases, "regiments" should be changed to the more inclusive plural noun "units". This would take several thousand word replacements (in category names) at a page level. Is this something a bot might do? Is there a better way? BusterD (talk) 18:45, 21 April 2021 (UTC)

Should this request be at CFD? It would require over 100 nominations, which is why I brought it here. BusterD (talk) 21:50, 21 April 2021 (UTC)
CFD is quite used to big batches. DannyS712 (drop a note on their talkpage) has a bot that can help with the tagging and then the bots at WP:CFDW can deal with the actual move afterwards. Probably the simplest way to handle it, but it requires the CfD discussion. --Trialpears (talk) 22:05, 21 April 2021 (UTC)
Thanks. Not my bailiwick. This can be marked as resolved then. BusterD (talk) 22:08, 21 April 2021 (UTC)

Accept pending changes made by autoconfirmed users where they should have been automatically accepted

On articles protected by pending changes, if there are no pending edits to an article, then autoconfirmed users should be able to have their changes automatically accepted. There is currently a rather frustrating bug that causes some edits by autoconfirmed users to be erroneously held back for review: please see Wikipedia:Village pump (technical)#Pending Changes again and various phab tickets [6][7]. Apparently, the flagged revisions/pending changes codebase is completely abandoned (no active developers who understand the code), and currently no timely fix to this issue is anticipated. As an interim stopgap measure while we attempt to find developers to fix the underlying software, would it be possible to create a bot that automatically accepts pending changes made by autoconfirmed users where they should have been automatically accepted by the software? Thanks, Mz7 (talk) 23:41, 12 March 2021 (UTC)

Someone wrote code for a bot task in a phab task somewhere related to this issue. I don't have the link handy though. ProcrastinatingReader (talk) 23:54, 12 March 2021 (UTC)
In the first phab ticket, there is toolforge:mbh/patrol.txt, for ruwiki. There's not much going on here, and some design questions, so I would suggest a clean-room implementation for enwiki. Do we have consensus for this? It's troubling we've ended up in this situation. Would time be better spent reviewing the FlaggedRevisions code to fix the issue, even if we can't get a new maintainer for it? I mean, given the underlying situation, we don't have many options beyond more aggressively finding a maintainer or turning pending changes off. — The Earwig (talk) 00:16, 13 March 2021 (UTC)
All good questions. The way I see it, this bot would merely be a technical means to return to the status quo, enforcing the already established consensus on how pending changes is supposed to work. My understanding is that it would be easier to build this bot because at least here we know what we're doing, whereas when debugging FlaggedRevisions, I don't think we even know where to start. As I mentioned at the village pump, in the long run if we can't ensure the reliability of this software, then I agree that we do need to think about whether we should have this software running at all. Mz7 (talk) 03:28, 13 March 2021 (UTC)
I'm happy to write & run this bot if a consensus exists (develops?), but I have to agree with Earwig that if these bugs do not get fixed we might need to look at the viability of continuing to run this extension... ƒirefly ( t · c ) — Preceding undated comment added 12:08, 13 March 2021 (UTC)
As far as consensus goes, what do you think is needed? As I explained above, I think that because the purpose of this bot would be to enforce a preexisting consensus (i.e. how pending changes is supposed to work), we don't need to go to great lengths to start an RfC or something like that. Mz7 (talk) 01:15, 14 March 2021 (UTC)
This particular task is just enforcing a preexisting consensus and normal desirable behaviour, as you say, so as I see it the focus in a BRFA would be evaluating technical soundness (and I'd personally approve on that basis). ProcrastinatingReader (talk) 01:27, 14 March 2021 (UTC)
Reasonable enough; that was my feeling too, but I tend to err on the side of caution for consensus about bots. I certainly don't think we need to waste everyone's time with an RfC, and the VPT discussion clearly establishes a problem in need of fixing. In that case, firefly, can I assume you'll work on this? — The Earwig (talk) 02:33, 14 March 2021 (UTC)
@The Earwig: Coding... - I've put in a request at PERM for +reviewer so I can experiment with the relevant API calls. The bot account this task runs under will obviously need this eventually as well, but that's for after the BRFA of course. ƒirefly ( t · c ) 11:03, 14 March 2021 (UTC)
Granted. Anarchyte (talkwork) 11:17, 14 March 2021 (UTC)

BRFA filed ƒirefly ( t · c ) 17:33, 15 March 2021 (UTC)

Replace template-space transclusions of Template:Doc with Template:Documentation

This is a request specifically to benefit external wikis who import Wikipedia templates for their own use. Currently, the redirect {{Doc}} is transcluded onto over 3000 templates. This means that any wiki which imports one of these templates will also get a template-space redirect that they may not want, or at least a redlink that they then have to fix (if they happen to care about that). I don't think this is explicitly covered by any of the points at WP:NOTBROKEN, but it feels to me at least to be within the spirit of the second-to-last "good reasons" point:

In other namespaces, particularly the template and portal namespaces in which subpages are common, any link or transclusion to a former page title that has become a redirect following a page move or merge should be updated to the new title for naming consistency.

If the community here decides (or has decided) that reuse on external wikis isn't a major enough concern to justify this type of change, that's fine. I personally think it's worth changing this, though as a reuser at one of those external wikis, I'm obviously biased here. =) ディノ千?!☎ Dinoguy1000 07:01, 12 March 2021 (UTC)

Isn't this the same as changing every redirect for a template to its main name? Which the community appears to generally be quite strongly against when done as a standalone task without any overriding reason, I believe. In any case, this falls under WP:COSMETICBOT and I can't see an overriding purpose that would allow for its approval. ProcrastinatingReader (talk) 07:06, 12 March 2021 (UTC)
I had that thought while writing this request, but felt that the fact that {{Documentation}} is transcluded on near-every template on Wikipedia would make this worth pursuing as its own thing (since you might run into a transclusion through a redirect on any template, as opposed to any random template with limited template-space usage).
I wasn't aware of the prior consensus on this topic, though it doesn't surprise me; is there any chance you have links to some of those discussions handy?
Your COSMETICBOT point is noted, though I don't really have any response to it. If the other objections can be resolved, I suppose the best I could offer is a suggestion of submitting this for Cosmetic Bot Day? ディノ千?!☎ Dinoguy1000 07:30, 12 March 2021 (UTC)
I had to do some digging to find this, but I'll point out that I nominated a swathe of template-space redirects for deletion in 2010, for similar reasons to this proposal, and (at the time) it was uncontested. Of course, consensus may have changed in the intervening decade-plus, but this at least illustrates that the community (at one time) tolerated edits of this nature. ディノ千?!☎ Dinoguy1000 07:45, 12 March 2021 (UTC)
I don’t have any specific links handy, but that’s just my understanding. It could be eligible for CBD. I’d have said to bundle it into genfixes, but genfix-edits won’t really run on templates so that won’t work. ProcrastinatingReader (talk) 14:16, 12 March 2021 (UTC)
If you get an RfD passed that’s a different thing and then wouldn’t fall under CBOT. But I’m not sure an RfD for doc would pass. ProcrastinatingReader (talk) 14:17, 12 March 2021 (UTC)
I am sympathetic. I do not think changing one template for another is sufficient cause to edit high-use templates. I don't see a large issue with AWBing these away, with a link to some rationale like that provided here, for low-use templates or high-use template sandboxes (to be synced at a later date). Explain that it's for i18n/external wikis to have an easier time of things. Don't revert if reverted.
The other direction would be to ship the redirect to RFD with same rationale.
I do agree with Reader when he says there are many templates where this is an issue and it's not like those are different (well, maybe they are, since doc is basically only used for templates which we generally avoid editing with AWB which could/would probably take care of this with template redirect fixes). --Izno (talk) 19:22, 12 March 2021 (UTC)
This is not a task that would normally be approved as a bot request, since it is purely cosmetic. It may be easier to create redirects at other WP instances pointing {{doc}} to their version of the documentation template. That is what we do here for many templates that are commonly copied from WPs in languages other than English. – Jonesey95 (talk) 23:48, 12 March 2021 (UTC)
This request isn't just about other-language WPs, or even about other WMF wikis, but also third-party wikis (to the extent that I "represent" anything, I represent such a wiki in this request). Of course, the argument that creating redirects there is simpler than changing templates here is valid regardless of whether "there" is "other WMF wikis" or "third-party wikis", but other wikis will (presumably) never stop being created, whereas these redirects will (ideally) never be significantly used beyond where they already are; this would suggest that in the long run, it would be better to update them here than to expect reusing wikis to continue creating the redirects in perpetuity. There's also a difference between expecting a wiki to create a redirect from the local name of a template, versus expecting them to create a redirect from a name that is locally, also a redirect (within reason, of course; see below for a counterexample).
If the advice here is to raise this issue at RFD instead, I'll do so, but I hesitate to do it now because it feels an awful lot like I'd just be shopping around for the answer I want to hear. In addition, I'm not after this redirect's deletion; it seems like a reasonable shorthand for linking.
As for the "other templates also have this issue" concern, this is obviously anecdotal, but in my own importing I haven't actually noticed this popping up much. The main offenders I've noticed recently are the {{tl*}} redirects, due to that family of templates recently being renamed to expanded names, and in those cases the redirects make a lot of sense to preserve for external wikis (and always will) since the templates' whole purpose is to simplify linking to templates anyways (so less typing is a good thing). ディノ千?!☎ Dinoguy1000 09:48, 13 March 2021 (UTC)
Isn't this the same argument for having to copy over dependency templates? I mean, if you want to copy over a template into a new wiki, you not only have to copy over that template but every template it depends on, recursively (eg {{if}} etc). I don't think creating a {{doc}} redirect, or adjusting the name, is too much more to ask for given that. ProcrastinatingReader (talk) 01:29, 14 March 2021 (UTC)
Not really... Dependency templates are generally needed for the correct functioning of the template being copied, to the point that removing that dependency would require a more-or-less involved rewrite of the calling template; template redirects, on the other hand, only require changing the template's name in the template call (unless the calling template is calling them in a convoluted way, but that type of pattern is pretty rare). ディノ千?!☎ Dinoguy1000 08:21, 15 March 2021 (UTC)
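
For illustration only, here is a minimal sketch of what the mechanical edit would look like if this ever gained consensus (or a Cosmetic Bot Day slot), assuming pywikibot; the edit summary is a placeholder:
  import re
  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  redirect = pywikibot.Page(site, 'Template:Doc')

  # Every page that transcludes the template via the {{Doc}} redirect.
  for page in redirect.getReferences(only_template_inclusion=True):
      text = page.text
      # Rename only the template call itself; parameters are left untouched.
      new_text = re.sub(r'\{\{\s*[Dd]oc\s*(?=[|}])', '{{Documentation', text)
      if new_text != text:
          page.text = new_text
          page.save(summary='Bypassing {{Doc}} redirect (placeholder summary)')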

BOT

I want to make bots — Preceding unsigned comment added by 2601:246:5980:6240:8D01:A7E1:9D68:EE5C (talk) 16:49, 12 May 2021 (UTC)

This isn't really the place to announce that. You might want to read through WP:Bots. Primefac (talk) 16:55, 12 May 2021 (UTC)
Y Done improper venue. EpicPupper (talk) 23:19, 24 May 2021 (UTC)

Bug fix and article review

Ask you to review the article "ٱفلام حصرية" Thanks — Preceding unsigned comment added by 196.151.130.174 (talk) 12:33, 18 May 2021 (UTC)

This is not the place to ask about this matter. Primefac (talk) 16:52, 18 May 2021 (UTC)
Y Done improper venue. EpicPupper (talk) 23:20, 24 May 2021 (UTC)

Bulk XfD query

Please CfD-tag all categories in the following bulk nomination(s):

LaundryPizza03 (d) 03:39, 16 May 2021 (UTC)

 Done — JJMC89(T·C) 04:00, 16 May 2021 (UTC)

Estradiol hormones - External link not working - Dead link

The article about Estradiol as a hormone contains a non-working external link in the references. In reference number 71, the last external link (a PDF) cites the values from this source: "Establishment of detailed reference values for luteinizing hormone, follicle stimulating hormone, estradiol, and progesterone during different phases of the menstrual cycle on the Abbott ARCHITECT analyzer".

This external link redirects to a 404 server error and needs to be replaced with a working link. The original research document is available on the laboratory's website.

How to change this link? I don't know how to use a bot. I'm thankful for any help. — Preceding unsigned comment added by Jerome.lab (talkcontribs) 13:11, 30 April 2021 (UTC)

Instructions are at WP:URLREQ. Primefac (talk) 13:18, 30 April 2021 (UTC)
Moved to WP:URLREQ
EpicPupper (talk) 23:24, 24 May 2021 (UTC)

Clean up Infobox music genre templates

The goal is to remove the "color=" and "popularity=" parameters from {{Infobox music genre}}. The color parameter was suppressed in January 2019 [8], while popularity was removed in 2013 [9], but they are still present in ~900 and ~300 transclusions respectively [10]. It would be great if we could clean these up. Solidest (talk) 17:06, 3 March 2021 (UTC)

@Primefac: Would this be a good job for your bot? GoingBatty (talk) 02:06, 12 March 2021 (UTC)
I'd have to look further, but likely sure. Primefac (talk) 15:11, 12 March 2021 (UTC)
@Primefac: any updates with this one? Solidest (talk) 08:00, 12 May 2021 (UTC)
I've added it to my to-do list. Primefac (talk) 12:28, 12 May 2021 (UTC)
Thanks. Looking forward to this to check the remaining errors in the template. Solidest (talk) 13:49, 14 May 2021 (UTC)
As far as I can see - it has been done. Thank you once again! Solidest (talk) 08:03, 16 May 2021 (UTC)
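
For reference, the removal itself is mechanically simple; a minimal sketch using mwparserfromhell, handling only the two parameter names named in the request (template redirects to the infobox are not matched here):
  import mwparserfromhell

  def strip_removed_params(wikitext):
      """Remove |color= and |popularity= from {{Infobox music genre}} calls."""
      code = mwparserfromhell.parse(wikitext)
      for tpl in code.filter_templates():
          # Note: redirects to the infobox would also need to be matched in practice.
          if tpl.name.strip().lower() == 'infobox music genre':
              for param in ('color', 'popularity'):
                  if tpl.has(param):
                      tpl.remove(param)
      return str(code)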

noindexing (or even deleting) old WT:WPSPAM reports

This did happen before, and is a bit of an issue. There are apparently some really old reports created by COIBot out there that are not NOINDEXed and which appear in the Google search results. As far as I know we did solve that some time ago in the robots.txt, but I am not sure whether those really old, untouched reports actually 'get' that properly set through (and I am not sure whether a bot-run is necessary).

Can a bot go through all pages under Wikipedia:WikiProject Spam/LinkReports, Wikipedia:WikiProject Spam/Local (so e.g. Wikipedia:WikiProject Spam/LinkReports/example.org etc.) and add {{NOINDEX}} to any pages that are currently in the category for no-index (I would not know how to find that in the first place)? The edit will then enforce the parsing of the page and make sure that from our side the pages are NOINDEXed. If all are NOINDEXed, all of them should probably be purged. We could consider deleting them, but some are representations of evidence that is used for the decisions to blacklist (but nothing is really lost, the bot can recreate them with data over the last 10 years, and since admins are handling the cases they can always see deleted revids).

A second step would be to contact Google to remove those that have not been re-indexed (and hence removed) by Google from the Google database, but that is probably something that needs to be done on a case-by-case basis so also not a bot task. It is an advice that we then can give to anyone who 'complains'. Thanks. --Dirk Beetstra T C 13:52, 2 May 2021 (UTC)

My understanding is: it would check each subpage of Wikipedia:WikiProject Spam/LinkReports (of which there are very many) and of Wikipedia:WikiProject Spam/Local (of which there are many but not as many), check if they are NOINDEX-ed, and if they are not, NOINDEX them. Is this correct? Tol | Talk | Contribs 20:33, 2 May 2021 (UTC)
Tol, correct. Dirk Beetstra T C 21:35, 2 May 2021 (UTC)
Alright, I'll see if I can put something together to do that. Coding... Tol | Talk | Contribs 21:51, 2 May 2021 (UTC)
BRFA filed. Tol | Talk | Contribs 19:05, 3 May 2021 (UTC)
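
A minimal sketch of the pass described above, assuming pywikibot's PrefixingPageGenerator; the plain-text check for an existing NOINDEX is a simplification, and the edit summary is a placeholder:
  import pywikibot
  from pywikibot import pagegenerators

  site = pywikibot.Site('en', 'wikipedia')

  # Namespace 4 is the project ("Wikipedia:") namespace.
  for prefix in ('WikiProject Spam/LinkReports/', 'WikiProject Spam/Local/'):
      for page in pagegenerators.PrefixingPageGenerator(prefix, namespace=4, site=site):
          text = page.text
          if '{{NOINDEX' in text or '__NOINDEX__' in text:
              continue  # already noindexed in the wikitext
          page.text = '{{NOINDEX}}\n' + text
          page.save(summary='Adding {{NOINDEX}} to old spam report (placeholder summary)')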

A WP:LAYOUT bot

Hi, I have seen many articles that seem to get WP:LAYOUT wrong: for example, 'See also' placed after 'References', or 'External links' placed before 'References'. I think it would be easy for bots to read the layout and correct it. Perhaps an existing bot can be programmed to do that. Regards--Chanaka L (talk) 10:42, 1 April 2021 (UTC)

AWB's genfixes do some of that, but there will be many pages it never visits. Certes (talk) 10:52, 1 April 2021 (UTC)
@Chanakal: I think this can be done with an AWB bot that looks for articles with == *References *==[^=]+== *See also *== and saves only if this code no longer exists after applying the genfixes. A similar bot could look for articles with == *External links *==[^=]+== *References *== that saves only if the code no longer exists after applying the genfixes. What do you think? GoingBatty (talk) 04:18, 7 May 2021 (UTC)
I'm also seeing some extra sections being created due to the HTML <code></code> being used here. Is this a bug? Catchpoke (talk) 22:07, 9 May 2021 (UTC)
Enterprisey is the one who maintains the table bot and may be able to fix the issue. Would call this a lowest priority issue though. --Trialpears (talk) 14:25, 10 May 2021 (UTC)
I maintain some list tables on zhwiki and jawiki. How about adapting existing code to this page as an alternative option? Kanashimi (talk) 00:38, 11 May 2021 (UTC)
Fixed. Enterprisey (talk!) 05:08, 11 May 2021 (UTC)
@Chanakal: What do you think of my suggestion above? GoingBatty (talk) 04:15, 25 May 2021 (UTC)
@GoingBatty: Looks like you have got the correct idea. Looking forward to see this being implemented and even expanded the functionality. Regards.--Chanaka L (talk) 07:12, 25 May 2021 (UTC)
@Chanakal: BRFA filed GoingBatty (talk) 14:56, 25 May 2021 (UTC)
That's great. Hope request gets approved.--Chanaka L (talk) 15:02, 25 May 2021 (UTC)
@Chanakal: Doing... GoingBatty (talk) 14:30, 31 May 2021 (UTC)
Wow, nice to hear. Cheers--Chanaka L (talk) 14:36, 31 May 2021 (UTC)
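
A small sketch of the detection side of the suggestion above, using the two regexes from that comment; the actual reordering would still be left to AWB's general fixes:
  import re

  # The two patterns suggested above: "See also" appearing after "References",
  # and "External links" appearing before "References".
  MISPLACED_SEE_ALSO = re.compile(r'== *References *==[^=]+== *See also *==')
  MISPLACED_EXT_LINKS = re.compile(r'== *External links *==[^=]+== *References *==')

  def is_layout_candidate(wikitext):
      """True if the article looks like a WP:LAYOUT section-ordering candidate."""
      return bool(MISPLACED_SEE_ALSO.search(wikitext)
                  or MISPLACED_EXT_LINKS.search(wikitext))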

Reassign DRN Clerk Bot Task

The Dispute Resolution Noticeboard has had a bot-maintained table of the status of cases for several years, and this table can be transcluded onto user pages, and onto the main status page of DRN. This table should be updated a few times a day. This task was previously done by User:HasteurBot, but that bot has been retired from service because its operator is no longer in this world. This task was, until about ten days ago, done by User:MDanielsBot, but that bot has stopped doing that task. It is doing other tasks, but not that task. Its bot operator is on extended wikibreak and did not respond to email. I have spoken to one bot operator who is looking into this task. Robert McClenon (talk) 16:21, 13 April 2021 (UTC)

Bulk XfD request

Please tag all of the following articles included in Wikipedia:Articles for deletion/List of names of European cities in different languages:

LaundryPizza03 (d) 21:27, 17 June 2021 (UTC)

LaundryPizza03, this request might be a better fit for WP:AWBREQ. Also, would it be a good idea to include your bulleted list of Wikilinks in the AFD, so that people know that those articles are included in the AFD? Just throwing out some ideas, hope this helps. –Novem Linguae (talk) 22:01, 17 June 2021 (UTC)
Note: This has been presumably moved to AWBREQ (as there is a copy of this request there), and I have noted that I completed this task there. 🐶 EpicPupper (he/him | talk, FAQ, contribs) 17:27, 18 June 2021 (UTC)

Bot for Top 25 report

Could a bot be created to add/update {{Top 25 report}} on the talk pages of pages featured in the Top 25 reports? It would be useful if the bot could also handle the annual Top 50 report, and if it could go through the old Top 25 reports, as a few are missing their talk page banners. Thanks,
SSSB (talk) 09:20, 18 April 2021 (UTC)

It seems like a long-term task. --Kanashimi (talk) 04:46, 19 April 2021 (UTC)
We'd need to do one iteration over all the reports (to make sure they all have the template), but after that it need only run once a week on the most recent report (i.e. 25 edits per week).
SSSB (talk) 09:23, 19 April 2021 (UTC)
Maybe we can also place it inside {{Article history}}? --Kanashimi (talk) 01:04, 20 April 2021 (UTC)
That's something you should bring up at Wikipedia:Templates for discussion
SSSB (talk) 08:19, 20 April 2021 (UTC)
Sorry, I was just giving some suggestions. Kanashimi (talk) 09:07, 20 April 2021 (UTC)
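
A rough sketch of the tagging step, assuming pywikibot; the report page title, the way articles are listed on it, and the banner's parameters would all need checking against the real {{Top 25 report}} before anything like this could run:
  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  # Placeholder: the most recent report; older reports would be iterated the same way.
  report = pywikibot.Page(site, 'Wikipedia:Top 25 Report')

  for article in report.linkedPages(namespaces=[0]):
      talk = article.toggleTalkPage()
      text = talk.text if talk.exists() else ''
      if '{{Top 25 report' in text:
          continue  # banner already present; adding the new week is a separate edit
      # The real banner takes the report week(s) as parameters, omitted here.
      talk.text = '{{Top 25 report}}\n' + text
      talk.save(summary='Tagging Top 25 Report entry (placeholder summary)')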

Add Archive URLs for Showbuzz Daily refs

Moved to WP:URLREQ#Add_Archive_URLs_for_Showbuzz_Daily_refs

Removal of spam links to iamin.in

Moved to WP:URLREQ#Removal_of_spam_links_to_iamin.in

Wikilink rename request

Please rename "Stadio Pierluigi Penzo" to "Stadio Pier Luigi Penzo" in these pages. Thanks in advance!!! --2001:B07:6442:8903:D4D:F67B:CF18:C681 (talk) 13:36, 14 June 2021 (UTC)

 Doing... I'll see what's appropriate (via AWB). 🐶EpicPupper (he/him | talk, FAQ, contribs) 20:11, 14 June 2021 (UTC)
Y Done 🐶EpicPupper (he/him | talk, FAQ, contribs) 20:27, 14 June 2021 (UTC)

Category sorting for Thai names

Hi. I'm looking to revive a request previously made in 2018, which was discussed (to a considerable extent) here and here. Back then, TheSandDoctor originally offered to help, but due to other circumstances was unable to devote time to the task, and suggested that I ask here again. I've left it for quite some time, but better late than never I guess.

Briefly, names should be sorted by given name (i.e. as they appear) in Thailand-related categories. A Thai biography footer should as such contain the following:

{{DEFAULTSORT:Surname, Forename}}
[[Category:International people]]
[[Category:Thai people|Forename Surname]]

Currently, compliance is all over the place, with the Thai order being placed in the DEFAULTSORT value in some articles, and the Thai sort keys missing in others. A bot is needed to: (1) perform a one-time task of checking DEFAULTSORT values in Thailand-related biographies (a list with correct values to be manually supplied), and replacing the values if incorrect, and (2) do periodical maintenance by going through a specified list of categories (probably via a tracking template placed on category pages) and adding the Thai name order as sort keys to those categories' calls in each member article that is a biography. In most cases, the Thai name order would be the page title, but there are several exceptions, which I will later elaborate upon. This had been advertised and some details of the task ironed out back then, but since it's been three years there may be need to reaffirm consensus. I would like to see first, though, whether any bot operators are interested in such a project. --Paul_012 (talk) 00:13, 26 February 2021 (UTC)

@Paul 012: Well, I think I will do this:
  1. For pages in Category:Thai people, all sub-pages of categories transcluding {{Thai people category}}:
  2. Add / modify DEFAULTSORT as {{DEFAULTSORT:Surname, Given name}}
  3. + Category:International people
  4. Modify sort key of Category:Thai people and categories transcluding {{Thai people category}}, as [[Category:Category name|Given name Surname]]
Am I missing anything? --Kanashimi (talk) 08:36, 11 March 2021 (UTC)
Kanashimi, the "Category:Thai people" and "Category:International people" in the example were meant as placeholders for all categories in and outside the pre-defined set, not literal categories with those names (so skip No. 3); sorry if this wasn't clear. The set of categories could be tracked by {{Thai people category}} (the template will need to be added), though this isn't set in stone. There are also names which are not in the Given-name Surname format; a list of these will probably need to be compiled by hand, so that's also something to consider. --Paul_012 (talk) 20:43, 11 March 2021 (UTC)
I misunderstood the discussion. Is this right?
  1. For categories transcluding {{Thai people category}} and their subcategories, call it Thai_CATEGORY_LIST. For articles in all Thai_CATEGORY_LIST, call it Thai_ARTICLE_LIST. And we will do this for all Thai_ARTICLE_LIST:
  2. If the article is in Template:Thai people category/doc#Sort keys of biographical articles added to categories with this template:
    1. Modify sort key of categories in Thai_CATEGORY_LIST, as [[Category:Category name|Category Sort key specified]]
  3. Else:
    1. Add / modify DEFAULTSORT as {{DEFAULTSORT:Surname, Given name}}
    2. Modify sort key of categories in Thai_CATEGORY_LIST, as [[Category:Category name|Given name Surname]]
And I have a question: Are the surnames of Thai people in English always just one word, so that I can split the given name and surname from the article title easily? Is it always in a pattern of "given given given ... surname"? --Kanashimi (talk) 22:26, 11 March 2021 (UTC)

Maybe I should provide a bit more background first. The short answer to your last question would be "No." To get the long answer, I went through the roughly 4,000 Thai people articles to identify the following patterns:

lengthy name examples
  • The majority (about 3,300) are regular two-part given-name/surname names like Abhai Chandavimol (defaultsort Chandavimol, Abhai). These include a few dozen pseudonyms or stage names that are generally treated as if they were normal names, e.g. Amnaat Luukjan (Luukjan, Amnaat).
  • There are a few dozen multi-word names which are composed of either multiple-word given names (or middle names), and/or multiple-word surnames. Examples include:
  • Some names which appear to have two words are actually a single name, and must be sorted as they appear in all categories (both non-Thai and Thai). Examples include:
    • Lor Tok is a single word written in English with a space. Sort as Lor tok.
    • Sunthorn Phu is two words, but never separated. Sort as Sunthorn Phu.
    • Headache Stencil - a pseudonym with two words, also shouldn't be separated (Headache Stencil)
    • The Toys - a pseudonym with the article the. Sorted as Toys, The
    • Pang brothers - some are phrases that don't need a defaultsort
  • A couple hundred royal names. Some of them are multi-word names, but all are sorted as they appear (these may need checking as they may be confused with surnames). Examples:
  • A couple dozen articles use noble titles. These come in three parts: a prefix rank; the main title; and sometimes the person's personal name in parentheses. They should be sorted by the main title in all categories. Examples:
  • Some Buddhist monk names include honorifics. The given name and/or monastic name, or ecclesiastical title may be used. Sorting should omit the honorific in all categories.
  • Actors and singers often use a stage name based on their nickname. Often, this is combined with their given name to form a two-word name which sometimes is treated as if it were a regular name. Others are suffixed by the name of their band or label, which should probably not be treated as such.
    • Phum Viphurit - Phum is his nickname and Viphurit his given name, but English-language sources tend to treat them as given name and surname, so maybe sort in non-Thai categories as Viphurit, Phum?
    • Aed Carabao - Carabao is the name of his band, and no one would refer to him as such; should be sorted as Aed Carabao in all categories
  • Most boxers' ring names are two- or multi-word names. I don't quite know how they're used in English. Are they treated as regular names? If so, Muangthai P.K. Saenchaimuaythaigym would take the defaultsort P.K. Saenchaimuaythaigym, Muangthai.
  • Some names are Chinese names and should be sorted in the Chinese order in all categories. E.g. Khaw Soo Cheang (Khaw, Soo Cheang)
  • There are also some other non-Thai names that show up in the list; their defaultsort values should be left alone.

I guess all this is to say it's probably far too complicated for the defaultsort value to be automatically processed; reading off a manually compiled list would be more practical. I'm still tweaking the list but see for example an earlier (outdated) version at Special:Permalink/829756891.

I think the process should be something more like:

  1. For categories transcluding {{Thai people category}}, call it Thai_CATEGORY_LIST. For articles in all Thai_CATEGORY_LIST, call it Thai_ARTICLE_LIST. And we will do this for all Thai_ARTICLE_LIST:
  2. If the article is a personal biography, proceed with the following:
    1. If the article is in DEFAULTSORT_UPDATE_LIST:
      1. Add / modify DEFAULTSORT according to the value in DEFAULTSORT_UPDATE_LIST
      2. If so instructed by DEFAULTSORT_UPDATE_LIST:
        1. Add {{Thai sort same as defaultsort}} to the article
      3. Else:
        1. Modify sort key of categories in Thai_CATEGORY_LIST, as [[Category:Category name|PAGENAME]] (though format the page name to exclude parenthetical disambiguators)

The above applies to the bot's initial run. There should also be periodical update runs, where 2.1 would be:

    1. If DEFAULTSORT exists and is different from article title (excluding commas and parenthetical disambiguators), and {{Thai sort same as defaultsort}} is not found in the article:
      1. Modify sort key of categories in Thai_CATEGORY_LIST, as [[Category:Category name|PAGENAME]] (though format the page name to exclude parenthetical disambiguators)

Category recursion is tricky and can lead to unexpected problems, so {{Thai people category}} should probably be placed directly on all applicable category pages. (That may also be a bot task.) I'm working off this preliminary list: Special:Permalink/1011801926, but some further tweaks may still be needed.

Since the Thai sort key will be the same as either the article title (for regular names) or the DEFAULTSORT value (for royalty, etc.), the DEFAULTSORT_UPDATE_LIST can note which case applies to each article, and this can be tracked in the article source. I think this would be preferable in the long run, as a central list will be hard to keep updated while a tracking template can be added to new articles as they are created. {{Thai sort same as defaultsort}} wouldn't need to generate any visible output (except maybe a tracking category if useful).

Does this more or less make sense? --Paul_012 (talk) 23:15, 12 March 2021 (UTC)
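
To make the pseudocode above concrete, here is a rough sketch of the per-article step for the initial run, assuming Python with mwparserfromhell; the representation of DEFAULTSORT_UPDATE_LIST (a dict keyed by title) and THAI_CATEGORY_LIST (a set of category titles) is an assumption about tooling, not Kanashimi's actual implementation:
  import mwparserfromhell

  def process_article(title, wikitext, defaultsort_update_list, thai_category_list):
      """One article from THAI_ARTICLE_LIST during the initial run (sketch)."""
      code = mwparserfromhell.parse(wikitext)
      entry = defaultsort_update_list.get(title)  # None if the article is not listed

      if entry is not None:
          # 2.1.1: add or modify DEFAULTSORT per the manually compiled list.
          found = False
          for tpl in code.filter_templates():
              if tpl.name.strip().upper().startswith('DEFAULTSORT'):
                  tpl.name = 'DEFAULTSORT:' + entry['sortkey']
                  found = True
          if not found:
              code.append('\n{{DEFAULTSORT:' + entry['sortkey'] + '}}')
          if entry.get('same_as_defaultsort'):
              # 2.1.2: tag the article and skip the per-category sort keys.
              code.append('\n{{Thai sort same as defaultsort}}')
              return str(code)

      # 2.1.3 / 2.3: sort the Thai categories by page name, minus any
      # parenthetical disambiguator.
      thai_key = title.split(' (')[0]
      for link in code.filter_wikilinks():
          if str(link.title).strip() in thai_category_list:  # e.g. 'Category:Thai male actors'
              link.text = thai_key
      return str(code)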

In the process above, we will ignore all articles that are not in DEFAULTSORT_UPDATE_LIST, even if the article is in Thai_ARTICLE_LIST. I think we could detect the given name and surname automatically (for example, by listing common surnames), and only list special cases in another list. This would greatly reduce the workload for both humans and the bot. Also, should {{Thai sort same as defaultsort}} appear in this way: {{Thai sort same as defaultsort}}{{DEFAULTSORT:Surname, Given name}} (nothing between the template and DEFAULTSORT)? --Kanashimi (talk) 01:26, 13 March 2021 (UTC)
Oops, should have been more like this:
  1. If the article is a personal biography, proceed with the following:
    1. If the article is in DEFAULTSORT_UPDATE_LIST:
      1. Add / modify DEFAULTSORT according to the value in DEFAULTSORT_UPDATE_LIST
      2. If so instructed by DEFAULTSORT_UPDATE_LIST:
        1. Add {{Thai sort same as defaultsort}} to the article
        2. Skip to next article
    2. Modify sort key of categories in Thai_CATEGORY_LIST, as [[Category:Category name|PAGENAME]] (though format the page name to exclude parenthetical disambiguators)
I don't quite see a practical set of instructions that would allow automatic name identification, given the intricacies involved. The human workload isn't a problem, as I'm mostly done with the names already (just needs a second check). I imagine the placement of the Thai sort same as defaultsort template the way you described. It's a preliminary suggestion though; if we agree to proceed with the method I'll post at the MOS talk page for community approval. --Paul_012 (talk) 07:18, 13 March 2021 (UTC)
Ok. I get a rough idea of what you mean. I think DEFAULTSORT_UPDATE_LIST should include PAGENAME, surnames, and given names; at least three columns. By the way, how do we maintain new Thai people articles? They will not be in DEFAULTSORT_UPDATE_LIST. --Kanashimi (talk) 07:51, 13 March 2021 (UTC)
I expect DEFAULTSORT_UPDATE_LIST to be referred to only once, to check currently existing articles. The practice of specifying Given-name Surname as the DEFAULTSORT (which the bot will need to correct) is quite old (mostly found in articles from over a decade ago I think). New articles today will likely have DEFAULTSORT values in the Surname, Given-name format, so will only need PAGENAME sort keys added. The minority of articles which require specific formatting and tagging can be handled by patrollers following WikiProject Thailand's potential new articles feed as they are created. --Paul_012 (talk) 09:13, 13 March 2021 (UTC)
Ok. It seems possible for a bot to do this. --Kanashimi (talk) 10:58, 13 March 2021 (UTC)
Thanks for the responses, Kanashimi. Do you plan to take on the task? If so, I'll notify the relevant projects and discussion pages. --Paul_012 (talk) 14:05, 14 March 2021 (UTC)
Yes, but I still need your help. I don't speak Thai. 😓 And please split the names into surname and given name; I think this will be useful. --Kanashimi (talk) 21:54, 14 March 2021 (UTC)

I've opened a discussion requesting community input at Wikipedia talk:Categorization of people#Bot for Thai name category sorting. I've now also listed the categories and articles at Wikipedia:WikiProject Thailand/Thai name categories and Wikipedia:WikiProject Thailand/Thai name sort keys. --Paul_012 (talk) 18:54, 16 March 2021 (UTC)

Kanashimi, there hasn't been further comment, but given the lack of opposition, I think it should be safe to go ahead based on the previous consensus, when you have time. (I might not be very active for some time, so please leave me a talk page message if I don't respond to pings.) --Paul_012 (talk) 09:25, 14 April 2021 (UTC)
@Paul 012: BRFA filed --Kanashimi (talk) 10:07, 14 April 2021 (UTC)

@Paul 012: Sorry, it seems some pages were modified during the interval while we were waiting for the task to be approved. Can you check and update Wikipedia:WikiProject Thailand/Thai name sort keys again? Thank you. For example,

And how do we deal with pages moved in the future? When a page is moved, the sort key will not follow the change. --Kanashimi (talk) 00:36, 31 May 2021 (UTC)

Kanashimi, list updated. I think future page moves can be adequately dealt with manually—editors moving pages are already expected to take care of updating the category sort keys, as the general rule. --Paul_012 (talk) 15:32, 31 May 2021 (UTC)
@Paul 012 Please check Wikipedia:WikiProject Thailand/Nonbiographical pages transcluding Thai name categories Kanashimi (talk) 23:56, 1 June 2021 (UTC)
@Kanashimi: Done. There were a few false positives, most of which were due to the article lacking birth/death year categories, which I've rectified. The list does seem to include quite a few articles that shouldn't have been picked up, though. For example, Made in Thailand doesn't transclude any Thai-people categories, though it is in Category:Carabao (band) albums, which is under Category:Carabao (band); it seems there's some unintended category recursion taking place. --Paul_012 (talk) 15:15, 2 June 2021 (UTC)
@Paul 012 OK, I have fixed this. May I mark phase 1 as done? Kanashimi (talk) 21:09, 2 June 2021 (UTC)
The last round of edits still had problems where the DEFAULTSORT wasn't added to articles which didn't already have one (e.g. Special:Diff/1026370887). Could you check this? --Paul_012 (talk) 05:29, 3 June 2021 (UTC)
@Paul 012 Fixed. Please check the new round. Wikipedia:WikiProject Thailand/Nonbiographical pages transcluding Thai name categories should be correct now. Sheikh Ahmad (nobleman of Siam) seems strange... Kanashimi (talk) 07:34, 3 June 2021 (UTC)
Kanashimi, all seems sorted out, from what I've checked. The Sheikh Ahmad article was recently created, so it wasn't added to the list, but it's now manually tagged so shouldn't be a problem going forward.

I know I'm late to the party but would it make any sense to sort the Thai names as [[Category:Thai foos|{{PAGENAME}}]] (literally the word PAGENAME in braces) so they will update automatically if the page name changes? That would include parenthetical qualifiers, but consistently sorting Foo bar (footballer born 1900) before Foo bar (footballer born 2000) might not be a bad thing. Non-Thai names in Thai categories could either follow suit to sort consistently (often by given name) or simply omit the sort code to sort by DEFAULTSORT (normally surname). Certes (talk) 09:45, 2 June 2021 (UTC)

@Certes: Hmm. I never considered that. Not sure how acceptable it would be in the community's view, given that using the magic word directly in articles doesn't seem to be usual practice, but it should indeed produce no difference in output in articles with the regular name structure, and I can't think of specific examples where this would be detrimental. The change will probably need to be proposed at WP:NAMESORT first, though. --Paul_012 (talk) 15:15, 2 June 2021 (UTC)
My suggestion wouldn't handle Sheikh Ahmad (nobleman of Siam), but then nor would the current method or any other proposal I've seen. A few pages will always need manual attention. Certes (talk) 08:47, 3 June 2021 (UTC)
Certes, the initial bot run is now complete, but if the PAGENAME option is agreed upon it could be later implemented via the planned weekly update runs. I think the bot can stick to the original method for now; maybe other considerations will arise as time goes by that will also be worth re-evaluating. --Paul_012 (talk) 13:45, 3 June 2021 (UTC)

Paul_012 I have started running the routine version; it modified 2 pages. Please check this round. --Kanashimi (talk) 23:31, 3 June 2021 (UTC)

Hmm. Fahlan Sakkreerin Jr. demonstrates a minor issue where the sorting rules (not to include "Jr.") are a bit more complex than copying the article title, but since it doesn't produce any difference in output, I think we can live with that. The weekly runs should be good to continue. On what days do you plan to run the task (so I can remember to check)? --Paul_012 (talk) 07:12, 4 June 2021 (UTC)
@Paul 012 When are you free? Kanashimi (talk) 07:39, 4 June 2021 (UTC)
@Kanashimi: Right now, whenever is fine. --Paul_012 (talk) 14:24, 4 June 2021 (UTC)
I set the task to run on Saturdays, so you may check it at the weekend. It seems we can close this request then? Kanashimi (talk) 20:36, 4 June 2021 (UTC)
Okay, marking as  Done. --Paul_012 (talk) 16:17, 5 June 2021 (UTC)

There's currently a boatload of raw IPv4/IPv6 addresses used in URLs, instead of something legitimately useful to readers. Is there a way to parse/update a link like

to

by bot? Or something similar/close to this? I fully expect most such links to not be recoverable, but there could be a few that are. Headbomb {t · c · p · b} 23:01, 28 May 2021 (UTC)

I am not sure this could be done in any automated fashion by a bot given reverse DNS lookups are often unreliable and inaccurate. IPs change ownership, or host multiple domains, or have a hostname that would not be the correct target anyway. To pick a silly example, google.com just resolved to 172.217.7.14 for me, and while https://172.217.7.14/search?q=earwig seems to work and could theoretically end up in an article somehow, that IP's reverse DNS is lga25s56-in-f14.1e100.net which is clearly not what we want. Certainly tools could be used to build lists of possible replacements that could be manually reviewed, and a bot could perhaps operate on that, or we could use an existing method like WP:URLREQ. — The Earwig (talk) 00:18, 29 May 2021 (UTC)
Yes, might be better as some kind of tool... perhaps with a bot to compile/build basic suggestions and have humans review things, possibly assisted by a script. Headbomb {t · c · p · b} 02:47, 29 May 2021 (UTC)
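
A small sketch of the "build a list for human review" idea, using a plain reverse-DNS lookup; as noted above, the returned hostname is often not the right replacement, which is exactly why this would only feed a review list (IPv6 is not handled here):
  import re
  import socket

  IP_URL = re.compile(r'https?://(\d{1,3}(?:\.\d{1,3}){3})')

  def suggest_hostnames(wikitext):
      """Yield (IP, reverse-DNS hostname or None) pairs for raw IPv4 URLs."""
      for ip in set(IP_URL.findall(wikitext)):
          try:
              host, _aliases, _addresses = socket.gethostbyaddr(ip)
          except OSError:  # no PTR record, lookup failure, etc.
              host = None
          yield ip, host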

mass linking and reffering

bot for linking stuff in wikipedia


FizzoXD (talk) 03:31, 9 June 2021 (UTC)

@FizzoXD: There isn't enough information in this request for anyone to be able to design a bot from it. "Linking stuff" could mean any number of things, and nearly all linking decisions are going to be context-dependent, which makes them bad tasks for a bot. If you do have some specific type of linking that can be performed universally, please expand on your request. Vahurzpu (talk) 05:00, 9 June 2021 (UTC)


By linking things I mean linking to other Wikipedia articles. Like, if there is a word like "internet meme", the bot would link it to that page by editing it. FizzoXD (talk) 05:31, 9 June 2021 (UTC)

As indicated above, not only is this a WP:CONTEXTBOT issue, it's a WP:CONTEXT issue as well; we don't link every use of a term on every page that uses it. Primefac (talk) 11:47, 9 June 2021 (UTC)
@FizzoXD: I find the Find link tool to work well for linking stuff, such as deorphaning an article. Hope this helps! GoingBatty (talk) 04:59, 20 June 2021 (UTC)

Bot for Challenges projects

I would like to request that a bot start putting a project box on all articles that appear in the Wikipedia:The 2500 Challenge (Nordic). Plenty of other projects have this kind of box on their articles' talk pages, like at Talk:Gunnar Seijbold. The project is growing bigger. BabbaQ (talk) 11:59, 5 May 2021 (UTC)

I see now that the bot has stopped working for Wikipedia:WikiProject Europe/The 10,000 Challenge as well. I know that the bots HasteurBot and AnomieBOT did the work for a while. --BabbaQ (talk) 12:00, 5 May 2021 (UTC)
Is there a template in Category:Wikipedia article challenge templates? By the way, are there any bots continuing the work of Wikipedia:Bots/Requests for approval/HasteurBot 15? --Kanashimi (talk) 07:09, 7 May 2021 (UTC)
Yes, it is template:WPEUR10k that has been used for the Europe project. I also request that a similar box be made for the Nordic challenge. As far as I can see, no bot has taken over HasteurBot's work; no newly added articles receive the box on their talk pages.--BabbaQ (talk) 21:38, 8 May 2021 (UTC)
Since there are many challenge templates, should we integrate them together, like {{WikiProject banner shell}}, {{Article history}}, {{Multiple issues}} or {{Redirect category shell}}? Kanashimi (talk) 21:45, 8 May 2021 (UTC)
Great idea. If something can be done, that would be appreciated, and then a bot could place the templates on the respective project articles' talk pages. BabbaQ (talk) 23:28, 8 May 2021 (UTC)
BRFA filed. Please give some ideas there, and tell me the template name once it is created. --Kanashimi (talk) 23:49, 8 May 2021 (UTC)
Just as a note, the other templates in Category:Wikipedia article challenge templates have been nominated to be merged into their related WikiProject banners. It might be better to hold off creating the banner template until the discussion has ended. Primefac (talk) 13:01, 10 May 2021 (UTC)

Bot to repair broken peer review links

Summary of problem
Summary of bot request
  • There is now a way to provide the article title of the page when it was reviewed (e.g. Special:Diff/986964116)
  • I would like help with a bot to go through the 700 or so reviews with broken links to provide this information and fix the link
  • User:AWMBot by BJackJS was created for this (Wikipedia:Bots/Requests for approval/AWMBot ) but unfortunately the editor is on a long wikibreak and this occurred during the bot approval process. There is one remaining problem that stopped the bot from being approved, which is that for some reason the link is duplicated when the bot runs through some articles. (see also Special:Diff/986964116)
  • I am hopeful finishing this request may be as simple as picking up User:AWMBot's code and making some small fixes.

Hopefully once a bot has gone through those articles, there may only be a few additional cases that I can manually fix. Unfortunately 700 is too much for me to do manually :(. Thanks I hope! --Tom (LT) (talk) 09:45, 2 April 2021 (UTC)

Update: bot request has been refiled at Wikipedia:Bots/Requests for approval/AWMBot 2. Tom (LT) (talk) 05:00, 11 May 2021 (UTC)

Broken section links

I really thought we had a bot or several working on this, and it seems it was brought up as recently as last year, but I just had to make yet another manual fix, so... We really need a bot that reliably fixes section links when section names are changed. Preferably one that stays online for more than a few weeks before it stops working.[understatement] {{u|Sdkb}}talk 06:58, 21 May 2021 (UTC)

Yes Dexbot stopping on this task was recorded at Wikipedia:Bot activity monitor/Report ("0 actions in last 1 week, expected at least 1. Last seen 5 May 2021"). @Ladsgroup would you like automated notifications when the bot stops again? If so, add |notify=Ladsgroup to the bot's config line on Wikipedia:Bot activity monitor/Configurations. – SD0001 (talk) 07:06, 21 May 2021 (UTC)
@SD0001 The bot has a monthly cron, so it's natural that its last run was in early May, and as you can see in this link it did fix hundreds just in early May. @Sdkb The bot I wrote can fix up to a certain portion of broken section links, but the rest are too complicated for a bot to tackle and need manual attention. I've already started doing some basic NLP work in the code to detect the new section, and more than that would cause a lot of errors. Ladsgroupoverleg 07:12, 21 May 2021 (UTC)
Thanks both for the info. Would making the edit period more frequent perhaps help the bot catch some of these instances before the changes become too much to handle automatically? For some high-traffic redirects, a broken anchor for a month could affect a sizable number of readers. {{u|Sdkb}}talk 07:30, 21 May 2021 (UTC)
Just a note: user:cewbot fixes broken anchors for recently edited articles. I may do a global scan, although it would certainly take a long time... Kanashimi (talk) 23:13, 21 May 2021 (UTC)
By the way, I think a note tag like the one at w:ja:ノート:2009年の日本 would be beneficial for fixing broken anchors. How about doing this on enwiki? Kanashimi (talk) 23:52, 21 May 2021 (UTC)
@Kanashimi You mean that when a redirect is broken and the bot is unable to fix it, the redirect be templated to summon a human to fix it? That sounds like a good idea. – SD0001 (talk) 11:47, 22 May 2021 (UTC)
Yes, this is just what I mean. Moreover, when the bot finds that the link has been fixed, it will remove the notice automatically, like this. Kanashimi (talk) 12:11, 22 May 2021 (UTC)
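
For anyone picking this up later, a minimal sketch of the anchor check itself; the heading-to-anchor normalisation is deliberately simplified (MediaWiki's real anchor encoding, and {{anchor}} templates, are not handled):
  import re

  HEADING = re.compile(r'^={2,6}\s*(.*?)\s*={2,6}\s*$', re.MULTILINE)

  def section_anchors(target_wikitext):
      """Plain-text anchors from section headings (simplified)."""
      anchors = set()
      for heading in HEADING.findall(target_wikitext):
          plain = re.sub(r'\[\[(?:[^|\]]*\|)?([^\]]*)\]\]', r'\1', heading)  # unpipe links
          plain = re.sub(r"'{2,}", '', plain)  # strip bold/italic markup
          anchors.add(plain.replace(' ', '_'))
      return anchors

  def anchor_is_broken(link_target, target_wikitext):
      """True if a 'Page#Section' link points at a heading that no longer exists."""
      if '#' not in link_target:
          return False
      fragment = link_target.split('#', 1)[1].replace(' ', '_')
      return fragment not in section_anchors(target_wikitext)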

Replace Template:Tbullet with Template: Demo inline

{{demo inline}} is similar to {{tbullet}}, but the former supports an unlimited number of named and unnamed parameters. {{tbullet}} is more widely used, but can only support 6 unnamed parameters. I suggest replacing this:

{{tbullet|t|1|2|3|4|5|6}}

with this:

* {{demo inline|<nowiki>{{t|1|2|3|4|5|6}}</nowiki>}} JsfasdF252 (talk) 22:27, 24 April 2021 (UTC)

You may be looking for Wikipedia:Templates for discussion. – Jonesey95 (talk) 22:46, 24 April 2021 (UTC)
Perhaps the author would like some bot assistance with mass-replacing the template? Not sure if that's the intention or if it's appropriate (didn't look into this thoroughly). EpicPupper (talk) 23:21, 24 May 2021 (UTC)
Unless there is a consensus to do so, then it will not be done. As Jonesey says above, if the template really should be replaced then TFD is the way to do it. Primefac (talk) 12:21, 25 May 2021 (UTC)

Featured topic bot

Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long overdue, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time-consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 (talk) 01:14, 4 May 2021 (UTC)

Doing... Aza24, hello friend. I started on this one tonight. You're right, this is quite complicated. Hopefully I am disciplined enough to complete this one. Feel free to ping me every once in awhile to keep me on task! I may ask you some questions once I get a little farther along. Code so far: task2-promote-topics.phpNovem Linguae (talk) 11:10, 18 May 2021 (UTC)
Is there a way to determine whether an article is a featured topic nominee vs a good topic nominee, purely from its nomination page? Example: Wikipedia:Featured and good topic candidates/Fabula Nova Crystallis Final Fantasy/archive1Novem Linguae (talk) 12:36, 18 May 2021 (UTC)
A topic has to be at least 50% to be considered Featured. I guess that would be hard to figure out for a bot, right? GamerPro64 02:08, 19 May 2021 (UTC)
Many thanks for taking this up Novem! Yeah, Gamer's comment is the only way to tell—though we could probably add a parameter to the template if that won't work? Aza24 (talk) 05:26, 19 May 2021 (UTC)
Aza24, GamerPro64. Thanks for explaining how that works. I'll make a note. Work is slow but progressing. Link to GitHub.Novem Linguae (talk) 13:29, 25 May 2021 (UTC)

Replace external links to Wikimedia commons with interwiki link

Following up from this discussion about converting links to Wikimedia commons from http → https, it was decided a better option is to convert "external" links (i.e only those enclosed in [...]) to interwiki links since it provides better protection against WP:LINKROT. For example [http://commons.wikimedia.org/wiki/File:Example.jpg Wikimedia commons][[:commons:File:Example.jpg|Wikimedia commons]]

There are currently about 4,100 main-space pages that use an http or https external link to Commons. Most of them can be replaced with interwiki links. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 17:58, 28 May 2021 (UTC)

Just to double check, shouldn't that be [[:commons:File:Example.jpg|Wikimedia commons]]? Primefac (talk) 20:04, 28 May 2021 (UTC)
Yes, corrected. Thanks. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 02:11, 29 May 2021 (UTC)
If someone else gets to it first, that's fine, but I can probably swing this. Primefac (talk) 10:43, 29 May 2021 (UTC)
BRFA filed, withdrawn due to it not being a suitable task for a bot. Primefac (talk) 00:23, 30 May 2021 (UTC)
Oh. I thought it would pass easily and did not expect this. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 03:26, 30 May 2021 (UTC)
Neither did I, but apparently the dozen pages I checked were not indicative of the entire set. Too much in the way of CONTEXT issues. Primefac (talk) 10:48, 30 May 2021 (UTC)
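
For the record, the mechanical conversion that was proposed looks roughly like this; as noted just above, many of these links need human judgment, so this is illustration only rather than a bot-ready replacement:
  import re
  from urllib.parse import unquote

  COMMONS_EXT = re.compile(
      r'\[https?://commons\.wikimedia\.org/wiki/([^\s\]]+)(?:\s+([^\]]*))?\]')

  def _to_interwiki(match):
      target = unquote(match.group(1)).replace('_', ' ')
      label = match.group(2)
      return ('[[:commons:%s|%s]]' % (target, label) if label
              else '[[:commons:%s]]' % target)

  def convert(wikitext):
      """Convert bracketed external links to Commons into interwiki links."""
      return COMMONS_EXT.sub(_to_interwiki, wikitext)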

Merging GA template into article history

There are 210 talk pages that transclude both {{GA}} and {{article history}}. A bot could integrate the GA template data into the latter, to reduce template clutter. – SD0001 (talk) 18:15, 1 June 2021 (UTC)

For 210 edits, that might be easier for an AWB user to perform. Primefac (talk) 18:16, 1 June 2021 (UTC)

Remove dead links from book and journal citation templates with identifiers

The March 2021 cleanup backlog for the Medicine WikiProject is currently dead links on articles that start with the letter A. About a quarter to a third of the list was dead links in "cite journal" and "cite book" templates (Template:Cite journal and Template:Cite book) that contain identifiers such as ISBN, DOI, or PMID. A URL is not necessary in these references because identifiers are used. Using the March backlog as a sample and considering the size of the dead link category for the Medicine WikiProject as a whole (currently around two thousand), there are potentially thousands of dead links site-wide that fall into this type of dead link. Removing a single one of these dead links is simple, but finding all of them and making a large number of tedious edits is very time-consuming, so this seems like a task a bot could do. Note that |access-date= and other URL-related parameters would also be removed. An example of what the bot edits would look like. Velayinosu (talk) 04:09, 25 March 2021 (UTC)

Just to add another voice to this and try to gin up some interest here, the task would be to go through each page in Category:All articles with dead external links (251,000 pages have 1+ dead EL!). If the dead link is a URL within {{Cite journal}} or {{Cite book}}, and that template already includes a stable identifier (I think any of the ~26 parameters in Template:Cite_journal#Identifiers will do?), then we don't actually need to fix the dead URL since a stable identifier is pointing to the correct location. So the dead URL and the maintenance tag can be safely removed. This will help us prioritize our time to address dead links that require human intervention. Also the brave bot-operator to take this task up will probably be responsible for the largest drop in articles-with-dead-links of all time. Certainly worth bragging rights at Wikipedia talk:Backlog. Happy to address questions or concerns. Ajpolino (talk) 07:57, 31 March 2021 (UTC)
@Velayinosu and Ajpolino: I will accept this if still useful, initially just for the Medicine WikiProject. William Avery (talk) 21:01, 12 May 2021 (UTC)
@William Avery: that would be great! Still useful. Petscan shows 2113 articles currently tagged for WikiProject Medicine and in Category:All articles with dead external links. I think a conservative version of this bot would remove the tag only if the reference is in the cite book/cite journal templates, and the provided identifier links to the full text (i.e. |doi=, |jstor=, |pmc=, or |pmid=). If you have questions/concerns, let me know. Thanks! Ajpolino (talk) 15:12, 14 May 2021 (UTC)
Ajpolino and William Avery: A couple of thoughts: URLs typically appear in |url=, but there are many other places in a template a URL might be located. See the CS1|2 Lua Configuration source and search on "url". Since it has a {{dead link}}, it is unlikely to have an |archive-url= + |archive-date= + |url-status=, but I have seen it; the possibility exists, and they should be removed as well. Let's see... it could end up removing a dead URL that can be saved via Wayback, and this Wayback contains the full PDF, while the DOI link doesn't contain the full PDF. One way to tell is if the template has a |doi-access=free, which flags that the full PDF is freely available at the identifier-produced URL. Pinging Nemo, who is more knowledgeable... @Nemo bis:. -- GreenC 16:53, 14 May 2021 (UTC)
Yes, good point. Perhaps a better starting list is articles that InternetArchiveBot has already attempted and failed to find a Wayback link for, i.e. Category:Articles with permanently dead external links. That list is about 190,000 total articles; 1,541 tagged for WP Medicine. Ajpolino (talk) 17:06, 14 May 2021 (UTC)
I intend to take a look at this task in the coming week. It sounds like removing permanently dead links where there is |doi-access=free will be the least controversial step. Any advice on whether further discussion or notice is required at other venues before filing a BRFA will be gratefully received. William Avery (talk) 08:28, 16 May 2021 (UTC)
 Doing... Working on scanning for suitable cases at the moment. At this stage the bot will create lists such as User:William_Avery_Bot/testsample. William Avery (talk) 20:05, 19 May 2021 (UTC)
Great! My impression is that |doi-access= is used relatively rarely (unless a bot has been adding it?), but I agree it's the most straightforward task. So let's see how wide a net that is, and if we then want to test a broader set of restrictions we can do so. Thanks again William Avery! Ajpolino (talk) 20:32, 20 May 2021 (UTC)
@GreenC: Am I right in thinking that I should be ignoring deadlinks with an associated {{cbignore}}, such as this? Any deadness there seems to have been transient, BTW. William Avery (talk) 16:02, 23 May 2021 (UTC)
Oxfordjournals is a known difficult case and probably should not be used as an example; in this example the dead link should be removed. Generally though, cbignore is just a flag to tell IABot not to process the citation because it was already done by WaybackMedic, and this helps prevent bot wars when there is disagreement. -- GreenC 22:08, 23 May 2021 (UTC)
Still doing... - Today's discovery is that when the url parameter is removed, any access-date parameter must be removed too, otherwise a template error is produced. William Avery (talk) 21:50, 28 May 2021 (UTC)
@Ajpolino: BRFA filed - strictly free access only for now. William Avery (talk) 08:41, 2 June 2021 (UTC)
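
For readers following along, a minimal sketch of the conservative free-access-only version discussed above, including the |access-date= removal noted a few lines up; mwparserfromhell is assumed here as tooling, and this is not William Avery's actual code:
  import mwparserfromhell

  REMOVE_WITH_URL = ('url', 'access-date', 'accessdate', 'url-status',
                     'archive-url', 'archive-date')

  def drop_dead_urls(wikitext):
      """Remove |url= (and related parameters) from cite journal/book templates
      whose DOI is flagged as linking to free full text."""
      code = mwparserfromhell.parse(wikitext)
      for tpl in code.filter_templates():
          name = tpl.name.strip().lower()
          if name not in ('cite journal', 'cite book'):
              continue
          if not (tpl.has('doi-access') and
                  tpl.get('doi-access').value.strip() == 'free'):
              continue
          if tpl.has('url'):
              for param in REMOVE_WITH_URL:
                  if tpl.has(param):
                      tpl.remove(param)
      # The adjacent {{dead link}} tag would also need removing; that step
      # needs position-aware handling and is omitted here.
      return str(code)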

Replace User:PumpkinSky's signatures

I am looking for anyone to take a task of replacing signatures of PumpkinSky (talk · contribs). Their old signature had <font>...</font> tags which are creating obsolete html tag Lint errors in all pages that have their signature. Regex search shows that the signature is currently in 1,081 pages across namespaces. To remove the errors, the font tags need to be replaced with span tags.

[[User:PumpkinSky|<font color="darkorange">Pumpkin</font><font color="darkblue">Sky</font>]] [[User talk:PumpkinSky|<font color="darkorange">talk</font>]] need to be replaced with [[User:PumpkinSky|<span style="color: darkorange;">Pumpkin</span><span style="color: darkblue;">Sky</span>]] [[User talk:PumpkinSky|<span style="color: darkorange;">talk</span>]]

ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 16:47, 11 May 2021 (UTC)

If we're going to have a bot do this, we should probably think a little bigger and compile a list of regexes, like these:
Extended content
  str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿րևանցիԵ]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>');
  str = str.replace(/<font style="colou*r:["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>');
  str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* size="*([\dpxem\. ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; size:$2;">$3<\/span>');
  str = str.replace(/<font face *= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\';">$2<\/span>');
  str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* face= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; font-family:\'$2\';">$3<\/span>');
  str = str.replace(/<font face= *"* *([a-z ]+)"* colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\'; color:$2;">$3<\/span>');
  str = str.replace(/<font style *= *"color:([#a-z\d ]+);" *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>');
  str = str.replace(/<font style *= *"([:#a-z\d ;\.\-]+)" *>([a-z\d_— \'’&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="$1">$2<\/span>');

  str = str.replace(/(\[\[User:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4');
  str = str.replace(/(\[\[User[ _]talk:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4');
  str = str.replace(/(\[\[Special:Contributions\/[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4');
There are more at User:Jonesey95/AutoEd/doi.js. Someone more skilled at regex construction could no doubt make something more comprehensive. – Jonesey95 (talk) 20:05, 11 May 2021 (UTC)
I will be happy if font tags are replaced more broadly. Some more signature regex can be found in User:ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ/common.js. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 02:43, 12 May 2021 (UTC)
Pinging @Ahecht: who had done similar bot jobs in past. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 02:47, 12 May 2021 (UTC)
I really only ran ahechtbot for signatures whose formatting was bleeding out onto other text on the page. While I did replace font tags while I was fixing other issues, doing it just to replace the font tags doesn't really seem worth the effort of going through another BRFA, doing trial runs, etc., especially when it makes no difference to readers. Obsolete HTML Tags are listed as "Low priority" errors, which makes it hard to label them as "egregiously invalid HTML" per WP:COSMETICBOT. --Ahecht (TALK PAGE) 17:29, 12 May 2021 (UTC)
I had the same doubt about font tags and asked at WP:VPT. There seems to be a general sentiment that Linter errors, regardless of priority, can be fixed by bots. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 18:11, 12 May 2021 (UTC)
BRFA filed. Using AWB for now, with the original regex search listed above in the original post. EpicPupper (talk, contribs) 17:59, 2 June 2021 (UTC)
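
Since the old and new signature strings are fixed, the narrow version of this task reduces to a literal replacement; a minimal sketch (the broader font-to-span regexes above are a separate, larger job):
  OLD_SIG = ('[[User:PumpkinSky|<font color="darkorange">Pumpkin</font>'
             '<font color="darkblue">Sky</font>]] '
             '[[User talk:PumpkinSky|<font color="darkorange">talk</font>]]')

  NEW_SIG = ('[[User:PumpkinSky|<span style="color: darkorange;">Pumpkin</span>'
             '<span style="color: darkblue;">Sky</span>]] '
             '[[User talk:PumpkinSky|<span style="color: darkorange;">talk</span>]]')

  def fix_signature(wikitext):
      """Replace the obsolete <font> signature with the <span> equivalent."""
      return wikitext.replace(OLD_SIG, NEW_SIG)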

Please replace 265–420 with 266–420 in the following articles

These

Thank you! --ExperiencedArticleFixer (talk) 16:41, 3 June 2021 (UTC)

Is this a typo or a link change? Primefac (talk) 16:45, 3 June 2021 (UTC)
It’s a common historical mistake in dating that has been corrected in all main articles dealing with that dynasty, but remains uncorrected in hundreds or thousands of articles. --ExperiencedArticleFixer (talk) 16:49, 3 June 2021 (UTC)
Fair enough, though it's only 1211 pages. Primefac (talk) 16:54, 3 June 2021 (UTC)
Thank you! --ExperiencedArticleFixer (talk) 16:55, 3 June 2021 (UTC)
An editor making these changes may want to verify the dates. There is a bit of discussion here. There may be more. – Jonesey95 (talk) 17:15, 3 June 2021 (UTC)
Given the initial post was in 2017 and the move in 2019, I'm thinking that there have been sufficient eyes to ensure accuracy.
I guess what I mean is, what sort of verification are you referring to? Primefac (talk) 17:17, 3 June 2021 (UTC)
BRFA filed, convo continues there. Primefac (talk) 17:25, 3 June 2021 (UTC)
I know that you know what you're doing, but when I implement a request like this, I just like to do a quick check to ensure that it won't have to be undone. You know, trust, but verify. – Jonesey95 (talk) 17:42, 3 June 2021 (UTC)
That's a fair point, and I've been a little quick to judge those who question me lately... might be time for a deep breath. Primefac (talk) 17:46, 3 June 2021 (UTC)

Extracting data from a list

Hey, was wondering if anyone here could help me extract the data from the templates listed at Template:Subatomic particle/symbol and Template:Subatomic particle/link and post it in one of my user pages? The data in each template is a single line, so there is nothing special here. --Gonnym (talk) 17:09, 13 July 2021 (UTC)

@Gonnym, hi, could you please elaborate? What format do you want the extracted version to be in? Any other details? 🐶 EpicPupper (he/him | talk, FAQ, contribs | please use {{ping}} on reply) 01:59, 16 July 2021 (UTC)
Look at User:Gonnym/Subatomic particle/symbol & User:Gonnym/Subatomic particle/link. If you want the contents of the templates in the page then just use subst: against each template to update the page. -- WOSlinker (talk) 08:03, 16 July 2021 (UTC)
Thanks both, exactly what I needed! Gonnym (talk) 15:43, 16 July 2021 (UTC)

List of articles with multiple problem templates

See PINOFF, my earlier request. The bot would look for pages with large numbers of templates like {{Citation needed}} and add them to a list in its own userspace. It would never edit pages outside its own userspace; users who wanted to view the output would just transclude the page via template. Does this sound like something that needs a bot? 'Ridge(Converse, Create, & Fascinate) 15:17, 22 June 2021 (UTC)

@Unoriginally Named Editor: Wikipedia:Database reports/Pages contains too many maintenance templates looks similar to what you're asking about; does it meet your needs? In the specific case of {{citation needed}}, there's also the complicating factor that the pages most in need of citations often just have a single {{Unreferenced}} rather than a bunch of individual CN tags. Vahurzpu (talk) 15:58, 22 June 2021 (UTC)
How did I miss that?trout Self-trout 'Ridge(Converse, Create, & Fascinate) 16:45, 22 June 2021 (UTC)

theinquirer.net domain taken over; all deep links dead

Moved to WP:URLREQ#theinquirer.net_domain_taken_over;_all_deep_links_dead

Creating a list of userpages that have been edited by the editor once

I am requesting a bot that goes through user pages and creates a list of pages whose users have made edits only to their own pages.

The intent is to tag these user pages as {{Db-notwebhost}}.Catchpoke (talk) 00:46, 22 April 2021 (UTC)

That would also potentially tag drafts which are not eligible for speedy deletion under that criterion. SSSB (talk) 07:13, 22 April 2021 (UTC)
So then we use the bot to create a list rather than to tag pages. Editors can assess eligibility of individual pages for {{Db-u5}}.Catchpoke (talk) 16:53, 22 April 2021 (UTC)
Does Special:NewPagesFeed not already serve this purpose? Tol | Talk | Contribs 20:22, 23 April 2021 (UTC)
No, it doesn't seem to. There are many user pages which are no longer new and are older than 6 months which are eligible for deletion per the draft policy. Catchpoke (talk) 20:28, 23 April 2021 (UTC)
I think this request might be better initially filed at WP:SQLREQ. I put a query into Quarry that might roughly list a minuscule sample of the pages you are interested in: https://quarry.wmflabs.org/query/55065
This script will catch pages like User:Hoad, User:Martin_spen, User:SnoFyre, User:Eoinlane/LittletonConservationTrust. Whether they are worth tagging or deleting is a separate question that would need to be resolved before anybody wrote a bot to do it. William Avery (talk) 16:39, 12 May 2021 (UTC)
Wow, what an amazing tool. It looks like toggling the limit might help filter the pages. Catchpoke (talk) 23:05, 12 May 2021 (UTC)
Hhmm: I just noticed User:Mikedill has made more than 1 edit. If it were necessary, how would the query be modified so that it only lists users who made one edit?-- Catchpoke (talk) 04:15, 16 June 2021 (UTC)
https://quarry.wmflabs.org/query/56107 William Avery (talk) 20:40, 19 June 2021 (UTC)
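If SQL is not convenient, the single-edit refinement can also be approximated through the API's user list. A rough Python sketch; the usernames are placeholders taken from the examples above, and deleted edits are not reflected in editcount, so results would still need a human check:
  import requests

  API = "https://en.wikipedia.org/w/api.php"

  def single_edit_users(usernames):
      """Yield users from `usernames` whose total live edit count is exactly 1."""
      session = requests.Session()
      for i in range(0, len(usernames), 50):  # the API accepts up to 50 names per request
          resp = session.get(API, params={
              "action": "query",
              "list": "users",
              "ususers": "|".join(usernames[i:i + 50]),
              "usprop": "editcount",
              "format": "json",
              "formatversion": 2,
          }, timeout=30)
          resp.raise_for_status()
          for user in resp.json()["query"]["users"]:
              if user.get("editcount") == 1:
                  yield user["name"]

  # e.g. list(single_edit_users(["Hoad", "Martin spen", "SnoFyre"]))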

Using tagbot to mark orphaned page

Some newly created pages are orphaned, but they have not been marked with the orphan template. Is there a tag bot that can do this? --q28 (talk) 08:44, 9 June 2021 (UTC)

@Q28:  Question: Is there already a bot that does this? This is definitely a task that can be done, but I want to make sure that it's not duplicated by another bot. 🐶 EpicPupper (he/him | talk, FAQ, contribs) 21:49, 14 June 2021 (UTC)
@EpicPupper: There are no bots in operation for this. If there ever was one, it has stopped working by now. --q28 (talk) 11:07, 18 June 2021 (UTC)
@Q28: BRFA filed 🐶 EpicPupper (he/him | talk, FAQ, contribs) 16:20, 18 June 2021 (UTC)
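For reference, the core check such a task needs (does any other article link to the page?) can be done with the API's linkshere list. A minimal Python sketch, not the approved bot's actual code; it ignores the usual refinements such as skipping links that only come from navboxes or disambiguation pages:
  import requests

  API = "https://en.wikipedia.org/w/api.php"

  def is_orphan(title):
      """True if no mainspace, non-redirect page links to `title`."""
      resp = requests.get(API, params={
          "action": "query",
          "prop": "linkshere",
          "titles": title,
          "lhnamespace": 0,        # incoming links from articles only
          "lhshow": "!redirect",   # ignore redirects pointing at the page
          "lhlimit": 1,            # a single incoming link is enough to disqualify
          "format": "json",
          "formatversion": 2,
      }, timeout=30)
      resp.raise_for_status()
      page = resp.json()["query"]["pages"][0]
      return not page.get("linkshere")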

Cleanup of chronological data in NFL Draft tables

There are 298 list articles in Category:Lists of National Football League draftees by college football team. Of these, approximately 121 articles contain a table of records which are not in correct chronological order (they contain newer "2021" rows on top of older "19XX" rows).

Currently, all 121 articles in need of one-time cleanup contain sections with {{Chronological|sect=yes|date=June 2021}}. So that is potentially a hook to key off of.

There are two cases where automated cleanup should update the table so that it renders in chronological order (oldest YYYY rows first, newest YYYY rows last, AND preserving the existing top-to-bottom sequence of rows within a given year's draft by Round/Pick/Overall).

  1. List of Central Arkansas Bears in the NFL Draft#Selections - where there are no rowspans in the YYYY column. (Every year row has exactly 1 player selected)
  2. List of Central Michigan Chippewas in the NFL Draft#Selections - where there are some years with 2 or more players selected and the existing top-bottom sequence must be preserved. Ex, within the 2 "2019" rows, the "2 7 39 Sean Murphy-Bunting Tampa Bay Buccaneers DB" row must remain above the "6 22 195 Xavier Crawford Houston Texans DB" row. Same callout for 2010, 2007, 2005, etc.

Note, there might be a few per-article variations which do not use a section name of #Selections

Here are the other cases to consider, where no bot modification is desired:

  1. List of Ohio State Buckeyes in the NFL draft is a formatting outlier and no change is desired.
  2. List of Florida State Seminoles in the NFL Draft is a formatting outlier with multiple tables, but can be manually fixed (not in scope to this request).
  3. List of Penn State Nittany Lions in the NFL draft#Players is correctly sequenced, but without rowspans for players in the same YYYY Draft (there are no YYYY rowspans)
  4. List of Austin Peay Governors in the NFL Draft#Selections is in the correct chronological order AND without YYYY rowspans
  5. List of Amherst Mammoths in the NFL Draft#Selections is in the correct chronological order AND with YYYY rowspans

If this can be automated, I am happy to manually inspect all 298 articles and fix/revert any missed edge cases to the stable version where necessary. Updating these manually would be very slow and prone to error, and a task of this scope is unlikely to ever be completed by hand, even with WP:NFL project participation. So any automation would be an enormous time-saver and win. Cheers, UW Dawgs (talk) 23:41, 9 June 2021 (UTC)
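As an illustration of the core sorting step for the simplest case only (case 1, no rowspans in the YYYY column), here is a hedged Python sketch: split the wikitable into rows and stable-sort on the first year found, so rows sharing a year keep their existing Round/Pick/Overall order. The rowspan case (case 2) needs real table parsing and is not handled here.
  import re

  def sort_selections_table(table_wikitext):
      """Stable-sort a simple wikitable's data rows by their leading 4-digit year."""
      body, after = table_wikitext.rsplit('\n|}', 1)      # detach the table end
      chunks = re.split(r'\n\|-[ \t]*\n', body)           # "|-" separates rows
      header = chunks[0]                                  # "{|..." plus the header row
      rows = [c.strip('\n') for c in chunks[1:]]
      def year_key(row):
          match = re.search(r'\b(1[89]\d\d|20\d\d)\b', row)
          return int(match.group(1)) if match else 0
      rows.sort(key=year_key)                             # Python's sort is stable
      return '\n|-\n'.join([header] + rows) + '\n|}' + after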

Accepted @UW Dawgs: Could you please do a couple of things for me?
  • Drop a line onto WT:CFB to give chance for any other input, such as get confirmation that this is the desired direction of sorting, before a BRFA is filed.
  • Check that the sorted versions of the two tables you highlighted as examples appear correctly on User:William_Avery/Test_tablesort
Thanks, William Avery (talk) 14:01, 10 June 2021 (UTC)
Feedback from WP:CFB and WP:NFL projects
BRFA filed - It will be best to continue this discussion there.

Y Done @UW Dawgs: This bot couldn't help with List of Florida State Seminoles in the NFL Draft. William Avery (talk) 12:32, 22 July 2021 (UTC)

WP:OVERCITE warning for draft articles submitted

As an AfC reviewer I come across many draft articles with a disproportionate ratio of references to prose. A healthy number of such draft articles are on subjects that ultimately turn out to be notable, i.e. they are authored by newcomer editors with good intentions who are simply oblivious to WP:OVERCITE.

By developing and launching a bot that would show a warning notice for editors trying to submit a draft article triggering WP:OVERCITE filters, we would:

  1. Raise the awareness of newcomer good faith editors on citation rules and give them an opportunity to resolve the shortcomings in their draft;
  2. Contribute to a speedier resolution of the 4500+ AfC backlog by decreasing the number of reference-bloated drafts;
  3. Decrease the number of naive bad-faith editors bombarding drafts with references in the hope of creating a superficial appearance of notability.

Suggested filters that would trigger this notice could vary and be based on a community consensus. Examples of filters:

  1. Two or more sentences or words each carrying three or more references;
  2. A disproportionate ratio of references to every X words in the article.

Example text for the notice: "It seems like your draft is using too many references. Please keep in mind that draft articles are not accepted based on the number of references provided. On the contrary, citation overuse can delay review or even be a reason for a decline. Please see WP:OVERCITE and consider editing your draft accordingly." nearlyevil665 06:24, 12 June 2021 (UTC)
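As a very rough illustration of what such a filter could measure, here is a hedged Python sketch that counts <ref> tags against the remaining words of a draft's wikitext. The thresholds are placeholders rather than agreed numbers, and a real check would have to live inside the AfC submission workflow (an edit filter or an AFCH change) rather than a standalone script:
  import re

  REF = re.compile(r'<ref[^>/]*/>|<ref[^>]*>.*?</ref>', re.IGNORECASE | re.DOTALL)

  def overcite_check(wikitext, min_refs=10, words_per_ref=15):
      """Return (ref_count, word_count, flagged) for a draft's wikitext."""
      refs = REF.findall(wikitext)      # both <ref>...</ref> and self-closing <ref ... />
      prose = REF.sub('', wikitext)     # crude: templates and infoboxes still count as "prose"
      words = len(re.findall(r"[\w'-]+", prose))
      flagged = (len(refs) >= min_refs and words > 0
                 and words / len(refs) < words_per_ref)
      return len(refs), words, flagged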

I am noting here that WP:OVERCITE is an essay that contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, as it has not been thoroughly vetted by the community. Some essays represent widespread norms; others only represent minority viewpoints.
  • Since this is an essay & neither a policy nor a guideline, I question the need to enforce it.
  • It is unclear from the proposal whether this would apply only to drafts that have been submitted for approval. A plethora of citations might well be appropriate for a draft still in development.
  • I know that I always strive for a minimum of ten citations for biographical notability. I would hate to see a draft get warned for having too many citations in draft form then get nominated for deletion as an article when it has too few. It seems to me that the proposed bot should only apply when there is a minimum number of citations.
Peaceray (talk) 06:36, 12 June 2021 (UTC)
Hi, thanks for the input. I understand that WP:OVERCITE is technically an essay and your other reservations about BLPs. Let's go through them one by one:
  1. I agree this shouldn't be a nuisance for editors who have good intentions and are providing a proper set of references, especially for BLPs. This could be avoided through the following:
    1. Registered accounts could turn off this notification in their preferences;
    2. The filter would only be triggered for blatant citation overkill, such as having over 5 references for a single word or sentence. Very rarely could a BLP with only two sentences possibly justify 10 in-line references between them;
  2. The proposal would apply only when editors try to submit the draft for review. A notification would come up (much like the blacklisted source warning) which the editor could override by simply hitting publish again.
  3. As for your point about WP:OVERCITE being an essay, that was an oversight on my part. I'm not well-versed enough around Wikipedia to say whether that essay can be linked as recommended reading for newcomer editors.
nearlyevil665 06:58, 12 June 2021 (UTC)
I feel pretty strongly about having a minimum of 10 in-line references before the bot starts flagging them as WP:OVERCITE, even if they have only two sentences. Let me give a scenario or two.
  1. A woman who has no Wikipedia article wins a Nobel Prize in physics.[1] Or a previously unknown colonel in a small African country's army, again with no Wikipedia article, becomes head of a government through a coup d'etat.
  2. A non-autoconfirmed editor immediately creates a stub in the draft namespace with ten in-line references from contemporary news sources. This editor submits it for promotion, then gets the WP:OVERCITE message, so the editor removes some of the references.
  3. An autoconfirmed editor, perhaps even the creating editor who now may have enough edits to be autoconfirmed, moves the draft into the article space so that Wikipedia
  4. A deletionist tags it with a {{Db-person}} speedy delete notice.
  5. A well-meaning administrator who has not read the news or is not familiar with the field deletes the article.
I have been involved with article rescue before, & IMHO there are many deletionists who are trigger happy & who fail to do the due diligence of performing a rudimentary search to see if someone is actually notable or not. Also, IMHO, a disproportionate number of speedy deletion tags end up on women's articles.
I have also seen instances of administrators taking the speedy deletion tags at face value, rather than move an article into the draft namespace. I do not think I need to emphasize that it is more difficult to get an article created after it has been deleted.
While some may feel that I have described a hypothetical situation, I have been around long enough & have enough edits under my belt to assure you that the deletionist tagging & subsequent deletions are not isolated scenarios. It happens all the time.
In this regard, I strongly feel that the number of references needed to meet WP:BIO, the notability guideline, trumps the WP:OVERCITE essay. For that reason I will strongly oppose any bot that does not allow a minimum of ten in-line references before warning about WP:OVERCITE. Peaceray (talk) 20:51, 12 June 2021 (UTC)
It is now making sense to me too. I'd feel comfortable with a bot that'd get triggered to display the notice to the editor only after a certain threshold of in-line references has been cleared. Based on my experience with blatant citation overkill it's nearly always the extremes, e.g. this Draft:Savannah St. Jean, so the bot would still technically perform its function with a minimum of 10 or 15 in-line references. It could, for example, pick up citation overkill in the linked draft by triggering two conditions: 1. the draft has over 10 in-line references; 2. it has an unreasonable number of references for one particular paragraph (1 reference per every 4 words, for example, or multiple words with over 5 references). nearlyevil665 07:03, 13 June 2021 (UTC)
I am on board with that. Peaceray (talk) 20:14, 14 June 2021 (UTC)

References

  1. ^ Bazely, Dawn (2018-10-08). "Perspective - Why Nobel winner Donna Strickland didn't have a Wikipedia page". Washington Post. Retrieved 2021-06-12.
  • I am not at all OK with this.

There are many list-type articles, or articles containing a bibliography or scientific details, where a large number of citations is essential, and it is the accepted practice in medical articles to give what would probably be considered an excessive number in any other field. Any notice that might lead to people removing such references would be giving exactly the wrong advice. There are, however, several real problems, but I do not see how they are capable of easy solution by bot.

(1) Multiple citations for the same point, especially when some of the references are copies of each other.
(2) Inclusion of unnecessary references from low-quality sources, or from non-independent sources.
(3) Short articles where the references compose most of the article.
(4) Repeated references from the same source at multiple places in the same sentence.

I do not think articles are often declined for this reason alone; and if they are, it is incorrect, and should be brought to the attention of the deleting editor or, if necessary, to Deletion Review. Rather, the inclusion of excess referencing is often a sign of promotional editing, or editing by a fan. It is bad style, and while it is never correct to delete for style alone, bad style often indicates problems, and will certainly cause an article to be looked at carefully--perhaps even hyper-carefully. It's not currently concentrated on women; rather, a few years ago some of those running editathons and projects on undercovered areas were somewhat careless about ensuring that the articles written were of more than borderline notability. This did encourage the tendency of some editors with traces of misogyny to be over-critical in this area. But those running such projects have learned, and so have most of the misogynists. DGG ( talk ) 06:48, 16 June 2021 (UTC)

Help needed

I'm not really sure what bot is involved with this but we have had ongoing problems with Category:AfC G13 eligible soon submissions. When things are running normally, it holds between 4,000 -5,000+ draft articles that are between 5 and 6 months since their last edit. When they hit 6 months without a human edit, they are deleted per CSD G13. Also, reviewers from AFC (particularly DGG) go through this category and "rescue" promising drafts and postpone their deletion and sometimes even move good drafts into main space.

What has happened this past year is that this category starts going down to 3,000 drafts, 2,000 drafts, 1,000 drafts and now it is only holding 478 expiring drafts. When this has happened in the past, I have asked ProcrastinatingReader for help and he has been able to do some technical task that causes the category to, over a few days, fill back up again. Right now though, he can't get to this task and advised me to come here and ask for help.

I have little information to offer beyond a description of the problem. I have no idea what bot or template categorizes these drafts or what ProcrastinatingReader did to fix this problem. I know that having categories filled has been an ongoing problem because I have brought the issue to the Village Pump and other individuals several times over the past few years. So, I'm not sure what the fix would be. If you could find a permanent solution, that would be awesome. Thank you. Liz Read! Talk! 02:20, 16 June 2021 (UTC)

Here is the list of pages that should have been in that category, which number no fewer than 4,808. If all those pages are purged, the category will fill up. But IMHO this seems like an artificial solution when the real solution would be to use the database report directly. The way this particular category is populated will always be really unreliable (since it relies on an edit not being made to the draft, which is the opposite of what attracts the attention of the MW job queue). If people are relying on this exclusively to review old drafts, well, they're going to miss out on a lot. – SD0001 (talk) 11:49, 16 June 2021 (UTC)
Liz, If needed, we could have a bot dump the query that SD0001 gave (which is very similar to the one my bot uses for its G13 warnings) onwiki somewhere for people to review as they wish. firefly ( t · c ) 12:21, 16 June 2021 (UTC)
Well, whatever you did, it worked! Many thanks!
Personally, I rely on SDZeroBot's reports from SD0001 but to see expiring drafts, it involves going 7 days back into User:SDZeroBot/G13 soon's edit history and then sorting the page by date and time. This is no problem once you are used to it but I think AFC patrollers find it easier to use the Category:AfC G13 eligible soon submissions where the drafts that are expiring the soonest are at the top of the category and they can just scan the category. I know that SDZeroBot's reports are more complete than the category is which is why I use them. There is also a problem with using a 7-day-old page history from SDZeroBot, as you have to check the edit history of every draft to make sure there haven't been edits to the draft in the past 7 days, which is not the case when using the category, which removes drafts if they've been recently edited. I don't know how we'd make use of the query list, as the pages aren't linked on this list.
I'm not sure of the technical stuff you mention but both the category & SDZeroBot's reports serve a purpose, it's useful to have a 30 day, 7 day and current day list of drafts that will be expiring soon. The 30 day category helps us know how many drafts are expiring in the coming month, the 7 day report helps AFC reviewers like DGG target promising drafts that are expiring soon and postpone their deletion and the current day list helps admins track drafts that are expiring today.
Is "purging" something that I or DGG can do? Or maybe we could adapt the Firefly bot to tag eligible drafts a month ahead of time when it posts its talk page notices? Liz Read! Talk! 19:57, 16 June 2021 (UTC)
I ideally would work at 30 days using the category, but it's been months since I've been able to get that far. At the moment I am working at 1-2 days and don't have any margin. The category page is supposed to be sorted by time, and usually it is, and I can work on about 1.5 to 2 days' worth per day in the time I have for this -- if I don't find myself getting diverted and there's no emergency elsewhere on WP or in my real life. I agree with Liz that it's critical to have the entries linked to the drafts so pop-ups work. DGG ( talk ) 05:22, 17 June 2021 (UTC)
At the moment, Category:AfC G13 eligible soon submissions, shows nothing at all. What went wrong this time? DGG ( talk ) 23:37, 19 June 2021 (UTC)
It's displaying 3,909 pages for me. * Pppery * it has begun... 00:45, 20 June 2021 (UTC)
Looks fine to me. However, I still don't have an answer to my question of what can be done about this in the future. Should I just request help here every time the number of drafts falls below 1,000? Or is there some long term solution? Liz Read! Talk! 04:03, 20 June 2021 (UTC)
OK for me now. May have been a cache issue. DGG ( talk ) 07:43, 20 June 2021 (UTC)

Sounds like a task for User:Joe's Null Bot. According to toolforge:sge-jobs/tool/nullbot it's still operational, despite the warning on its page -FASTILY 22:16, 7 July 2021 (UTC)

I created a custom task to process this in a simpler manner. No onwiki list to manage (like User:ProcBot/PurgeList2), but this also means it will actually complete its runs. It'll run once a week. Current category count is just over 3k. It purges those three cats listed on User:ProcBot/PurgeList2, which I presume together contain all active AfC drafts. It will update the G13-expiring-soon category for pages in those three categories only. I still advise moving to a more sophisticated DB-generated system, such as SD's list. ProcrastinatingReader (talk) 11:26, 9 July 2021 (UTC)
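For anyone maintaining this later: the underlying fix is just a forced link update (a purge with forcelinkupdate, equivalent to a null edit) on every active AfC draft, so the time-based categories get re-evaluated. A minimal pywikibot sketch; the category name below is illustrative, the real targets are the three listed on User:ProcBot/PurgeList2, and batching/throttling is omitted:
  import pywikibot

  site = pywikibot.Site("en", "wikipedia")
  # Illustrative category; the real list is the one on User:ProcBot/PurgeList2.
  for cat_name in ["Category:Declined AfC submissions"]:
      category = pywikibot.Category(site, cat_name)
      for page in category.articles(namespaces=[2, 118]):  # User: sandboxes and Draft: pages
          page.purge(forcelinkupdate=True)                 # re-parse only, no edit is made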

Thanks so much, ProcrastinatingReader, I just noticed yesterday that the category had filled back up. It was getting close to 1000 drafts and I was ready to come back here for help next week and then it bumped up to over 3,000. Liz Read! Talk! 03:34, 11 July 2021 (UTC)

Anjana Chaudhari

Pls add my name bots user. Hind ji (talk) 06:19, 29 July 2021 (UTC)

@Hind ji: Could you please make your request more specific? Thanks! GoingBatty (talk) 00:24, 2 August 2021 (UTC)

Fixing proper name for The New York Times...

I regularly come across The New York Times in articles written only as "the New York Times". Would it be possible to have a bot find all instances of "New York Times", and ensure that the "The" before the instance is not only capitalized, but also italicized as part of the proper name of the publication? This could then also be repeated later for other publications where "The" is part of the proper name (The Boston Globe, The Herald Journal, The News Courier, The Plain Dealer, etc.)? - Adolphus79 (talk) 08:01, 22 July 2021 (UTC)

@Adolphus79: Seems to be supported by MOS:THETITLE. I'd be happy to do this presuming there is consensus to do so. GoingBatty (talk) 14:34, 22 July 2021 (UTC)
I think this might run afoul of WP:CONTEXTBOT. Consider the following sentence from Minnie Maddern Fiske:

According to the New York Times article "Ibsen or Shakespeare?" (March 18, 1928), Harrison Grey Fiske was 12 years old when he first set eyes on the future Mrs. Fiske—she was but eight, performing in a Shakespearean role.

Including the "the" as part of the title would make it grammatically incorrect. Vahurzpu (talk) 15:05, 22 July 2021 (UTC)
Is there a way to avoid that? "If not preceded by "a" ("a The NYT article") or another "the" ("the The NYT article"); then add the"? - Adolphus79 (talk) 16:07, 22 July 2021 (UTC)
Please don't. That would be horrible. – Jonesey95 (talk) 20:38, 22 July 2021 (UTC)
Don't what? - Adolphus79 (talk) 21:01, 22 July 2021 (UTC)
Don't write ("a The NYT article") or ... ("the The NYT article"). – Jonesey95 (talk) 21:20, 22 July 2021 (UTC)
I don't understand, if that is the proper name of the publication, why not write it as such? - Adolphus79 (talk) 21:23, 22 July 2021 (UTC)
It's grammatically wrong. I think. Or at least, jarring and confusing, non-typical style. -- GreenC 22:16, 22 July 2021 (UTC)
I agree. Maybe "a The NYT article" is technically correct grammar, but it is much easier to read "a NYT article". It is also much more common ([11] vs [12]). Vpab15 (talk) 22:40, 22 July 2021 (UTC)
MOS:THETITLE suggests that the leading article may be dropped when the title is used as a modifier: According to a New York Times article by .... Certes (talk) 00:43, 23 July 2021 (UTC)

I'm not going to disagree with consensus, nor ask the bot(master)(s) for an impossible task. It was just an idea I had at 4AM... lol - Adolphus79 (talk) 02:34, 23 July 2021 (UTC)

Beware that a few publications link to different articles when "The" is prefixed. Examples I deal with regularly include New Statesman magazine vs. The New Statesman sitcom, and St. Petersburg Times of Florida vs. The St. Petersburg Times of Russia. Certes (talk) 00:49, 23 July 2021 (UTC)
Here are some other cases where the bot might be useful. Some are not publications, notably Holocaust and Bahamas, and some might not be improved by adding "The". As a proxy for usage, I've limited the list to redirects with 100+ incoming wikilinks. Of course, there is no requirement to have any wikilinks at all, or even a redirect, but the filter excludes thousands of terms mentioned so rarely that deploying the bot might be a waste of effort. Certes (talk) 02:16, 23 July 2021 (UTC)
I understand, that is why I was focused directly on those that "The" was part of the official proper name. - Adolphus79 (talk) 02:34, 23 July 2021 (UTC)

Orphaned User Talk pages

Hey, Bot folks,

I accidentally happened upon a user talk page where the user page had been deleted almost exactly four years ago (see User talk:Bmoy94/sandbox/Innova Market Insights) but the user talk page was not deleted. This was a surprise to me because we have database reports for orphaned talk pages (see Wikipedia:Database reports/Orphaned talk pages and Wikipedia:Database reports/Orphaned file talk pages). Are User Talk pages exempted from these reports?

Obviously, this is not an urgent problem but it would be useful if there was a bot report for orphaned User Talk pages as well. Right now, many deletions are done with Twinkle which will delete redirects but not the redirect talk pages. If it is a regular article redirect talk page, it can show up on the orphaned talk page report or the broken redirects report but not all talk pages of redirects are also redirects and some redirects are from User pages. I don't think there are hundreds of these pages out there but it would be useful if, like the orphaned file talk page report, this could become a weekly report that is done. Thank you for considering this request. Liz Read! Talk! 22:46, 24 June 2021 (UTC)

Of course, it's perfectly normal for an editor to have messages at User talk:Example but not have created a User:Example page, and WP:G8 explicitly excludes those cases. As for subpages, there are 257,914 orphans, including 165,127 called User talk:Example/archive or similar. quarry:query/56182 Certes (talk) 00:25, 25 June 2021 (UTC)
Oh, I didn't mean THE User Talk page, but talk pages for sandboxes and drafts where the sandbox or draft has already been deleted. I don't know how to get from the quarry query page to a list of the actual pages so I could review them. Liz Read! Talk! 03:38, 25 June 2021 (UTC)
@Liz: Someone would have to rewrite the SQL to replace COUNT(*) with the page titles. GoingBatty (talk) 05:02, 25 June 2021 (UTC)
Yeah, I don't know whether that would be an easy project or a time-consuming commitment. Some help would be greatly appreciated but like I said, this would be a maintenance clean-up project rather than something that is urgently needed. I'm sure it would help tidy up pages that have been sitting around abandoned for years though. Liz Read! Talk! 05:53, 25 June 2021 (UTC)
That's a simple change: full list in quarry:query/56260. I expect you'll want to do some filtering on the pages but that can be done after download with your favourite programming language or a good text editor. Certes (talk) 11:22, 25 June 2021 (UTC)

I thought it would be "easy" to download the current list of titles (enwiki-20210620-all-titles.gz from here) and do some clever stuff to find orphaned talk pages. Problem: my first run found 16,228,343 orphaned pages! Some superficial checking showed that most of those were due to things like Talk:Example/Archive_1 or .../GA1 or .../FA1 or .../Test or .../OtherStuff. When I finally found a few dozen orphaned talk pages, they were tagged as "keep" (in Category:Wikipedia orphaned talk pages that should not be speedily deleted), examples Talk:Qazwsxedcrfvtgbyhnujmikolp, WT:AOTM, Category talk:Films about hebephilia, Draft talk:Anirban Sengupta. Then there are weird redirects like WT:MOS:VG. I finally found a junk page: TimedText talk:Constitution.ogg (and probably a few more). I'm posting this to let anyone wanting to take the job on know that quite a lot of pruning of results would be required. Johnuniq (talk) 11:16, 25 June 2021 (UTC)
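For anyone who wants to reproduce or refine that experiment, the core of it is a set difference over the all-titles dump. A rough Python sketch; it holds every title in memory (several GB for enwiki), maps only the handful of namespaces listed below, and all the pruning described above (archives, /GA1, /FA1, keep-tagged pages, bare user talk pages that are fine per G8) still has to happen afterwards:
  import gzip
  import re

  # talk namespace -> subject namespace (Talk, User talk, WT, File talk, Template talk, Category talk, Draft talk)
  TALK_TO_SUBJECT = {1: 0, 3: 2, 5: 4, 7: 6, 11: 10, 15: 14, 119: 118}
  ARCHIVE_LIKE = re.compile(r'/(Archive|archive|GA\d|FA\d)')  # crude filter for the biggest false-positive classes

  def find_orphaned_talk(dump_path):
      """Yield (namespace, title) for talk pages whose exact subject page is missing."""
      pages = {}
      with gzip.open(dump_path, "rt", encoding="utf-8") as f:
          next(f)  # skip the page_namespace/page_title header row
          for line in f:
              ns, _, title = line.rstrip("\n").partition("\t")
              pages.setdefault(int(ns), set()).add(title)
      for talk_ns, subject_ns in TALK_TO_SUBJECT.items():
          subjects = pages.get(subject_ns, set())
          for title in pages.get(talk_ns, ()):
              if not ARCHIVE_LIKE.search(title) and title not in subjects:
                  yield talk_ns, title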

Yikes! I forgot about GA & FA nominations on article talk pages. Also, since this was a request about user talk pages, now that I think about it, there are archived user talk pages where there isn't an accompanying user page. Well, it seemed like a good idea at the time. Sigh. Liz Read! Talk! 22:06, 25 June 2021 (UTC)

Creating a list of all pages that use the "net worth" parameter in the infobox

We recently had a discussion at Template talk:Infobox person#Deprecating the net worth parameter? and it was decided to remove the parameter. Would it be possible to get a list of all the infoboxes that use the parameter? Also, could it go back to July 11 rather than the current date, since we have someone who is removing all the parameters as we speak? Thanks! Patapsco913 (talk) 16:00, 18 July 2021 (UTC)

Set up a tracking cat, tell the other editor to stop wasting their time, and let my bot handle it. Primefac (talk) 16:07, 18 July 2021 (UTC)
Thanks, the idea is that we want to move the info to the body of the article before it is all deleted. Cheers Patapsco913 (talk) 16:49, 18 July 2021 (UTC)
Well, the tracking cat is set up, and it will take some time to fill, so let me know if you actually want removal (otherwise I'll just keep an eye on the cat). Primefac (talk) 16:51, 18 July 2021 (UTC)
will do thanks for the help!Patapsco913 (talk) 16:52, 18 July 2021 (UTC)

Please remove {{pec}} and the related templates from these pages (which also use the template {{Category class}}), per the discussion at Template talk:Category class. Qwerfjkltalk 21:41, 13 August 2021 (UTC)

With <500 edits, this is better for an AWB-type task. Primefac (talk) 00:11, 14 August 2021 (UTC)
@Primefac: it is still better for this to be done with bot rights, whether it be AWB or another, so someone doesn't have to wastefully sit there and press save. — billinghurst sDrewth 04:13, 14 August 2021 (UTC)
It is something that is not codified, but over the years we have an informal consensus among BAG that for <500 edits, by the time a bot gets tested, trialled, and approved, you're pretty much done with the run. It's much faster to just do it with AWB or similar. Primefac (talk) 11:08, 14 August 2021 (UTC)
@Primefac Sorry, I didn't realise so many were transcluded via templates. I've done this myself via JWB. ― Qwerfjkltalk 12:48, 14 August 2021 (UTC)

New pictures in a Commons category

Hello. I'm a marine biologist specialized in Echinodermata. I would like to be informed of any new picture of these animals so I can review the identification and, when useful, add them to the relevant Wikipedia articles. But as there are over 7000 species of them, of course I can't check all the categories every day. I used to benefit from Ogrebot's newsfeed for a long and useful time but it is no longer working. Do you guys know any other way I could get such an upload newsfeed? Thanks and best regards, FredD (talk) 14:23, 8 June 2021 (UTC)

@FredD: What is your preferred way to be notified? Talk page message? User subpage that you have watchlisted? Ping on a user subpage? Notification? EpicPupper (he/him | talk, FAQ, contribs) 21:41, 9 June 2021 (UTC)
Hi. A user subpage would be good: here is how it worked with Ogrebot; it was working well. Cheers, FredD (talk) 06:10, 10 June 2021 (UTC)
Hi, any news, EpicPupper ? Thx, FredD (talk) 19:53, 14 June 2021 (UTC)
@FredD: Still in the concept stage, but I'll try to hammer out something later this week. BTW, please ping me when replying, or else I might not be able to see the message :) 🐶EpicPupper (he/him | talk, FAQ, contribs) 20:10, 14 June 2021 (UTC)
For future reference, I'm essentially stuck at a standstill right now, it would be appreciated if somebody else could take on this task. 🐶 EpicPupper (he/him | talk, FAQ, contribs) 17:48, 23 June 2021 (UTC)
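In case someone does pick this up: the Commons API can already list the newest files added to a category, which is the core of what the old OgreBot feed provided. A minimal Python sketch; the category and cut-off date are examples, and a real bot would recurse through the thousands of species subcategories and write the results to a user subpage:
  import requests

  API = "https://commons.wikimedia.org/w/api.php"

  def new_files_in_category(category, since):
      """List files added to a Commons category since an ISO timestamp.

      Note: this sorts by when the file was added to the category, not by upload date.
      """
      resp = requests.get(API, params={
          "action": "query",
          "list": "categorymembers",
          "cmtitle": category,
          "cmtype": "file",
          "cmsort": "timestamp",
          "cmdir": "newer",
          "cmstart": since,
          "cmlimit": "max",
          "format": "json",
          "formatversion": 2,
      }, timeout=30)
      resp.raise_for_status()
      return [m["title"] for m in resp.json()["query"]["categorymembers"]]

  # e.g. new_files_in_category("Category:Asteroidea", "2021-06-01T00:00:00Z")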

Featured topic bot

Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long overdue, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time-consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 (talk) 01:14, 4 May 2021 (UTC)

Doing... Aza24, hello friend. I started on this one tonight. You're right, this is quite complicated. Hopefully I am disciplined enough to complete this one. Feel free to ping me every once in awhile to keep me on task! I may ask you some questions once I get a little farther along. Code so far: task2-promote-topics.phpNovem Linguae (talk) 11:10, 18 May 2021 (UTC)
Is there a way to determine whether an article is a featured topic nominee vs a good topic nominee, purely from its nomination page? Example: Wikipedia:Featured and good topic candidates/Fabula Nova Crystallis Final Fantasy/archive1Novem Linguae (talk) 12:36, 18 May 2021 (UTC)
A topic has to be at least 50% featured content to be considered Featured. I guess that would be hard for a bot to figure out, right? GamerPro64 02:08, 19 May 2021 (UTC)
Many thanks for taking this up Novem! Yeah, Gamer's comment is the only way to tell—though we could probably add a parameter to the template if that won't work? Aza24 (talk) 05:26, 19 May 2021 (UTC)
Aza24, GamerPro64. Thanks for explaining how that works. I'll make a note. Work is slow but progressing. Link to GitHub.Novem Linguae (talk) 13:29, 25 May 2021 (UTC)
Hey Novem Linguae, just wanted to check in, any progress with this? The main thing needed is automatically updating the talk pages and creating Template:Featuredtopictalk—the category stuff isn't a huge deal, and we can definitely update a page like Wikipedia:Featured Topics by hand. Aza24 (talk) 14:49, 25 July 2021 (UTC)
Aza24, hey there, thanks for checking in. I got pretty busy with other wiki projects, but I'll try to take another look at this soon. The simplified requirements are a good idea. We can always add more features later. –Novem Linguae (talk) 09:43, 28 July 2021 (UTC)
Aza24, alright, I started a rewrite of the bot tonight that will just update article talk pages. We'll start simple, then add features later. 1) I spot checked a few article talk pages, I didn't see any "FTC/GTC nomination templates" as mentioned in step 4a of the promote instructions. Can you link me to a talk page that has this? 2) The next time you have an article to promote, can you ping me and I'll do the whole promote instructions for it manually, so I can learn more about how it works? Thanks. –Novem Linguae (talk) 08:18, 30 July 2021 (UTC)
Also, are there any other {{icon}}s used in the topic box besides FA, GA, FL? –Novem Linguae (talk) 09:31, 3 August 2021 (UTC)
@Aza24, GamerPro64, and Sturmvogel 66:. Alright, I'm making good progress now. The bot does step 2 and most of step 3 of your checklist. I'll keep adding steps, and I think we can BRFA in a week or two. Here's some screenshots. [13]. For the trigger, I plan to have it listen for the template {{User:NovemBot/Promote}} placed on topic candidate discussion pages. –Novem Linguae (talk) 09:57, 6 August 2021 (UTC)
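For the trigger described above, one way the scan could look; a sketch only, using the API's embeddedin list, and not necessarily how NovemBot ends up doing it:
  import requests

  API = "https://en.wikipedia.org/w/api.php"

  def pages_with_promote_trigger():
      """List pages that transclude {{User:NovemBot/Promote}}."""
      resp = requests.get(API, params={
          "action": "query",
          "list": "embeddedin",
          "eititle": "User:NovemBot/Promote",
          "einamespace": 4,  # Wikipedia: namespace, where the topic candidate pages live
          "eilimit": "max",
          "format": "json",
          "formatversion": 2,
      }, timeout=30)
      resp.raise_for_status()
      return [p["title"] for p in resp.json()["query"]["embeddedin"]]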
Great to hear; apologies on my slow responses. There shouldn't be any icons other than the three you've listed. The nomination templates have slowly become out of use, but see Talk:1998 FA Charity Shield for a current one—I almost wonder if they're needed at all. I will try and ping sometime this weekend when I promote another. My work has just slowed down after yesterday, so I should have more time to help with this. Aza24 (talk) 21:15, 7 August 2021 (UTC)
Aza24, things are going well. The bot now does steps 2-5 and step 9. Next round of questions:
  • are topic page names always the same as the lead article's name?
  • If a talk page is missing an {{Article history}} template, what other templates that need to be converted are likely to be present? {{GA}}... any others?
  • Is there a way to figure out what topic type a Featured Topic should go in on the Wikipedia:Featured topics page?
  • Is there a way to figure out what topic type a Good Topic should go in on the Wikipedia:Good topics page? {{article history|topic=}} looked promising, but it appears to use different topic types. https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Good_articles/Project_quality_task_force#Good_article_topic_values
  • Is there ever a situation where archive2, archive3, etc. are used?
Novem Linguae (talk) 23:50, 11 August 2021 (UTC)

Responding in order:

Unknown Infobox parameters

Hi there, I've been occasionally trying to chip away at the articles in the Category:Infoboxes with unknown parameters. Some of these categories are... beefy, to say the least; some of the standouts are Film with 11.2k, German location with 9.8k, Officeholder with 3.5k, Organisation with 3k, Scientist with 7.4k, Settlement with 10.5k and Sportsperson with 3.7k. Currently, the only way to really tell which parameter is causing the issue is to attempt to edit the article and either check the infobox parameters or save the preview so it appears at the top of the page. You are at least given an idea of what to look for by the sortkey it appears under in the category, but I don't think this works properly when there are multiple errors, which results in having to consult the preview regardless. I'm not certain, but I feel like a lot of these, especially, are either deprecated parameters that PrimeBOT might be able to handle or simple misspellings or other issues that could be handled with AWB or other similar tools, such as missing underscores and dashes, and alt text and image sizes being separated with a pipe.

Since this requires information to be grabbed, it seemed like a bit more than an SQL query would be necessary, so I was thinking of some sort of bot that could generate a report, maybe one that could be linked to from the category page. I'm not thinking of anything too complicated (in my uneducated opinion, at least): just something that lists the page in the left column and the broken parameter in the right. You could sort both columns (so by article title or broken parameter), and this would make it much easier to see where there is a great amount of overlap in broken parameters, to more speedily clear out these categories.

I hope that makes sense. It would hopefully assist the relevant WikiProjects in cleaning up their respective articles as well, and potentially allow for these parameters to either be added as aliases or, if there is significant usage within a template, maybe even have an underused parameter modified to call an already existing name used in the majority of cases. Thank you if anyone is willing or able to help out! --Lightlowemon (talk) 12:22, 28 June 2021 (UTC)

@Lightlowemon: While a bot report wouldn't necessarily be bad, it actually is possible to do almost what you want with an SQL query. Most of the infobox-bad-parameter tracking categories are configured to use the first bad parameter as the sort key, so something like quarry:query/37156 works. If there are multiple bad parameters on the same page, however, this method will underestimate the issue. Vahurzpu (talk) 12:57, 28 June 2021 (UTC)
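The same sort-key trick also works through the API, which can be handier than SQL for building a linked, sortable report page. A quick Python sketch; the category name is just one example, and like the query it only sees the first unknown parameter per page:
  import requests
  from collections import Counter

  API = "https://en.wikipedia.org/w/api.php"

  def bad_params(category):
      """Yield (title, first unknown parameter) for a tracking category."""
      params = {
          "action": "query",
          "list": "categorymembers",
          "cmtitle": category,
          "cmprop": "title|sortkeyprefix",
          "cmlimit": "max",
          "format": "json",
          "formatversion": 2,
      }
      cont = {}
      while True:
          resp = requests.get(API, params={**params, **cont}, timeout=30)
          resp.raise_for_status()
          data = resp.json()
          for member in data["query"]["categorymembers"]:
              yield member["title"], member.get("sortkeyprefix", "")
          if "continue" not in data:
              break
          cont = data["continue"]

  # e.g. Counter(k for _, k in bad_params("Category:Pages using infobox film with unknown parameters")).most_common(20)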
@Vahurzpu: Huh, that is pretty much what I was looking for, I wasn't sure an SQL query would show me what I was after, looking at the example you used which can't show multiple parameters, it seems several of the parameters you selected not to show were used on the remaining pages not listed, which would shed the light I was looking for. CO-OP (podcast) for example has atom, width, audio_format, rss_other and atom_other parameters, all of which will call the error category. It's good that most of this documentation on the talk page shows most of their removals (better listed on the main page or the category page IMO, for purposes like this), but like you identified it can underestimate the issues. This is helpful all the same though and a useful interim at least for now while I conquer the smaller categories, thank you. --Lightlowemon (talk) 13:25, 28 June 2021 (UTC)
@Lightlowemon: You may also be interested in User:PrimeHunter/Sortkeys.js which only requires a click on a category page. It would be more useful if it could display the sort keys next to the links on the category page but my JavaScript is extremely limited. PrimeHunter (talk) 14:14, 28 June 2021 (UTC)
The monthly report linked from the Template Data section in the documentation can also be helpful. – Jonesey95 (talk) 15:46, 28 June 2021 (UTC)
@PrimeHunter:, the script works very similarly to the Quarry query, in that it only shows one of the broken parameters; I've added it anyway, as it'll be good for the short ones. Many thanks. @Jonesey95:, the Template Data report is actually super helpful and captures all those multiple cases as well as telling you how many are present, which is what I was looking for. Admittedly, some templates seem to not be documented quite properly, but hopefully I can fix that stuff as I go (Basketball Player and German Place were the two large ones I checked). Thank you everyone for your help and suggestions!
@Primefac: Would you be willing to run PrimeBOT over these infoboxes? Thanks! GoingBatty (talk) 18:10, 28 June 2021 (UTC)
As always, if I'm given a template and a list of parameters that need culling (the more specific the better) then I am happy to do so. Best place for such things is my userpage. Primefac (talk) 22:14, 29 June 2021 (UTC)
@Lightlowemon: Would you be willing to provide this information to @Primefac: on their talk page? GoingBatty (talk) 04:02, 30 June 2021 (UTC)
As I find them I can send them over, thanks to you both. — Preceding unsigned comment added by Lightlowemon (talkcontribs) 08:37, 30 June 2021 (UTC)

Legally mandated deletion of all online news content by German state broadcasters

All German state broadcasters have to follow a 2009 law requiring them to delete all online content after a year, so as not to "disadvantage" commercial news corporations. ("12. Rundfunkänderungsstaatsvertrag" [de], 1 June 2009)
This has big consequences for Wikipedia articles that cite news from German state broadcasters: it means legally mandated, automatic link rot for such sources. I suggest a bot that recognizes when such a broadcaster is cited, automatically requests a save point from the Internet Archive, and then links the save point in the ref.

Also see Depublication [de]: the whole German article is about this novel concept brought about by the 2009 law. --ΟΥΤΙΣ (talk) 00:26, 27 June 2021 (UTC)

Do we have a way to find and recognize all of these sources? Are we just searching for certain URL prefixes? HighInBC Need help? Just ask. 00:28, 27 June 2021 (UTC)
I would guess, anything from the websites of the broadcasters on that list? DS (talk) 00:46, 27 June 2021 (UTC)
This sounds like something that either GreenC or Cyberpower678 would best be able to get their archiving bots to work. Primefac (talk) 01:14, 27 June 2021 (UTC)
Agree 🐶 EpicPupper (he/him | talk, FAQ, contribs | please use {{ping}} on reply) 01:20, 27 June 2021 (UTC)
Thanks for the friendly reception of my idea, so far. It just came to my mind that well known news magazines like Tagesschau that have their own, independent websites (tagesschau.de) should also be scanned for, as their online news content regularly gets deleted per that law, too. I'm not sure how many else there are, but there may be quite a number. Will look to create a list of those tomorrow. --ΟΥΤΙΣ (talk) 01:41, 27 June 2021 (UTC)
I could also email ARD and ZDF (which serve as umbrella organisations for smaller affiliates, and should cover most if not all public broadcasters in Germany) and ask them to provide a list.
Another thought: Since the 2009 law implemented an EU policy, this affects Austrian state broadcasters, too. Although I think I read that ORF tends to ignore its online implementation. (Not sure about that last part.) --ΟΥΤΙΣ (talk) 01:48, 27 June 2021 (UTC)
How is it being handled at dewiki? -- GreenC 02:43, 27 June 2021 (UTC)
@GreenC: I'm still looking a bit, but it looks like this was touched on back in 2010, at https://de.wikipedia.org/wiki/Wikipedia:Bots/Anfragen/Archiv/2010-2#tagesschau.de_et_al_–_Inhalte_"verschwinden"_→_Webcite?, and "tagesschau.de"_etc. https://de.wikipedia.org/wiki/Wikipedia:Fragen_zur_Wikipedia/Archiv/2010/Woche_43#Neuerdings_viele_tote_Links_durch_"tagesschau.de"_etc.. !ɘM γɿɘυϘ⅃ϘƧ 04:11, 27 June 2021 (UTC)
FWIW, my understanding is that behind the scenes, any time we add an external link to WP, there are automatic processes that call the Internet Archive and make sure the link is archived, and User:InternetArchiveBot can then use this information to populate archive links. In other words, we technically need to do nothing. --Masem (t) 04:46, 27 June 2021 (UTC)
Are you sure this is working reliably, Masem?
The immediate cause for my proposal was that I discovered a "lost" 2019 WDR article from this April 2020 edit.
In this specific case a save point was generated at the Internet Archive in March 2021, but the only thing archived there is an error message citing the 2009 law mandating the article's removal.
Thank you SQL, for digging out those old German bot discussions.
My translation of the gist of your two links: Croesch (talk · contribs) had a working bot in 2010 that linked affected online articles to their archived version on the archive depub.org.
An option to also offer archival at the Internet Archive was suggested, but never realized. When depub.org shut down at the end of 2010, the project was dead and not revived thereafter. (At least there is nothing new in your two links, after 2010.) --ΟΥΤΙΣ (talk) 07:40, 27 June 2021 (UTC)
You can read more at WP:PLRT, and as that page states "(nb. in practice not every link is getting saved for various reasons)." Whether that case for that German work was exceptional, or represents a situation across all German works, I don't know. --Masem (t) 13:05, 27 June 2021 (UTC)
I'd like to find out if this does substantially affect English Wikipedia, after all, or not.
You could use a bot/script to find answers for the following questions:
  • How many referenced news articles (from the above list of German public broadcasters) older than a year have not been linked along with an archived version? These are probably all dead by now.
  • How many referenced news articles (from the above list of German public broadcasters) older than a year have been linked along with an archived version? These should all be considered safe except for those in the following question.
  • Within these cases: In how many of these is the time span between publication and archival greater than one year? These would all likely have archived an error message, like in my above example.
With these numbers you could judge whether a substantial problem exists with such references at this time, or not.
Could someone code a small bot/script that pulls this data from the servers using SQL and does the described analysis? (I could do it myself, I suppose, but I haven't done such queries on WP data and would need someone to guide me.) --ΟΥΤΙΣ (talk) 15:46, 4 July 2021 (UTC)
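For the first two questions, a scan of the citation templates is probably enough; here is a rough Python sketch using mwparserfromhell on an article's wikitext. The domain tuple is a small placeholder standing in for the full broadcaster list discussed above, and the one-year comparison for the third question would still need the |date=/|archive-date= values:
  import mwparserfromhell

  # Placeholder domains; the real list would cover every ARD/ZDF broadcaster site.
  DEPUB_DOMAINS = ("tagesschau.de", "wdr.de", "ndr.de")

  def broadcaster_refs(wikitext):
      """Yield (url, has_archive) for each citation pointing at a listed domain."""
      for template in mwparserfromhell.parse(wikitext).filter_templates():
          name = str(template.name).strip().lower()
          if not name.startswith("cite") or not template.has("url"):
              continue
          url = str(template.get("url").value).strip()
          if any(domain in url for domain in DEPUB_DOMAINS):
              yield url, template.has("archive-url") or template.has("archiveurl")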

FfD notices on article talk pages

Pinging User:Marchjuly. We had a short discussion at Wikipedia talk:Files for discussion#Notifying uploaders about a bot that could leave FfD notices on the talk pages of all articles that use the nominated image. I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?), and having one for local files would be beneficial for all the same reasons (more participation at FfD, having a record in the article talk history, and general notification and discussion transparency purposes). — Goszei (talk) 23:31, 2 July 2021 (UTC)

@Goszei Can you create a template message first that the bot can place on the talk page? I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?) That's Community Tech Bot [14]SD0001 (talk) 15:47, 5 July 2021 (UTC)
If a bot can be created for this, then that would be great. However, it probably would be a good idea to make sure there are no unexpected issues (e.g. conflicts) with other bots going around patrolling file use. Perhaps Fastily and JJMC89, who both operate bots working in the file namespace, can see if this could be made to work for FFD. As for the wording of the template, perhaps something similar to {{Missing rationale2}} or {{Ffd notice}} could work for the notification message? -- Marchjuly (talk) 21:58, 5 July 2021 (UTC)
I think that {{Ffd notice}} can be modified by replacing the phrase "A file that you uploaded or altered" with "A file used in this article". Maybe a switch case can be added, and the bot can use the activating parameter when doing subst. — Goszei (talk) 22:05, 5 July 2021 (UTC)
That sounds fine, but I don't know much about how bots work. One thing to consider here is that only the notification of the file's uploader seems to be required when it comes to FFD. Templates like {{ffdc}} can be used in articles, but they seem to be optional. — Marchjuly (talk) 23:18, 5 July 2021 (UTC)
Not a good task for a bot IMO. Given how little traffic the majority of article talk pages get, I think this would clutter these pages up with (unread) bot spam; if you're a new(-ish) contributor who's stumbled across an article talk page for the first time only to find it filled with deletion warnings, that's a terrible user experience. However, I could see this being useful at the WikiProject level (as an opt-in only service). Also noting for the record that FastilyBot automatically notifies uploaders (if they haven't explicitly opted out) of FfD & dated-CSD tags. -FASTILY 22:14, 6 July 2021 (UTC)
Well, even if a talk page gets little traffic it's on the watchlist of people who have the article watchlisted (& are presumably interested in knowing if one of the images will go away). That said, I think tagging the images in the article with {{ffdc}} is IMO better. Perhaps also having a bot that removes these tags once a FFD is closed? Jo-Jo Eumerus (talk) 08:10, 7 July 2021 (UTC)
Mentioned this above, I think the drawbacks of leaving bot spam on individual talk pages outweigh any (perceived) benefits. Also, adding {{FFDC}} is non-trivial from a programming perspective. Media files can be displayed/embedded in a variety of ways (e.g. infoboxes, galleries, thumbnails, other templates I'm not thinking of) and adding {{FFDC}} as a caption in the correct location for each of these scenarios could be extremely tricky for a bot. To be clear, I don't think this would be a bad thing to have, but I do believe the reward to effort ratio is very low. OTOH, removing {{FFDC}} when FfD's are closed is much more straightforward. If there's consensus for it, I can build it. -FASTILY 21:54, 7 July 2021 (UTC)
As pointed out by Fastily, adding the ffdc template can be tricky. Many images which end up at FFD don't have captions; so, the person adding the template actually needs to add a |caption= or do some other tweak to the file's syntax to add the ffdc template. I have also noticed that ffdc templates are sometimes removed by editors who don't think the file should be deleted; they seem to misunderstand the purpose of the template and mistake it for a speedy deletion template of sorts. I never really considered any possible article talk page spamming effect a bot might have, but that does seem like a valid point now that it's been made. I can see how not only new editors, but even long-term but not very experienced editors (i.e. long-term SPAs), might find the templates "shocking" in some way. Maybe adding them to a WikiProject talk page would be better as well, since the editors of a WikiProject are less likely to be shocked by such templates. Even better might be to figure out a way to incorporate WP:DELSORT into FFD discussions, since many WikiProjects already have "alert pages" where their members can find out about things like PRODs, SPEEDYs and XFDs.
There's always going to be people unhappy when a file is deleted; so, there's no way around that. Many times, though, these people claim they weren't properly notified at all or not in enough time to do something about it, and there might be some way to help mitigate that. I'm also a little concerned about comments such as this where some editors nominating files for FFD might be relying too heavily on bots for things like notifications. For example, the wording of {{FFD}} states as follows: Please consider notifying the uploader by placing {{subst:ffd notice|1=Ffd}} on their talk page(s). That, however, seems a bit inconsistent with the instructions given in the "Give due notice" section at the top of the main FFD page and this might be causing some confusion. I don't use scripts, etc. whenever I start an FFD and do everything manually; this is a bit more time intensive perhaps, but I think it also might lead to less mistakes because you have to check off a mental list of items before the nomination is complete. Those who do rely on scripts or bots to do this type of thing though, might set the bot up to do only the bare minimum that is required; they're not wrong per se, but the automation might cause some things to be left out that probably shouldn't be left out. So, before any new bot is created and starts doing stuff, it might be better to figure out exactly what a human editor should be required to do first. -- Marchjuly (talk) 22:44, 7 July 2021 (UTC)
Aren't almost all non-free files only used on one or two pages? Also, FfD noms are rather rare occurrences in the first place. I don't think the valid concern about talk page spam outweighs the need here for due notification, considering firstly the rareness and secondly that deletion-related things should always have plenty of notification. — Goszei (talk) 00:53, 8 July 2021 (UTC)
Perhaps you could elaborate on why notifications at the WikiProject level (which are usually heavily watched, mind you) are insufficient? (and by the way we do have an existing solution for this) While I think our notification system could always use improvements, I do think that carpet-bombing individual talk pages with bot spam is a grossly inappropriate solution to this problem. -FASTILY 01:34, 8 July 2021 (UTC)
I think funnelling notifications for rare noms at hundreds/thousands of articles under a WikiProject would be more likely to create spam than the same rare notifications at individual talk pages. If a given article was under many WikiProjects, even more so. It also doesn't seem like the right level of notification; for example, it would be overkill to notify the Music project about any given album cover being nommed for deletion. The people watching the album's article would care a lot more (also, those article watchers may not be watching the WP Music talk page). — Goszei (talk) 01:42, 8 July 2021 (UTC)
I'd note the dab bot that seems to frequently spam talk pages, so I agree having them stick around isn't great. If this is done I'd advise bot removal of tags after discussion closure. ProcrastinatingReader (talk) 10:54, 7 July 2021 (UTC)

Red links to AfD deleted subjects

The bot action would be to check the 'what links here' page of articles that have been deleted at WP:AfD (and are still deleted), report/list any that have incoming links from main space articles, and provide/update a list for the project Wikipedia:WikiProject Red Link Recovery.

There should not be any redlinks to articles that have been deleted through the AfD process; per C.1, "If the article can be fixed through normal editing, then it is not a candidate for AfD."

People would go through the list and make decisions about how to fix each one: maybe it should be a redirect, maybe the redlinks need to be unlinked, maybe something else...

This is discussed on the project page. A bot seems like the best solution.

Because there are several avenues by which these might get addressed, it seems like the best solution would be something that updates regularly, so corrected subjects fall off the list and new subjects get added.

Jeepday (talk) 15:37, 22 June 2021 (UTC)

I can have a go at this. Pi (Talk to me!) 14:41, 24 June 2021 (UTC)
Coding... Made some progress, should get it finished over the weekend Pi (Talk to me!) 22:41, 24 June 2021 (UTC)
Just because an AfD has closed as delete, one should not assume that all consequent redlinks should be unlinked; for example, they may point to a different potential topic of the same name. Surely delinking is a choice for the closing admin? ϢereSpielChequers 13:12, 27 June 2021 (UTC)
Yeah, Pi, save your effort for something else, I doubt this would pass a BRFA without a pretty strong consensus and more specifics. Primefac (talk) 13:48, 27 June 2021 (UTC)
WereSpielChequers & Primefac We are asking for a bot to identify where the red links exist for articles deleted by AfD, not to automatically remove the links. A person needs to decide what to do with each link: it might need a redirect that was not added, it might need a stub built, it might need to be unlinked. Jeepday (talk) 13:12, 29 June 2021 (UTC)
Do you just want a list to be posted to the WikiProject page with the recently AfDd pages, and the pages that link in? Pi (Talk to me!) 16:13, 29 June 2021 (UTC)
Would it be useful and feasible to list redlinks to redirects which used to target the deleted page? I'm thinking of an AfD for Joe Busker, where List of kazoo players links to ex-page Joseph Busker, a redirect deleted per G8. I expect it would require collection in advance of deletion while the targets are still known, and might be too much work. Certes (talk) 17:04, 29 June 2021 (UTC)
Oh, I misunderstood. If you're just creating a list, then go for it. Primefac (talk) 19:36, 29 June 2021 (UTC)
Pi We would want more than recent, as there are likely years of old things that need to be looked at. I am not sure what the best solution is for listing, but maybe adding a subpage to the project or something. The bot would want to rerun periodically, either removing things that no longer meet the criteria or scrapping the old list and making a new one.
* My logic runs something like this
* Check AfD deletes
* If deleted by AfD AND the article is still deleted; Continue else check next AfD
* Check what 'links here' for the article deleted by AfD; if there are links to any main space pages (there will always be links to the AfD and other stuff), add to list, else check next AfD
* Check next AfD
That is what I am thinking, does it make sense? Jeepday (talk) 14:41, 30 June 2021 (UTC)
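For illustration, a minimal sketch of the loop described above, assuming Pywikibot and assuming the list of AfD-deleted titles has already been gathered separately (e.g. from deletion log entries citing an AfD); it only performs the exists/what-links-here check and prints candidates for the report:

  # Sketch: list mainspace pages that still link to titles deleted at AfD.
  # How `deleted_titles` is gathered (deletion log, AfD archives, etc.) is
  # left out here and would need its own logic.
  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  deleted_titles = ['Example deleted article']  # placeholder input

  for title in deleted_titles:
      page = pywikibot.Page(site, title)
      if page.exists():  # recreated since the AfD, so skip it
          continue
      # Mainspace pages still linking to the red link (namespace 0 only,
      # so the AfD page itself and other project pages are ignored).
      linkers = [p.title() for p in page.backlinks(namespaces=[0])]
      if linkers:
          print(title, '<-', ', '.join(linkers))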
Certes That might be difficult to code. I would hope/expect that the person addressing redlinks would see the incoming redirect and address it. But if Joseph Busker was not addressed by the AfD, and if Joe Busker was not a Kazoo player, it is probably best to just ignore Joseph Busker. Each will have to be addressed uniquely. Jeepday (talk) 14:41, 30 June 2021 (UTC)
I've done some work on this and I think I've got there in principle. Can you have a look at User:Pi/Sandbox/Bot for an idea of what the output could be?
  • I am aware that quite a lot of the links in are from templates. I'm going to make a change to it so that it lists templates separately instead of listing all the pages containing that template.
  • I have only made the bot look at the last couple of weeks of AfDs, but I will also be able to import the historic data. I think running it daily would work in the future.
  • Can you have a think about whether this is what you want, and give any feedback. I'm actually going on holiday tomorrow, walking through some hills so I won't be able to work on this until I get home on Friday (16th), but I'll be on my phone so I can respond here. Pi (Talk to me!) 00:43, 3 July 2021 (UTC)
Pi that looks very good. I think you have everything I was looking for. Jeepday (talk) 13:16, 5 July 2021 (UTC)
Great, I'm just going to make the code a bit more robust (error handling etc.) and fix the template thing. Hopefully I'll be able to put in a bot request in the next couple of days. Pi (Talk to me!) 22:05, 8 July 2021 (UTC)

Automatically setup archiving

I frequently come across pages that have dozens of sections, many of them from years ago, and many of them being bot-posted "External links modified" sections. I think very long talk pages, especially when most of the content is not very relevant, are less usable. Most new users won't know how to set up bot archiving. Would it be reasonable for a bot to automatically set up archiving on pages that meet certain criteria (length/age related), using a standard configuration with the default age depending on how active the talk page tends to be? ProcrastinatingReader (talk) 17:16, 11 July 2021 (UTC)

This is more of a fact-finding than any sort of "not enough of an issue" question, but do we have any metrics on talk page lengths, similar to Special:LongPages (would LongPages show a Talk page if it were large enough?)? I think >100k would be an issue, for example. Primefac (talk) 17:28, 11 July 2021 (UTC)
Per quarry:query/56718, there are 2,520 talk pages that exceed that threshold. The very top ones seem to just be extraordinarily active talk pages rather than unarchived ones (for instance, Talk:Investigations into the origin of COVID-19 is #2 at the moment), but going down a bit further gives some where no one bothered to set up archiving. Vahurzpu (talk) 18:11, 11 July 2021 (UTC)
So I suppose we would take that list, remove any that are calling either {{User:MiszaBot/config}} or whatever ClueBot's template is, and then have a grand old debate about which bot, which timeframe, how many threads (i.e. how every parameter should be set) and then implement? Then I guess we just go through and see what's left in that quarry (e.g. Talk:Piracy, which is now 178k smaller due to the removal of an article) and potentially clean them up too. Primefac (talk) 18:40, 11 July 2021 (UTC)
quarry:query/56719 filters to pages that don't have either of the common automatic archiving templates. 2,397 of those. Vahurzpu (talk) 19:36, 11 July 2021 (UTC)
Nice. Standard "default" settings are 6 months (180 days), archiving 2+ and keeping at least 4, at least for the MiszaBot settings (and yes, I know it has a new bot name, but lowercasesigmabot II is just too damn long). I have always found that bot to be more user-friendly than ClueBot's let's-calculate-days-in-minutes nonsense.
From a BAG perspective, I don't really see anyone complaining about this sort of task, other than my perennial opposition to the fact that auto archiving messes with notifications. Primefac (talk) 20:06, 11 July 2021 (UTC)
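For illustration only, a hedged sketch of what a one-off setup run might prepend to a qualifying talk page, using the defaults mentioned above (180 days, archiving 2+, keeping at least 4); the parameter names follow the {{User:MiszaBot/config}} format used by lowercase sigmabot III, and the page list is assumed to be an export of the Quarry results:

  # Sketch: prepend a default archiving configuration to unarchived talk
  # pages. Values mirror the defaults discussed here; a real run would also
  # check for ClueBot III configuration, not just MiszaBot's.
  import pywikibot

  CONFIG = (
      '{{User:MiszaBot/config\n'
      '|archiveheader = {{Automatic archive navbox}}\n'
      '|maxarchivesize = 150K\n'
      '|counter = 1\n'
      '|minthreadsleft = 4\n'
      '|minthreadstoarchive = 2\n'
      '|algo = old(180d)\n'
      '|archive = %(page)s/Archive %%(counter)d\n'
      '}}\n\n'
  )

  site = pywikibot.Site('en', 'wikipedia')
  with open('unarchived_talk_pages.txt') as f:  # assumed export of the Quarry list
      for line in f:
          title = line.strip()
          if not title:
              continue
          page = pywikibot.Page(site, title)
          if 'MiszaBot/config' in page.text:  # already configured, skip
              continue
          page.text = CONFIG % {'page': page.title()} + page.text
          page.save(summary='Setting up automatic archiving (trial)')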
A note regarding "External links modified" sections. Those sections can be deleted per the note in the message itself. Gonnym (talk) 18:38, 11 July 2021 (UTC)
Just noting that I have plans on making an archiving bot that is a lot more user friendly and solves problems with current bots. This would involve getting consensus for default configurations just like would be required for this bot. If a discussion is started on that topic I feel it should be applicable for both bots. Also (unfinished) plans can be found at User:Trialpears/Archiving manifesto. --Trialpears (talk) 22:04, 11 July 2021 (UTC)
Trialpears, if you want assistance with this or a co-maintainer - ping me :) firefly ( t · c ) 22:09, 11 July 2021 (UTC)
I will take you up on that! This will definitely be a task that benefits from 2 maintainers. --Trialpears (talk) 22:13, 11 July 2021 (UTC)

Is there a bot to remove a nonexistent category from many articles?

The category page Category:Characters adapted into the Marvel Cinematic Universe has been correctly proposed for speedy deletion as it was previously deleted as a result of a prior discussion.

Can I safely delete that page assuming that a bot will notice and go through all 400+ pages that include this category, and remove it? Or is there an existing bot for which I need to queue up a request? ~Anachronist (talk) 20:58, 10 September 2021 (UTC)

I believe it would be listed at Wikipedia:Categories for discussion/Working, since it is based on an old discussion, at which point the bot would remove extant uses. Primefac (talk) 20:59, 10 September 2021 (UTC)
Thanks, I listed it and then noticed that everything has been taken care of already. The category is empty and it is gone. ~Anachronist (talk) 22:36, 10 September 2021 (UTC)

Bot for welcoming new users

I want to create a bot for welcoming new users. King Molala (talk) 08:41, 8 September 2021 (UTC)

King Molala, this task is on the list of frequently denied bots unfortunately. firefly ( t · c ) 08:48, 8 September 2021 (UTC)

ISBN hyphenation

This is a pretty minor task, but throwing it out here, as it'd be very doable via bot. There are many ISBNs on Wikipedia that lack proper hyphenation. https://www.isbn.org/ISBN_converter always knows how to fix them, but it'd be nice to have a bot on-wiki do it instead. Whether or not we'd also want to switch to using a non-breaking hyphen at the same time is something to consider, given that it doesn't seem we'll be able to use a no-wrap to prevent unwanted line breaks. Alternatively, if this is too cosmetic, we could find a way to add it to the WP:GENFIX set. {{u|Sdkb}}talk 21:41, 1 July 2021 (UTC)
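As a rough sketch of the hyphenation step itself (not the editing or consensus side), assuming the third-party isbnlib package, whose mask() helper applies the same ISBN range table that isbn.org's converter uses:

  # Sketch: hyphenate bare ISBNs in a chunk of wikitext.
  # Assumes the isbnlib package; mask() inserts the hyphens defined by the
  # range table and returns an empty value if it cannot.
  import re
  import isbnlib

  def hyphenate_isbns(wikitext):
      def repl(match):
          masked = isbnlib.mask(match.group(2))
          return match.group(1) + (masked or match.group(2))
      # Only touches the plain "ISBN 0306406152" / "ISBN 9780306406157" forms.
      return re.sub(r'(ISBN\s+)(97[89]\d{10}|\d{9}[\dXx])', repl, wikitext)

  print(hyphenate_isbns('ISBN 9780306406157'))  # e.g. ISBN 978-0-306-40615-7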

I don't think there is consensus for such a mass change. Also, if you don't want ISBNs to wrap, why would you add hyphens to them? Non-breaking hyphens will probably break the validation scheme: ISBN 123456789X, ISBN 123‑456789X Parameter error in {{ISBN}}: invalid character. – Jonesey95 (talk) 04:24, 2 July 2021 (UTC)
It seems it's the validation scheme that's broken, then, as formally ISBNs are supposed to be hyphenated. The scheme allows for hyphen-minus (as in ISBN 123-456789X), but not genuine hyphens or non-breaking hyphens. {{u|Sdkb}}talk 06:00, 2 July 2021 (UTC)
The hyphen-minus is one variety of "genuine hyphen", whatever that means. It's the one on the keyboard, which is why it is used in ISBNs, hyphenated words, and other normal typing. See its description on the page you linked above. If you think that something is wrong with ISBN validation, I suggest Template talk:ISBN as a forum, not the bot requests page. – Jonesey95 (talk) 13:12, 2 July 2021 (UTC)
What would be the benefit, here, other than conforming to the International ISBN Agency's formal recommendations? Do you think that this increases readability?
When working with citations in the past, I have always given numbers without hyphens. Reasoning that a pure number is hard to misunderstand by man and machine alike, while hyphens add an (unneeded) extra level of information and possible misunderstanding. --ΟΥΤΙΣ (talk) 16:01, 4 July 2021 (UTC)
Well if ISBNs are meant to be used in reference, I believe all print books are hyphenated, so it would make sense to follow that here. I think most online databases stick to hyphenation as well. Perhaps something about this script could be altered to a bot—albeit probably without the "Convert ISBN10 to ISBN13" function. Aza24 (talk) 16:09, 4 July 2021 (UTC)
Courtesy pinging creator of script AntiCompositeNumber. {{u|Sdkb}}talk 23:08, 12 July 2021 (UTC)
Not really a great task for a bot, at least without broader consensus first. Currently there is no consensus as to whether ISBNs should be hyphenated, only that it should generally be done consistently within an article. On the technical side, it wouldn't be difficult to turn hyphenator into a bot. --AntiCompositeNumber (talk) 01:39, 13 July 2021 (UTC)
Note that the how-to guide Wikipedia:ISBN#Types states "Use hyphens if they are included, as they divide the number into meaningful parts". GoingBatty (talk) 03:48, 13 July 2021 (UTC)

Bilateral relations SD's

In 2019, the ~6,000 articles on bilateral relations were given short descriptions in the format of "Diplomatic relations between the French Republic and the Islamic Republic of Pakistan", with full country names, by Task 4 of User:DannyS712's User:DannyS712 bot (BRFA here). These are way over the 40-character instruction in WP:SDFORMAT, for little utility in information conveyed. I propose that another task be run where the SD's are all removed, so that an automatic short description like "Bilateral relations" can be added to Template:Infobox bilateral relations. — Goszei (talk) 23:45, 5 July 2021 (UTC)
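For illustration, a minimal sketch of the removal step, assuming the affected articles are taken from the transclusion list of {{Infobox bilateral relations}} and that only short descriptions matching the 2019 bot-added wording are touched:

  # Sketch: strip the verbose 2019-style short descriptions so the automatic
  # one supplied by the infobox takes over. Anything not matching the
  # "Diplomatic relations between..." wording is left alone.
  import re
  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  infobox = pywikibot.Page(site, 'Template:Infobox bilateral relations')
  pattern = re.compile(
      r'\{\{[Ss]hort description\|Diplomatic relations between[^}]*\}\}\n?')

  for page in infobox.getReferences(only_template_inclusion=True, namespaces=[0]):
      new_text = pattern.sub('', page.text)
      if new_text != page.text:
          page.text = new_text
          page.save(summary='Remove redundant short description; '
                            'now supplied by {{Infobox bilateral relations}}')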

Agreed. If you need somebody to do a quick bot run once you have consensus, let me know if I can help. MichaelMaggs (talk) 03:57, 6 July 2021 (UTC)
I’d be happy to take on this task with an AWB bot. I’ll see if consensus can be reached through a BRFA. 🐶 EpicPupper (he/him | talk, FAQ, contribs | please use {{ping}} on reply) 18:43, 7 July 2021 (UTC)
The trickiness of fixing this is part of why I think it's a lot better to generate automatic short descriptions through infoboxes than enter them individually via AWB or a bot or something else. See also don't repeat yourself. Looking at some of the current examples, Diplomatic relations between the French Republic and the Republic of Iraq (73 characters) at France–Iraq relations is definitely way too wordy. I'm not super keen on "bilateral relations", as many people don't know what "bilateral" means, and there's no indication that what we're talking about is diplomatic relations, not some other type, which is the only way I could really see these titles needing any clarification. Something like Bilateral diplomatic relations (30 characters) might be good. {{u|Sdkb}}talk 23:23, 12 July 2021 (UTC)
User:MichaelMaggs User:EpicPupper It seems like there is consensus for this (I notified WikiProject International relations about this discussion one week ago, and Wikipedia talk:Short description two weeks ago). I will let you two sort out who should do the bot run. — Goszei (talk) 03:32, 19 July 2021 (UTC)
EpicPupper, I won't have a lot of free time over the next couple of weeks, so happy to leave the task to you. MichaelMaggs (talk) 09:08, 19 July 2021 (UTC)
Note: I have added an automatic short description of "Bilateral diplomatic relations" to Infobox bilateral relations in this diff: Special:Diff/1034308039. — Goszei (talk) 03:55, 19 July 2021 (UTC)
I will be willing to do this one if no one else does. – Ammarpad (talk) 13:51, 19 July 2021 (UTC)
If they're all transcluding {{infobox bilateral relations}} and just need the shortdesc replaced with an auto-shortdesc (i.e. removed), my bot can handle it. Primefac (talk) 17:13, 19 July 2021 (UTC)
Sure. Something already approved seems better than opening a new BRFA. 🐶 EpicPupper (he/him | talk, FAQ, contribs 18:30, 19 July 2021 (UTC)
User:Primefac That sounds like it will do the job. — Goszei (talk) 19:18, 19 July 2021 (UTC)

Y Done Just noting that PrimeBOT took care of this task circa July 24. Thanks to Primefac. — Goszei (talk) 01:30, 21 August 2021 (UTC)

Urgent request from Wikimedia Foundation Human Rights Lead

I've removed this thread, pending review from the Oversight Team. Somewhere like BOTREQ isn't the best place for "here's a huge privacy issue". At this exact point in time I will be suppressing it, but I am also putting this up for review by the OS team, and we will determine whether this thread is acceptable to keep the discussion going, if it should just live in the history, or if it should stay suppressed. Primefac (talk) 18:06, 25 August 2021 (UTC)

@Primefac: please keep it suppressed. GreenC sent me an update on the situation. It seems the ticket has also been removed from OTRS. ~Anachronist (talk) 19:37, 25 August 2021 (UTC)

Bot for welcoming.

I want to create a bot for welcoming new users. — Preceding unsigned comment added by Tajwar.thesuperman (talkcontribs) 18:50, 29 August 2021 (UTC)

/Frequently denied bots#Bots to welcome users * Pppery * it has begun... 18:56, 29 August 2021 (UTC)

Extra line breaks at the end of sections

I pretty frequently come across instances where there are too many line breaks at the end of a section in an article, creating an extra space. This seems like something a bot could pick up fairly easily, although I'm sure there are some exceptions/edge cases that'd throw it off if we're not careful. Would anyone be interested in putting together a bot to address these? I'm sure there are thousands and thousands of them. {{u|Sdkb}}talk 21:35, 13 July 2021 (UTC)

How serious of an issue is an extra line or two? I feel like "fixing whitespace issues" is on the list of things that, while not technically cosmetic, is still problematic. Primefac (talk) 10:17, 14 July 2021 (UTC)
Could tackle this on a smaller scale using a user script. When you see it, you click a button, and it fixes it. My DraftCleaner script does this. The code for it is really simple, wikicode = wikicode.replace(/\n{3,}/gm, "\n\n");. If interested, give me a ping and I can make a custom user script that does this. –Novem Linguae (talk) 10:30, 14 July 2021 (UTC)

Gap in RPP archive

The new WP:RPP permanent archive has a missing page for requests filed in October 2013: Wikipedia:Requests for page protection/Archive/2013/10. –LaundryPizza03 (d) 18:21, 17 July 2021 (UTC)

Automatic lists of images in rejected or deleted drafts

When a draft is deleted, images uploaded to Commons are not always checked and might be left to languish. Even if the images are acceptable, they may be uncategorized.

To help with this I would request to have a bot automatically create a list of images in rejected (or deleted, if possible) drafts, with the following conditions:

  • Add to list if a rejected or deleted draft contains images that were uploaded by the draft creator.

Optional features:

  • Limit to new users (account less than 100 days old; it may take a while before a draft is checked).
  • If the image is already marked for deletion, include it in the list but mark it in some way.
  • If the uploader has previously deleted images highlight other images as possibly problematic.

I am bringing this here after comments at Wikipedia:Village pump (proposals)#Automatic lists of images in rejected or deleted drafts. MKFI (talk) 19:54, 16 June 2021 (UTC)
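One way the first condition above might be approximated through the API rather than a database query, assuming Category:Declined AfC submissions tracks rejected drafts; it simply lists each draft creator's recent Commons uploads, leaving finer filtering (e.g. only files actually used in the draft) and pagination out:

  # Sketch using the MediaWiki API. The category name and the limits are
  # assumptions; continuation/pagination is ignored.
  import requests

  EN = 'https://en.wikipedia.org/w/api.php'
  COMMONS = 'https://commons.wikimedia.org/w/api.php'

  def api(url, **params):
      params.update(action='query', format='json')
      return requests.get(url, params=params).json()

  # 1. Recently declined drafts (Draft: namespace is 118).
  drafts = api(EN, list='categorymembers',
               cmtitle='Category:Declined AfC submissions',
               cmnamespace=118, cmlimit=50)['query']['categorymembers']

  for draft in drafts:
      # 2. The draft's creator is the user of its oldest revision.
      rev = api(EN, prop='revisions', titles=draft['title'],
                rvlimit=1, rvdir='newer', rvprop='user')
      creator = next(iter(rev['query']['pages'].values()))['revisions'][0]['user']
      # 3. That user's Commons uploads.
      files = api(COMMONS, list='allimages', aisort='timestamp',
                  aiuser=creator, ailimit=20)
      for f in files['query']['allimages']:
          print(draft['title'], '->', f['title'])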

@MKFI: I put together a PAWS notebook to examine the feasibility of implementing this as a tool.
There is a rough specimen page at User:William Avery/Draft image examples, which was created using the following selection criteria, which would be user inputs in an actual tool:
  • Draft was rejected in the last 30 days
  • Draft was created less than 100 days after the user registered. (Does this make more sense than the current age of the account?)
Is this something like the data you are looking for?
Showing which images are marked for deletion and whether the user has had images deleted would be fairly straightforward additions. I haven't done anything concerning deleted drafts, and they present some problems, as the database no longer contains a link between the draft page and the image. One could show the images most recently uploaded by the user who created the deleted draft, and which are not used in draft or main space. William Avery (talk) 09:31, 22 June 2021 (UTC)
I imagine any of the information output at commons:User:OgreBot/Uploads by new users/2020 December 23 15:00 would be useful. William Avery (talk) 09:38, 22 June 2021 (UTC)
@William Avery: very nice, thank you. The 30 days rejected and 100 days after registered are good limits. I believe that will be enough so that it's not needed to check deleted drafts.
Perhaps the images could be a little smaller to condense the page a bit. It would also be nice to mark images if they have an OTRS ticket.
Sadly, OgreBot is no longer active, which is one of the main reasons I am asking for this. MKFI (talk) 13:17, 22 June 2021 (UTC)
@MKFI: - I have deployed a tool at https://heber.toolforge.org. It's a pretty basic list at the moment. I will add the image information shown in the OgreBot reports and OTRS tagging, and improve it as I'm able. I also need to implement a regular scheduled job to keep it up to date. William Avery (talk) 18:54, 8 July 2021 (UTC)
@William Avery: excellent, thank you. If you add a scheduled job, is it possible to archive earlier results? MKFI (talk) 19:08, 9 July 2021 (UTC)
@MKFI: Sorry for the late reply, I've had domestic obligations, and trips to the beach. I am hoping to make a few further improvements next week: more of the information shown on the Magog the Ogre pages and indications of whether drafts and images have been deleted. The scheduled jobs should just be adding records to the tools database, and anything you see today should be there for good. William Avery (talk) 19:32, 17 July 2021 (UTC)
@William Avery: please don't feel pressurized to hurry, the report has been very useful already in spotting image copyright violations. Thank you once again for making it. MKFI (talk) 18:50, 18 July 2021 (UTC)

Request to revert mass message delivery (Guild of Copy Editors)

This message is sent on behalf of the WikiProject Guild of Copy Editors coordinators: Dhtwiki, Miniapolis, Tenryuu, and myself. We screwed up. The Guild of Copy Editors (GOCE) sent out a mass message to our mailing list before it was ready. It has too many errors to fix, so we would like it completely removed. We will edit the message and resend it at a later date.

If there is a friendly bot operator who can revert as many of these mass message additions as possible, we would be grateful. – Jonesey95 (talk) 20:38, 17 September 2021 (UTC)

 Done — JJMC89(T·C) 04:25, 18 September 2021 (UTC)

Replacing copy and pasted link on a ton of articles

When (almost) all of the individual articles listed at List of settlements in Central Province (Sri Lanka) were created in 2011 (around 1,500 articles), they used the same layout, linking to the Sri Lankan Department of Census and Statistics as an external link. The URL for the site has changed, so https://www.statistics.gov.lk/home.asp no longer works and should be replaced with http://www.statistics.gov.lk/ on all articles.

I'm requesting a bot that can replace * [http://www.statistics.gov.lk/home.asp Department of Census and Statistics -Sri Lanka] with *[http://www.statistics.gov.lk Department of Census and Statistics] on each article. —  Melofors  TC  21:42, 11 September 2021 (UTC)
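For reference, the substitution itself is simple; a hedged sketch, assuming the affected pages are collected via the external-link search for the dead URL:

  # Sketch: swap the dead external link for the new one. exturlusage() is
  # roughly the Special:LinkSearch equivalent.
  import pywikibot

  OLD = '* [http://www.statistics.gov.lk/home.asp Department of Census and Statistics -Sri Lanka]'
  NEW = '*[http://www.statistics.gov.lk Department of Census and Statistics]'

  site = pywikibot.Site('en', 'wikipedia')
  for page in site.exturlusage('www.statistics.gov.lk/home.asp', namespaces=[0]):
      if OLD in page.text:
          page.text = page.text.replace(OLD, NEW)
          page.save(summary='Replace dead Department of Census and Statistics link')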

If this bot task is accepted, please make it future proof and create this external link as a template so any need in the future to fix the link will require only 1 page to be edited and not 1500. Gonnym (talk) 21:59, 11 September 2021 (UTC)
@Melofors and Gonnym: BRFA filed. GoingBatty (talk) 03:58, 12 September 2021 (UTC)
@GoingBatty: Thanks! —  Melofors  TC 04:21, 12 September 2021 (UTC)
@Melofors: Doing... GoingBatty (talk) 18:32, 20 September 2021 (UTC)
@Melofors: Y Done! GoingBatty (talk) 02:04, 21 September 2021 (UTC)
@GoingBatty: Thanks! :) —  Melofors  TC 02:07, 21 September 2021 (UTC)

A bot to remove NA classes

In Category:NA-Class France articles (23) there are many pages with hardcoded |class=na in the {{WikiProject France}} template. I would like the "na"/"NA" removed. This will allow redirects, templates, and such to file into the appropriate categories, and any oddballs can be sorted out individually after. --awkwafaba (📥) 01:35, 18 September 2021 (UTC)

 Done with AWB as there were only about 20 instances. firefly ( t · c ) 10:32, 18 September 2021 (UTC)

Archiving of links used as source

Hello! I discovered that the Shoki Wiki is dead. We use quite a few links from that site for the translated text of Samguk sagi, as a source for Korean kings, as in Jinsa of Baekje. The links have the format http://nihonshoki.wikidot.com/ss-23, where "ss" stands for Samguk sagi and 23 is the number of the scroll. Some 60 pages link to various scrolls, and it would be just too much manual work to correct the links with the archive links. Could anyone do this with an archive bot? Thanks a lot. Xia talk to me 18:01, 22 August 2021 (UTC)

Yes sure. The IABot is currently down due to a DB problem, and WaybackMedic is down due to Webcitation.org giving it trouble - but can get to it once things are working again. -- GreenC 18:22, 22 August 2021 (UTC)
Thank you! Xia talk to me 18:55, 22 August 2021 (UTC)
Done. Also blacklisted in IABot, so it will be saved across 120+ language wikis. -- GreenC 20:26, 23 August 2021 (UTC)

Film infobox order change

Template talk:Infobox film#Request for comments has been closed as consensus to reorder the fields like this:

Before:
...
|director=
|producer=
|writer=
|screenplay=
|story=
|based_on=
|starring=
|narrator=
|music=
|cinematography=
|editing=
...

After:
...
|director=
|writer=
|screenplay=
|story=
|based_on=
|producer=
|starring=
|narrator=
|cinematography=
|editing=
|music=
...

(See testcases for rendered examples.)

This means that not only does the template need to be changed, but in any article where a person notable enough to be linked appears in both |producer= and any of |writer/screenplay/story/based_on=, or in both |music= and |cinematography= or |editing=, the linked and unlinked occurrences will need to be swapped. So we need a bot to make changes like these:

Article: Good Night, and Good Luck
Before:
| producer       = [[Grant Heslov]]
| writer         = {{unbulleted list|George Clooney|Grant Heslov}}
After:
| writer         = {{unbulleted list|George Clooney|[[Grant Heslov]]}}
| producer       = Grant Heslov

Article: The Usual Suspects
Before:
| music = [[John Ottman]]
| cinematography = [[Newton Thomas Sigel]]
| editing = John Ottman
After:
| cinematography = [[Newton Thomas Sigel]]
| editing = [[John Ottman]]
| music = John Ottman
CirrusSearch's regex engine doesn't seem to support back references to capturing groups, so I don't know how many articles need fixing. I don't think we need to simply reorder the parameters in articles that don't require moving links, as that would be purely cosmetic, though I could see an argument either way. The link might not always be a plain wikilink but could be {{ill}} etc. Also some articles must have refs or footnotes in relevant arguments, which could be a nuisance in figuring out what needs to be done.

To minimize disruption, I plan not to implement the changes to the template until a bot is ready to take on this task. Nardog (talk) 22:09, 7 July 2021 (UTC)
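To make the shape of the edit concrete, a deliberately oversimplified sketch that only handles a plain [[Name]] in |producer= whose unlinked Name appears in |writer=; piped links, {{ill}}, refs and comments are ignored, and those are exactly the cases that make this hard in practice:

  # Oversimplified sketch of the link swap for one parameter pair.
  # It assumes single-line parameters and plain "[[Name]]" links only.
  import re

  def swap_link(wikitext, name):
      linked, plain = '[[%s]]' % name, name
      producer = re.search(r'\|\s*producer\s*=.*', wikitext)
      writer = re.search(r'\|\s*writer\s*=.*', wikitext)
      if not (producer and writer):
          return wikitext
      p_line, w_line = producer.group(0), writer.group(0)
      if linked in p_line and plain in w_line and linked not in w_line:
          wikitext = wikitext.replace(w_line, w_line.replace(plain, linked, 1))
          wikitext = wikitext.replace(p_line, p_line.replace(linked, plain, 1))
      return wikitext

For example, swap_link(text, 'Grant Heslov') would turn the Good Night, and Good Luck lines above into the desired form.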

First, I would say that reordering the parameters in the infobox, regardless of whether there are any other changes, is pointless (as seen in the testcases, order of input does not affect order of output). Second, I think it would be reasonable to figure out what sort of scope this task would require; if <0.5% of the (currently) 146 264 transclusions need a "wikilink fix" it could be done manually with AWB. If it's more, then we're in bot territory and I should be able to handle it.
In other words, can we either pull 200 pages at random to check for this issue (if 1 or more need fixing, chances are reasonable that we're in "bot territory") or set up a tracking category to compare these parameters? Primefac (talk) 01:38, 11 July 2021 (UTC)
@Primefac: order of input does not affect order of output Of course. I was just using the parameters rather than the labels for the sake of explanation. (But as I said above, an argument could be made that reordering the input even if it doesn't change the output would prevent editors from accidentally linking the wrong occurrences of names and thus producing more instances of what we're trying to fix. I guess the sheer number of the transclusions makes a compelling case against making such cosmetic changes, though.)
I took the first 25,000 transclusions on WhatLinksHere (AWB limit; so it's skewed to older articles), randomized the list, and tested the first 200. 12 of them required moving a link from |producer= to |writer= etc. (none involved |music=). So, though the samples may not be completely representative, we can expect ~6%, or ~8,776, of the articles with the infobox would require moving a link. I can set up a tracking category, or is it better for the bot to go through them one by one? Nardog (talk) 11:19, 11 July 2021 (UTC)
The bot can pull up the list of all 146k transclusions, but if we're only looking at (rounding up) ~10% I'm not really keen on running through 130k pages that don't actually need changes (and I doubt anyone else would be either) so a tracking cat would definitely be helpful. If you need assistance with that implementation let me know. Primefac (talk) 12:11, 11 July 2021 (UTC)
@Primefac: Alright, see Special:Diff/1027952498/1033077990 and Module:Infobox film/track and let me know if you think this is the right approach. Basically, the module enumerates all links in |producer(s)= or |music= and sees if the same phrases appear in |writer= etc. Admittedly I imagine it'll turn up a lot of false positives, but I haven't got a better idea. Nardog (talk) 13:29, 11 July 2021 (UTC)
Looks fine to me. Personally, I think you should update the infobox with the new code before the links are swapped, because there is little point in making one change to add tracking and then later a second change for the more substantive parameter-moving. Primefac (talk) 13:47, 11 July 2021 (UTC)
Done. My idea was to update the template right before the bot operation began, but with the tracking category I agree it makes more sense to do it now. Nardog (talk) 14:37, 11 July 2021 (UTC)
Cool. With as many transclusions as exists, it will probably take the better part of a week to trickle everything down through the caches, but then again it will probably take me that long to find enough time to code it up. Primefac (talk) 14:50, 11 July 2021 (UTC)

@Primefac: So I just went ahead and semi-manually cleaned up the category (I hope it didn't upset your workflow ;)). Excluding existing DUPLINKs from the detection brought down the number from ~2,500 to ~1,500, which made it more manageable.

PrimeBOT's operation last month left hundreds of errors like [[[[Mexico]] City]], West [[German language|German]]y, <ref name="[[Variety (magazine)|Variety]]">, |name=[[Bruce Lee]]: A Warrior's Journey, which I fixed as far as I could find. FWIW I just left comments, refs, and non-personal parameters (|name=, |caption=, |distributor=, |released=, etc.) alone, which was enough to avoid most of these. Nardog (talk) 10:42, 19 August 2021 (UTC)

Apologies for the lack of update, and the massive cleanup needed for this. As mentioned, there were a ton of issues with this task, and I could not find a way to effectively deal with the request while also avoiding the false positives, mistakes, and editing-of-comments issues presented here and at my talk, so I've canned the project. Basically, it's too much of a CONTEXT issue to reasonably have a bot do it, so the changes will need to be done manually. Primefac (talk) 15:03, 22 August 2021 (UTC)

Remove stale "references needed" tag

As a former volunteer / current user, I periodically come across articles with years-old {{refimprove}} templates (e.g. example). I just remove them if it's obvious references have been added since the tagging. Seems like a bot could do that, based on whatever criteria the community agrees on. NE Ent 12:39, 5 July 2021 (UTC)

@NE Ent: What criteria do you think a bot could use to determine if there are now sufficient references to remove {{refimprove}} (which now redirects to {{more citations needed}})? GoingBatty (talk) 14:53, 5 July 2021 (UTC)
Tag more than five years old and at least three references added since tag placed. NE Ent 10:47, 13 July 2021 (UTC)
  • Not a good task for a bot. It'd be very difficult to determine when sufficient citations have been added. Human editors who address issues from tags should be removing them. {{u|Sdkb}}talk 23:14, 12 July 2021 (UTC)
    • That's based on a theoretical Wikipedia where there are sufficient human editors to perform such a task. If a bot can add the tag [15], a bot can remove it, no? NE Ent 10:47, 13 July 2021 (UTC)
      NE Ent It didn't add it, just added the date to it for proper sorting after a human added it. --Trialpears (talk) 11:55, 13 July 2021 (UTC)

If there's no interest in auto removing the tag, perhaps a bot could post a notice on the original posters talk page asking them to review the page? NE Ent 11:40, 14 July 2021 (UTC)

On a slightly related note, do we have a bot that removes [citation needed] tags that are directly placed next to sources? So, [1][citation needed] would never be a thing? I can't say I've seen it much, but that is related to this point. Best Wishes, Lee Vilenski (talkcontribs) 12:19, 14 July 2021 (UTC)
I have seen more in the reverse order of [citation needed][2]. Keith D (talk) 20:09, 14 July 2021 (UTC)
Sure, I have also seen this - but it's clearly not true. Maybe [unreliable source?] or the reference doesn't cover the content, but it's never true that a reference is required when a reference is there. Best Wishes, Lee Vilenski (talkcontribs) 19:30, 23 July 2021 (UTC)

References

  1. ^ Ref 1
  2. ^ Ref 2

Bulk editing of citation links

Hello! I have noticed that most of the articles in https://en.wikipedia.org/wiki/Category:National_Register_of_Historic_Places_in_Virginia link to an old page on the Virginia Department of Historic Resources website (http://www.dhr.virginia.gov/registers/register_counties_cities.htm) that isn't actually informative (and if it was useful at one point, the archivebot doesn't have it).

Ideally, these links would point directly to the listing's page on the Virginia Landmarks Register website. These pages conveniently use the VLR number in the URL. (For example, for the listing https://www.dhr.virginia.gov/historic-registers/014-0041/, "014-0041" is the VLR number.) The vast majority of these pages also have a NRHP Infobox, which usually includes the VLR number as "designated_other1_number =".

Is there a way for a bot/script to crawl instances of the URL: "http://www.dhr.virginia.gov/registers/register_counties_cities.htm" and change it to "https://www.dhr.virginia.gov/historic-registers/{value of "designated_other1_number" in the page's Infobox}/"?

I've been doing this manually and I just realized that A) there are thousands of these and it's going to take me forever, and B) a robot could probably do this.

Thanks! Niftysquirrel (talk) 14:27, 5 August 2021 (UTC)
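Roughly, the per-page logic might look like the sketch below, assuming the VLR number really is present in |designated_other1_number= and leaving pages without it for manual review:

  # Sketch: rewrite the generic DHR link to the listing's own page, using
  # the VLR number taken from the NRHP infobox.
  import re

  OLD_URL = 'http://www.dhr.virginia.gov/registers/register_counties_cities.htm'

  def fix_dhr_link(wikitext):
      m = re.search(r'\|\s*designated_other1_number\s*=\s*([0-9A-Za-z-]+)', wikitext)
      if not m or OLD_URL not in wikitext:
          return wikitext
      new_url = 'https://www.dhr.virginia.gov/historic-registers/%s/' % m.group(1)
      return wikitext.replace(OLD_URL, new_url)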

@Niftysquirrel I can have a go at this. 🐶 EpicPupper (he/him | talk, FAQ, contribs) 02:15, 10 August 2021 (UTC)

Sister project templates and Wikidata synchronization

I propose a bot to automatically or semi-automatically parse the various "Sister project" templates across all of the different WMF projects, and synchronize their parameters with Wikidata.

Examples of these templates:

In its most basic form, I think this bot could just parse Wikitext for templates that imply a direct equivalency between two sister project pages, and then add those to their Wikidata entries, with human approval, if they're not already documented. I think this behaviour should be fairly uncontroversial.

In more advanced forms, I think complete two-way synchronization and some degree of semantic behaviour could potentially be useful too. For example, a template on a Wikisource page could be used to automatically populate its Wikidata entry, and that information could then in turn be used to automatically add Template:Wikisource or similar to Wikipedia pages that don't already have it. And to take things even further, you could also E.G. treat links to Wikisource author pages and Wikisource works differently, possibly to the extent of automatically adding "instance of: writer" to Wikidata entries if they're treated as authors in the Wikitext but not currently subclassed as them on Wikidata.

These more advanced forms may require further discussion and consensus. Depending on accuracy, it might be worth keeping a human in the loop for all wiki content changes.

In technical terms, I suggest that the model for parsing these templates into structured data relationships (and the model for vice versa) be kept separate from the code that then applies edits based on those relationships.

Intralexical (talk) 17:03, 7 August 2021 (UTC)
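On the "keep the parsing model separate from the code that applies edits" point, a hedged sketch of the parsing half only, assuming the mwparserfromhell library and covering just a few of the simplest templates; it emits (template, target) pairs for a human or a later stage to act on:

  # Sketch of the parsing stage only: extract sister-project targets from
  # wikitext and report them. Any Wikidata writes would live in a separate,
  # human-reviewed step. Template coverage here is deliberately minimal.
  import mwparserfromhell

  SISTER_TEMPLATES = {'wikisource', 'wikiquote', 'wiktionary', 'commons category'}

  def sister_links(wikitext, page_title):
      code = mwparserfromhell.parse(wikitext)
      for tpl in code.filter_templates():
          name = str(tpl.name).strip().lower()
          if name in SISTER_TEMPLATES:
              # The first unnamed parameter is usually the target;
              # default to the page title when it is omitted.
              target = str(tpl.get(1).value).strip() if tpl.has(1) else page_title
              yield name, target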

Comment: Maybe it is worth creating a mechanism like interlanguage links? Kanashimi (talk) 06:50, 8 August 2021 (UTC)
@Kanashimi: Hm, as far as I can tell, inter-language links also use Wikidata? That's where it takes me when I click the "Edit links" button, anyway. So the only thing that would be needed to do the same with inter-project links would be to expose the right already-existing Wikidata fields, either through a template or in the sidebar. (Would changing the sidebar require sysop or WMF involvement? That's the main reason I restricted my pitch to stuff regular users can do.)
In any case, the current "Sister Project" templates represent a lot of data that's not quite structured, but should be easy enough to parse into queryable form. And I think some of the sister projects can probably benefit from that information being organized and exposed more consistently. Intralexical (talk) 16:12, 8 August 2021 (UTC)
Maybe we can have the templates use Wikidata, and migrate the targets to Wikidata one by one, so people will know where to modify the link. Kanashimi (talk) 22:24, 8 August 2021 (UTC)
That could work. But it'd only make sense if we replaced all the different templates with Template:Sister project links, and we'd have to figure out what to do for templates with extra semantic implications like Template:Wikisource author. Intralexical (talk) 12:24, 9 August 2021 (UTC)

Categorization/listification for biographies of mononymous people

Background

This is a combination of bot request and request for guidance/assistance on categories/lists/templates.

I created and maintain List of legally mononymous people; see Mononymous person & WP:MONONYM.

I believe it's gotten a bit unwieldy and underinclusive — and that it perpetuates common misconceptions about "legal names". I had initially named it as a contrast to the pre-existing "stage names" and "pseudonyms" lists, which I now believe was a mistake.

I would like to merge it with List of one-word stage names and the mononyms in List of pseudonyms, and include the many other mononymous people (e.g. Burmese, royal, pre-modern, etc.).

I believe it needs to be converted into something that tracks bio pages more flexibly and automatically, based on a category or template in the bio page itself, rather than through a manual list. I don't know how to do that, which is why I'm requesting help.

Subtypes

I would like the result to differentiate (e.g. by subcategorization or list filter/sort) name-related properties, e.g.:

  • primarity
  • full (i.e. "legal name" or exclusive common-law name, e.g. U Thant; n.b. U is a title, not a name)
  • contextual (i.e. nickname, stage name, pseudonym, penname, etc., e.g. Prince, Notch, Lula)
n.b. someone may have multiple primary and/or secondary names, whether by language/culture (e.g. Gandalf aka Tharkûn/Mithrandir/…), special context (e.g. Malcolm X aka el-Hajj Malik el-Shabazz), legal issues (e.g. trans people whose government has not yet recognized their name change), etc.
  • reason for mononymy
  • pre-modern (i.e. before surnames were a thing, e.g. Plato)
  • ecclesiastical or monastic (e.g. Ramdev), which will often be a parallel name (e.g. Mother Teresa aka Mary Teresa Bojaxhiu)
  • former name (e.g. DotComGuy who then changed back to Mitch Maddox)
  • cultural source, where applicable (e.g. Burmese, Indonesian, etc.)
  • personhood
  • real person
  • fictional person
  • collective emulating a single person (e.g. Publius)

Ideally, I would like the resulting page to include thumbnail-bio info, e.g. (if applicable):

  • name
  • title
  • former name
  • birth year
  • death year
  • concise description of the person

That part isn't obligatory; e.g. it may not be feasible if the result is a category page.

Possible methods

I believe this means some combination of

  • adding a mononym hatnote template with appropriate parameters
  • transcluding a mononym template/category via infoboxes or name hatnotes where sensible
  • bot scraping biography articles (once ever per article) to insert a mononym template/category where it seems likely, with some sort of bot-added tentative status indication
  • bot monitoring and scraping articles with the mononym template/category to synchronize the relevant fields (name, year of birth/death, name at birth, concise summary) in case they change (whether by article improvement, WP:BLPPRIVACY, or a [[WP:BLP]] name change postdating the article's creation)
  • bot creating/updating a resulting category page or list

I am not familiar with how WP handles such things, so another solution might well be better. Please take the technical parts above just as suggestions. I don't particularly care if e.g. the result is a category page vs a list.

My request is just that it should be automatic based on something on the article page, be easily filtered by type, and have a nicely usable result.

Usable source data

If the infobox's full-name (title-excluding) field lists only one name, then the subject is probably mononymous.

Pages with a single name in the WP:NCP#"X of Y" format will usually be mononyms, especially if royalty or pre-modern.

Pages with Template:Singaporean name with parameter 3 only (1 & 2 blank) should indicate a mononym.

Most pages with Template:Burmese name are mononyms, but one would need to check by deleting all honorifics (wiktionary list) from the person's name. This should use the Burmese text, since ဦး is a title but ဥ is a name, and both are transliterated as "U"; e.g. in U Kyin U (not a mononym), the first U is a title, and the last is a name.

As I suggested above, a bot adding a mononym marker to these pages should do so in a way that's marked as bot-added / tentative. There will of course be false positives and false negatives as always. This is simply a suggestion for doing a bootstrapping pass and extracting the basic info.

Discussion

I previously asked for help re Burmese names, but got no responses: 1, 2, 3, 4. Volteer1 recently suggested that bots might be an effective approach.

So… any suggestions for how this could best be achieved? Sai ¿? 17:15, 10 August 2021 (UTC)

Selective spelling change in court infobox template

Adding url template to bare infobox websites

When I find bare URLs in infobox "website" fields, I always wrap them with the url template (example diff: Special:Diff/1039831301). I do this for two reasons: (1) the infobox width is often unnaturally expanded by long links because bare URLs don't wrap, and (2) the template strips the display of http://, which looks better. I considered a possible fix in the code of the infoboxes themselves, but I believe that wouldn't work if two websites were added, or if there is already a url template/other template being used. I believe use of the url template in this field is already fairly widespread and common. — Goszei (talk) 01:19, 21 August 2021 (UTC)
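A sketch of the substitution, assuming the field is literally named |website= (other infoboxes use |url= and similar, so a real run would need a list of field names) and skipping values that are anything other than a single bare URL:

  # Sketch: wrap a bare URL in an infobox |website= field with {{URL|...}}.
  # Values already wrapped in a template, or written as a labelled external
  # link, do not match and are left alone.
  import re

  PATTERN = re.compile(r'(\|\s*website\s*=\s*)(https?://[^\s|<>{}\[\]]+)')

  def wrap_website(wikitext):
      return PATTERN.sub(r'\1{{URL|\2}}', wikitext)

  print(wrap_website('| website = https://example.org/'))
  # e.g. "| website = {{URL|https://example.org/}}"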

@Goszei: When a bot operator files a bot request, they have to demonstrate that there is consensus for the bot task. Could you please point to a conversation that shows the consensus (besides Template:URL)? Thanks! GoingBatty (talk) 05:19, 21 August 2021 (UTC)
We could try building a consensus here, I suppose. I have placed a pointer to this thread from Template talk:URL and Wikipedia talk:WikiProject Microformats. — Goszei (talk) 06:22, 21 August 2021 (UTC)
This seems smart to me. jp×g 12:23, 21 August 2021 (UTC)

AFAIK the {{url}} template is not supported by the archive bots (please correct me if wrong). Thus once the link dies - all links die - it will become link rot requiring a manual fix. Which is fine; I just want to note the risks. There is no guarantee bots will ever support these templates; there are thousands of specialized templates and it is not feasible to program for each. The more we use them, the more link rot seeps in over time. I suppose if there are so many {{url}} the sheer magnitude could put pressure on the bot authors to do something, but it would also require modifications to the Lua code probably, and consensus on how it works. As it is, bare URLs are supported by most bots and tools. -- GreenC 15:19, 21 August 2021 (UTC)

Comment: Template:URL states this template is used "on approximately 328,000 pages, or roughly 1% of all pages". GoingBatty (talk) 16:02, 21 August 2021 (UTC)

Looking for a bot (crosspost)

There is a discussion over at WT:WPT looking for a bot to add/split some WPBS banner templates. I feel like such a bot already exists, so before I put in my own BRFA I figured I'd post here and hopefully find someone who remembers which bot that is (because at the moment I am coming up completely blank). Please reply there to avoid too much decentralisation. Thanks! Primefac (talk) 14:58, 22 August 2021 (UTC)

@Primefac: User:Yobot#WikiProject tagging used to add WikiProject templates, but hasn't edited since January. Maybe that's what you were remembering? GoingBatty (talk) 15:52, 22 August 2021 (UTC)

Redlinked biographies that are potentially notable?

Hi folks, not sure if this is a practical request, thought I'd ask anyway. The Science Fiction Encyclopedia has approximately 12,000 entries on people, most of whom are likely notable. As I discovered when writing Sarah LeFanu, at least some of them do not have Wikipedia articles. Is it practical for a bot to trawl through this list, check whether Wikipedia has an article on each entry, and save the entry to a list if it doesn't? Vanamonde (Talk) 07:33, 23 August 2021 (UTC)

I usually deal with this sort of thing in one of two ways:
  1. Format a list (Aaargh, Calvin → *[[Calvin Aaargh]]) and view it in a sandbox; grab the source HTML and filter on class="new" or "(page does not exist)" (either works)
  2. Paste the list into PetScan (tab "Other sources", section "Manual list"), set Wiki to enwiki and list pages which do exist, then find the difference.
I expect you're aware of the usual traps, such as people listed under a different name (Dr Acula is Forrest J Ackerman) and non-writers with similar names (John Peel isn't John Peel (writer)). Certes (talk) 12:31, 23 August 2021 (UTC)
Another possibility would be to use Wikidata. Encyclopedia of Science Fiction ID (P5357) links Wikidata items to their associated entries; you can use this query to find items linked to an ID with no page on enwiki. Wikidata doesn't yet have complete coverage of the encyclopedia; that is tracked by mixnmatch:1330. This should help deal with the issue of mismatched names. Vahurzpu (talk) 13:00, 23 August 2021 (UTC)
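For reference, the query pattern boils down to something like this sketch against the Wikidata Query Service (labels simplified, pagination ignored):

  # Sketch: items with an Encyclopedia of Science Fiction ID (P5357) but no
  # English Wikipedia article, fetched from the Wikidata Query Service.
  import requests

  QUERY = """
  SELECT ?item ?itemLabel WHERE {
    ?item wdt:P5357 ?sfeId .
    FILTER NOT EXISTS {
      ?article schema:about ?item ;
               schema:isPartOf <https://en.wikipedia.org/> .
    }
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
  }
  LIMIT 100
  """

  r = requests.get('https://query.wikidata.org/sparql',
                   params={'query': QUERY, 'format': 'json'},
                   headers={'User-Agent': 'sfe-redlink-report-sketch/0.1'})
  for row in r.json()['results']['bindings']:
      print(row['itemLabel']['value'], row['item']['value'])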
That, especially the unmatched list, sounds much more reliable than my suggestions. Certes (talk) 13:11, 23 August 2021 (UTC)
Thanks, both; all those methods seem quite straightforward, and I did not know of any of them. Vanamonde (Talk) 13:24, 23 August 2021 (UTC)

Many projects having bad link to toolserver

Hey all, hoping a bot (or an AWB master) could be deployed to help in the project space. Many, many WikiProject pages have code at the bottom that begins with:

[[tools:~dispenser/cgi-bin/transcluded_changes.py/ . . .

and of course that link is now bad (gives 404 error on toolserver); but since the pages use tools:, this is not really a URL change request. An example use is at WP:WikiProject Geelong#External watchlist.

All instances in the Wikipedia namespace that begin with the above string can be deleted, together with all other characters up until the closing ]]; there is no retcon that will fix it. Thanks in advance, UnitedStatesian (talk) 18:35, 23 August 2021 (UTC)
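The whole chunk can be dropped with a single regular expression; a sketch of the substitution, assuming the dead links never contain a nested "]]":

  # Sketch: delete the dead toolserver watchlist links, from the opening
  # "[[tools:~dispenser/..." through the first closing "]]".
  import re

  DEAD_LINK = re.compile(
      r'\[\[tools:~dispenser/cgi-bin/transcluded_changes\.py/.*?\]\]',
      re.DOTALL)

  def strip_dead_links(wikitext):
      return DEAD_LINK.sub('', wikitext)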

499 instances. Might be easier to get someone to do an AWB run. Primefac (talk) 19:59, 23 August 2021 (UTC)
Cool, I'll take it over to WP:AWBREQ. Thanks, you can close this here. UnitedStatesian (talk) 00:32, 24 August 2021 (UTC)

AINews.com ownership change, now ainews.xxx

Referencing this discussion: Wikipedia:Help desk#AiNews.com - Wrongly Indexed

It seems that ainews.com was formerly "Adult Industry News", a news site for the porn industry, which has a lot of citations. The domain now belongs to "Artificial Intelligence News". Needless to say, the new owner doesn't want its domain linked in porn-related articles.

Adult Industry News is now ainews.xxx.

Experimenting with some of the links from https://en.wikipedia.org/w/index.php?target=*.ainews.com&title=Special%3ALinkSearch it seems that one cannot simply substitute .com with .xxx. The pages must be found on archive.org.

@GreenC: I am not sure if InternetArchiveBot would handle this unless someone went through all ~130 links and tagged them with {{dead link}}. I don't know of another bot that comes close. ~Anachronist (talk) 15:29, 27 August 2021 (UTC)

Sounds like a WP:URLREQ task. Primefac (talk) 15:32, 27 August 2021 (UTC)
Thanks, I'll put this request there. ~Anachronist (talk) 15:35, 27 August 2021 (UTC)

Automatically create redirects ending with "Archipelago" for entries ending with "Islands"

I found that some of the "... Islands" articles do not have corresponding "... Archipelago" redirect variants. A bot could automatically identify and create them.--q28 (talk) 01:35, 27 July 2021 (UTC)

According to my incomplete statistics, at least 1,000 such redirects (the exact number depends on how they are counted) have not been created, so this is batch work that should be completed by a bot.--q28 (talk) 01:36, 27 July 2021 (UTC)
It is worth noting that this is not limited to geography; it is also applicable to foreign cartoons, where, depending on the translator's preference, some use "archipelago" and some use "islands".--q28 (talk) 01:41, 27 July 2021 (UTC)
Is there really a one-to-one correspondence? For example, the Duke of York Archipelago is thousands of miles from the Duke of York Islands, and British Archipelago correctly redirects to British Isles rather than British Islands. Certes (talk) 02:15, 27 July 2021 (UTC)
Impossible. It seems likely that this will need to be done manually; I withdraw the request.--Here's 28 and did I make a mess? 11:03, 14 September 2021 (UTC)
@Q28: If you're filling in manually, other mismatches to beware of include Low Islands (Canada) vs Low Archipelago (Pacific), Marshall Islands (Pacific) vs Marshall Archipelago (Antarctic), and The Islands (British Columbia) vs The Archipelago (Ontario). There are also X Islands forming part of a larger X Archipelago, such as Riau Islands and Solomon Islands. Certes (talk) 13:45, 14 September 2021 (UTC)

reflist talk

Would it be possible to have a bot add {{reflist talk}} to talk page threads which have refs?(I am not watching this page, so please ping me if you want my attention.) SSSB (talk) 08:39, 3 September 2021 (UTC)

@SSSB: You may be interested in User:GreenC bot/Job 8. Certes (talk) 10:37, 3 September 2021 (UTC)
@Certes: - didn't know it was already done. Thanks, SSSB (talk) 10:44, 3 September 2021 (UTC)
@SSSB: It ran one time 2.5 years ago. It checks every single talk page (6+ million). Maybe time to run again. -- GreenC 05:22, 7 September 2021 (UTC)
@GreenC: it would probably be worth it to run it somewhat regularly (monthly?), especially if a way can be found to streamline the pages it has to check (only talk pages that had been edited since the last run?) Wouldn't do anything for the first run, but thereafter... SSSB (talk) 07:26, 7 September 2021 (UTC)
The first (next) run could check only talk pages edited in the last 2.5 years. There are a lot of inactive ones to skip. Certes (talk) 10:12, 7 September 2021 (UTC)
@SSSB and Certes: That is a good idea as it would avoid downloading the page and issuing an API:Revisions (rvsections) call. Unfortunately I'm busy right now, and rather than investing more programming time or delaying the run, I went ahead and started it up. It's a single thread running at about 2.4 pages/second, which is 207,360 pages/day, so it should take about 30 days for 6.4 million. Watchable at Special:Contributions/GreenC_bot. I'll get the time check done before the next run, and depending how this goes consider automating it (more steps to fully automate). -- GreenC 03:52, 13 September 2021 (UTC)
Forgot I already had a function to check last edit time in another program, so I dropped it in. It seems to be running at 6.66 pages a second, which should finish in about 11 days. Significant improvement. Most pages are skipped even after 2.5 years. -- GreenC 16:23, 13 September 2021 (UTC)
@SSSB and Certes: The bot completed its run; it took about 14 days and edited 5,086 pages. I am hand-editing about 150 pages due to formatting oddness. It's been 30 months since the last run / 5,086 edits = about 170 per month on average. Given the resources required to run the bot (millions of API calls), waiting 6 months (a backlog of about 1,020 between runs) seems like a decent trade-off. -- GreenC 15:12, 27 September 2021 (UTC)

Post-Move Bot

A helpful message is shown after moving a page:

WP:POSTMOVE

It seems like a lot of this could be automated fairly straightforwardly.

It was pointed out on the Teahouse that Wikipedia:Double_redirects do get fixed by a bot, but the fair use rationales, navboxes, etc. also seem unnecessary to fix manually.

Intralexical (talk) 13:03, 9 August 2021 (UTC)

Although some of these tasks definitely look doable, there might be some WP:CONTEXTBOT issues. 🐶 EpicPupper (he/him | talk, FAQ, contribs) 02:09, 10 August 2021 (UTC)
@Intralexical, @EpicPupper: I think I could put together a bot to update fair use rationales. Tol (talk | contribs) @ 15:21, 4 October 2021 (UTC)

Contribution Project

Hello, I am currently running a project which counts users' contributions on a daily basis. The project works on several wikis (ckbwiki, SimpleWiki, ksWiki, ArWiki, jawiki). It works by checking users' contributions and comparing them to their previous contributions, ranking the top users accordingly; if a user is less active than before, the comparison turns red, otherwise green. It also shows user rights along with their contributions.

I need someone's help to make a bot specially for the project, because I am doing it manually by myself and it takes so much time and energy. Can anyone help me make a script for it? 😊 I really appreciate it. —— 🌸 Sakura emad 💖 (talk) 19:48, 1 October 2021 (UTC)
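As a starting point, a hedged sketch of the data-gathering half: it pulls each user's current edit count and rights from the API, compares against the previous day's saved figures, and leaves the ranking table and red/green colouring to the wiki-specific part of the script. The wiki URL and user list are placeholders:

  # Sketch: fetch edit counts and user groups for a list of users and
  # compare them with yesterday's saved snapshot.
  import json
  import requests

  API = 'https://ckb.wikipedia.org/w/api.php'  # change per wiki
  USERS = ['ExampleUser1', 'ExampleUser2']     # placeholder list

  r = requests.get(API, params={
      'action': 'query', 'format': 'json', 'list': 'users',
      'ususers': '|'.join(USERS), 'usprop': 'editcount|groups'})
  today = {u['name']: u.get('editcount', 0) for u in r.json()['query']['users']}

  try:
      with open('yesterday.json') as f:
          yesterday = json.load(f)
  except FileNotFoundError:
      yesterday = {}

  for name, count in sorted(today.items(), key=lambda kv: -kv[1]):
      delta = count - yesterday.get(name, count)
      print(name, count, delta)  # positive = more active than yesterday, negative = less

  with open('yesterday.json', 'w') as f:
      json.dump(today, f)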

We have Wikipedia:List of Wikipedians by number of edits which is updated by User:BernsteinBot; is this not what you're looking for? Primefac (talk) 17:57, 2 October 2021 (UTC)
@Primefac That's right, but I still want to have my own script because I need to run it on other projects too. It would really help me a lot if someone lent me a hand with the coding; besides, my project is a little bit different from the one above. —— 🌸 Sakura emad 💖 (talk) 11:53, 3 October 2021 (UTC)
See if MZMcBride will share the bot's code? Primefac (talk) 12:01, 3 October 2021 (UTC)
@Primefac Thank you a lot. —— 🌸 Sakura emad 💖 (talk) 19:20, 3 October 2021 (UTC)

Hi Primefac and Sakura emad. Wikipedia:List of Wikipedians by number of edits/Configuration is the bot's source code for this report. --MZMcBride (talk) 22:36, 13 October 2021 (UTC)