#PeerReview

🚨 Event for the Stanford open source community!

Your research code matters. Let’s treat it that way.

On Aug 7, pyOpenSci + OpenSource@Stanford will share how open peer review supports better tools, cleaner code, and academic credit.

🗓 Thursday, Aug 7
⏰ 11AM MT / 10AM PT
🔗 www.pyopensci.org/events/pyopensci-stanford-ospo-peer-review.html

I'm peer reviewing a paper. I bought a printer, so I decided to print the paper out and see how the work would go.

The number of annotations and reflections increased significantly. Unfortunately, I'm a harsher reviewer on paper than on screen.

#vaccines #PeerReview

This is anti-vax propaganda. That doesn't necessarily mean it's not true; the best propaganda is always true. It does come across as cherry-picked. After all, it's the Daily Mail, not the most reliable of sources. Also, the use of the phrase 'linked to' always triggers my Spidey sense.

"Scientists discover Pfizer COVID jab linked to major eye damage

(. . .)

The new study specifically examined how the vaccine affected patients' corneas, the clear front part of the eye that allows light to enter.

In 64 people, scientists in Turkey measured changes in the cornea's inner layer, called the endothelium, before taking the first Pfizer dose and two months after receiving the second."

dailymail.co.uk/sciencetech/ar

So I looked up the study it's based on:

"Original Article
Evaluation of the Effects of mRNA-COVID 19 Vaccines on Corneal Endothelium"

tandfonline.com/doi/full/10.10

The DM article is basically a cut-and-paste from the study itself. And yeah, it does purport to be peer reviewed. But the whole peer review process, and especially the publishing thereof, has been scammed, faked, and lied about so often that I didn't automatically trust the publisher, so I looked *them* up, too. Here's what I found:

"Taylor & Francis Online – Bias and Credibility

(. . .)

These sources consist of legitimate science or are evidence-based through credible scientific sourcing. Legitimate science follows the scientific method, is unbiased, and does not use emotional words. These sources also respect the consensus of experts in the given scientific field and strive to publish peer-reviewed science. Some sources in this category may have a slight political bias but adhere to scientific principles."

mediabiasfactcheck.com/taylor-

Sounds good, but sounding good is proof of nothing, so I double-checked the endorser:

"Is mediabiasfactcheck.com Legit?

With its medium trust score on our chart, we determined it has a low risk. We determined this score by aggregating 53 powerful factors to expose high-risk activity and see if mediabiasfactcheck.com is safe. Our in-depth review examines the website and its News & Blogs industry.

(. . .)

The Scam Detector’s algorithm gives this business the following rank:

70.4/100"

scam-detector.com/validator/me

So far, so good (sort of), so I checked out the reviewer on Trustpilot. Here's what they said:

"scam-detector.com
Reviews 529

Companies on Trustpilot can’t offer incentives or pay to hide any reviews.

(. . .)

Most reviewers were somewhat happy with their experience overall. Customers appreciate the service for helping them identify potential online scams, allowing them to avoid risky transactions. Many users have used the tool to check their own websites, with some initially receiving low scores. People value the platform as a first step to check websites, especially with the increasing sophistication of online scams.

However, some reviewers express concerns about the accuracy and fairness of the site's ratings. Several users report that their legitimate businesses were wrongly flagged as scams, leading to reputational damage. Some consumers mention that the website's assessment of their business is inaccurate, unsubstantiated, and false. A few reviewers also accuse the site of using questionable tactics, such as assigning low scores to pressure businesses into paying for a better rating."

trustpilot.com/review/www.scam

Conclusions:

As always, I suggest that you do your own research and reach your own conclusions. I got you started. You can go on from there.

Personally, I feel it's probably true, but cherry-picked. But what do I know? I barely squirmed out of reform school. My partner has a Ph.D. in molecular biology. She said I'm probably right this time. I always defer to expert opinions, especially hers.

We both have taken the vaccines in question. Neither of us had any deleterious results that we know of. But I always prefer to err on the side of caution. Besides, this study agrees with the government, as well as many prominent anti-vaxxers. That alone makes it suspicious. So I scheduled an eye exam. I'll let you know how it turns out.

In the meantime, I heartily recommend that you get the jab. Everybody should get the jab. If this particular COVID vaccine weirds you out, not to worry. There are other brands available. Just saying.

Daily Mail · Scientists discover Pfizer COVID jab linked to major eye damage · By Chris Melore

Wow. AAAI 2026 is running a pilot “AI-Assisted Peer-Review Process”. Besides regular reviews, each paper will receive one extra LLM-generated review. No scores, but visible to reviewers and authors.

I take it that as an AI conference, AAAI was eager to try this out. Not looking forward to writing the author response to an LLM’s opinion on my paper, though. Also not sure how this will help reviewers.

aaai.org/conference/aaai/aaai-

AAAI · Main Technical Track: Call for Papers - AAAI
#aaai #aaai2026 #llm

#AI #PeerReview

"Researchers have been sneaking secret messages into their papers in an effort to trick artificial intelligence (AI) tools into giving them a positive peer-review report.

The Tokyo-based news magazine Nikkei Asia reported last week on the practice, which had previously been discussed on social media. Nature has independently found 18 preprint studies containing such hidden messages, which are usually included as white text and sometimes in an extremely small font that would be invisible to a human but could be picked up as an instruction to an AI reviewer.

Authors of the studies containing such messages give affiliations at 44 institutions in 11 countries, across North America, Europe, Asia and Oceania. All the examples found so far are in fields related to computer science.
Although many publishers ban the use of AI in peer review, there is evidence that some researchers do use large language models (LLMs) to evaluate manuscripts or help draft review reports. This creates a vulnerability that others now seem to be trying to exploit, says James Heathers, a forensic metascientist at Linnaeus University in Växjö, Sweden. People who insert such hidden prompts into papers could be 'trying to kind of weaponize the dishonesty of other people to get an easier ride', he says.

The practice is a form of ‘prompt injection’, in which text is specifically tailored to manipulate LLMs. Gitanjali Yadav, a structural biologist at the Indian National Institute of Plant Genome Research in New Delhi and a member of the AI working group at the international Coalition for Advancing Research Assessment, thinks it should be seen as a form of academic misconduct. 'One could imagine this scaling quickly,' she adds."

archive.is/UqGht
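
As a rough illustration of how one might screen a submission for this kind of hidden text, here is a minimal sketch. It assumes the PyMuPDF library for PDF parsing; the font-size threshold and keyword list are illustrative assumptions on my part, not anything specified in the Nature piece.

```python
# Minimal sketch of a screen for hidden instructions in a PDF, based on the two
# hiding tricks described above (white text, extremely small fonts) plus a naive
# keyword check. Assumes the PyMuPDF package ("fitz"); the size threshold and
# phrase list below are illustrative guesses, not an established standard.
import sys

import fitz  # PyMuPDF: pip install pymupdf

SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "give a positive review",
    "recommend acceptance",
)
TINY_FONT_PT = 2.0   # text this small is effectively invisible to a human reader
WHITE = 0xFFFFFF     # span colour is reported as a packed sRGB integer


def find_hidden_spans(pdf_path):
    """Return (page, font size, snippet) for spans that look hidden or instruction-like."""
    doc = fitz.open(pdf_path)
    hits = []
    for page_number, page in enumerate(doc, start=1):
        for block in page.get_text("dict")["blocks"]:
            for line in block.get("lines", []):   # image blocks carry no "lines"
                for span in line["spans"]:
                    text = span["text"].strip()
                    if not text:
                        continue
                    hidden = span["color"] == WHITE or span["size"] <= TINY_FONT_PT
                    instructs = any(p in text.lower() for p in SUSPICIOUS_PHRASES)
                    if hidden or instructs:
                        hits.append((page_number, round(span["size"], 1), text[:80]))
    return hits


if __name__ == "__main__":
    for page, size, snippet in find_hidden_spans(sys.argv[1]):
        print(f"page {page} (font {size}pt): {snippet}")
```

Run it as `python find_hidden_spans.py paper.pdf`. Treat any hits only as a starting point: there are other ways to hide text (zero-opacity rendering, off-page placement) that a simple colour-and-size check like this won't catch.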

I will never understand why authors posting a manuscript on a preprint server spontaneously decide that it's better for whoever reads it to have not only all the figures at the end, but also separated from their legends.

WHY 😭

(Same question for papers sent out for review, btw. Most journals allow the format of your choice for the first submission. WHY not make it a nice, easily readable one??)

🚨 𝗟𝗮𝘇𝘆 𝗧𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝘀 𝘁𝗵𝗲 𝗰𝗼𝗿𝗲 𝗿𝗲𝗮𝘀𝗼𝗻 𝗳𝗼𝗿 𝗯𝗮𝗱 𝗿𝗲𝘃𝗶𝗲𝘄 𝗾𝘂𝗮𝗹𝗶𝘁𝘆! 💤
At #ACL 2023, 24.3% of author-reported issues cited heuristic-based reviews 🤯. To solve this issue, we release #LazyReview, a new dataset to expose and understand this trend along with the models! 🔍📉

🖥️ Project: ukplab.github.io/acl2025-lazy-
📄 Paper: arxiv.org/pdf/2504.11042
💻 Code: github.com/UKPLab/acl2025-lazy

(1/🧵)

When defending science in public, we often talk about 'peer-reviewed science'. But could this framing contribute to undermining trust in science and hold us back from improving the scientific process? How about we instead talk about the work that has received the most thorough and transparent scrutiny?

Peer review goes a step towards this by having a couple of people scrutinise the work, but there are limits on how thorough it can be, and in most journals it's not transparent. Switching the framing to transparent scrutiny allows us to experiment with other models with a path to improvement.

For example, making review open to all and ongoing, with all reviews published, improves this. When authors make their raw data and code open, that improves it too.

It also gives us a way to criticise problematic organisations that formally do peer review but add little value (e.g. predatory journals). If their reviews are not open, or are observably of poor quality, then the work has received less thorough and transparent scrutiny.

So with this framing, the existence of 'peer-reviewed' but clearly poor-quality work doesn't undermine trust in science as a whole, because we don't pin our meaning and value on an exploitable binary measure of 'peer reviewed'.

It also offers a hopeful way forward because it shows us how we can improve, and every step towards this becomes meaningful. If all we have is binary 'peer reviewed' or not, why spend more effort doing it better?

In summary, I think this new framing would be better for science, both in terms of the public perception of it, and for us as scientists.

Very nice email from #UKRI FLF #PeerReview college:

Review quality is higher from PRC members and our "input is a vital element of the peer review process, providing essential assessment of the applications we receive each year to UKRI Talent schemes"

#Appreciated 🙇

More on FLF PRC:
ukri.org/apply-for-funding/gui

www.ukri.org · Expert review · Information for reviewers: The Future Leaders Fellowships (FLF) team delivers peer review in collaboration with all research councils and Innovate UK.