artificial intelligence

The Onslaught of “Tortured Phrases”

So I’ve had this funny feeling for quite a while now, where I learn about something new, like a word or a concept, and then I start seeing it everywhere. When this started to happen, I was shook. Although I understand the concept of confirmation or cognitive bias, these words or things I had just gotten to know seemed quite uncommon, so how was I suddenly seeing them everywhere? The odds just seemed astronomical, if I may say so. For example, I learned the word “onslaught” only a couple of weeks ago, and now I can’t watch a YouTube video, read an article or listen to a podcast without someone using it. So my initial reaction was to ask: where was this word all this time? I mean, I’ve been learning this language since I was 4 years old. Is it a new word, or is this just cognitive bias again? I used the Google Ngram Viewer to check how often the word was used between 1995 and now, and its usage did increase significantly up to 2015. Still, I finally got the courage to put this weird feeling into words and googled whether it’s a known phenomenon, and lo and behold, it is! It is called the frequency illusion (also known as the Baader–Meinhof phenomenon) and is defined almost exactly like the first sentence of this post. So from now on, whenever a similar incident happens, I’ll smile, knowing it’s just my brain making connections that it would have missed before, had it not known the word.

Speaking of new words: recently, there has been a daily onslaught of news about retractions of academic papers in different fields. Some are because of plagiarism and some because of faked data. From my perspective, there are two types of plagiarism: scientific-content plagiarism and textual plagiarism. But before I give my (what some might call ‘radical’) opinion on both types, I’d like to mention the new wave of textual-plagiarism detectors. About a year or so ago, I learned about the concept of “tortured phrases”: established technical terms that only make sense in their scientific context when left unparaphrased, so that swapping in synonyms produces nonsense like “hotness move” instead of “heat transfer”, or “flow motion” instead of “wave propagation”. Even before ChatGPT and other AI models became popular, common paraphrasing tools were already used everywhere. People would use them either because they wanted to explain something they lacked the English to express in their own words, or because the articles in which such phrases appeared were written just to fill space, with large chunks of text intentionally paraphrased to evade plagiarism detectors. The research-integrity investigators in the article attached below developed an algorithm to find these phrases. While the article discusses results showing that such phrases mostly correlate with bad science and sometimes with paper mills, I will focus on the cases where authors only used computer-generated paraphrasing, without any intention of scientific misconduct.
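The article does not spell out the detector’s internals, but the basic idea can be sketched in a few lines: keep a dictionary mapping known tortured phrases to the established terms they were likely paraphrased from, and scan a manuscript for matches. The phrase list and matching logic below are my own illustrative assumptions, not the investigators’ actual tool.

```python
import re

# Illustrative (hypothetical) dictionary: tortured phrase -> expected established term.
# Real detectors rely on curated lists built by research-integrity investigators.
TORTURED_PHRASES = {
    "hotness move": "heat transfer",
    "flow motion": "wave propagation",
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str, int]]:
    """Return (tortured phrase, expected term, character offset) for every match."""
    hits = []
    lowered = text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        for match in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected, match.start()))
    return hits

if __name__ == "__main__":
    sample = "We study hotness move in thin films using profound learning."
    for phrase, expected, pos in find_tortured_phrases(sample):
        print(f"Suspicious phrase '{phrase}' at offset {pos}; expected term: '{expected}'")
```

Even a toy matcher like this flags the two planted phrases in the sample sentence; the hard part in practice is building and maintaining the phrase list, not the scanning.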

You see, the issue with academic publishing of scientific articles at the moment is that it happens mostly in English, a language that the majority of scientists around the world can master only as a second or third language. It is no wonder that most renowned scientists come from English-speaking countries; yes, I know these countries mostly have better education systems and invest a lot in research, but there is still a correlation. Research consistently shows that when students do science in their own native tongue, they flourish. When the opposite happens, as with most academics, they struggle substantially more, even if they have mastered the language. This is why, if you ever visit PubPeer, you’ll find that most of the papers flagged there for misconduct like the kind described above are by authors from Asia, South America and Africa. This urge to write papers just like native-speaking academics pushes these scientists to cut corners with such tools, since they lack the language to put what they understand into their own words. Again, while I recognize that the countries where most offences take place have lower funding and fewer academic opportunities, which makes them vulnerable to malpractices far more grievous than software-based paraphrasing, here I am only trying to make an argument for those researchers who have no means other than such tools.

Okay, so what’s my opinion on the two types of plagiarism? Well, I don’t think they are even remotely equal or comparable. People who steal someone’s results, methodological approach, etc. and claim them as their own to publish are committing intentional theft, and there is no excuse for it. In my opinion, this is not the case for textual plagiarism (e.g. failing to paraphrase properly), where the intention is not to steal but to do what every scientist does: paraphrase. Of course I’m not advocating for such practices, and I think all scientists should strive to master the language as well as they can, but it is also very unfair to assume or require that all non-native speakers learn the language to the same level as native ones. As a possible solution, I had this idea of researchers never having to paraphrase at all: they would simply quote anything they want to reuse, with quotation marks and citations of course. One colleague of mine complained that this would break the flow of the text and make such a paper or book frustrating to read. So maybe in the future everyone will write in their own mother tongue, and with the rise of machine-learning translators you’ll be able to read anyone’s publication in your own language, like a Wikipedia page, and science will have a universal language. I really hope so; otherwise it will always be a disadvantage to be a non-native-speaking scientist.
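We are arguably not that far off. As a rough illustration (my own sketch, not a publishing workflow), an open-source translation model such as the Helsinki-NLP opus-mt family, used through the Hugging Face transformers library, can already render a short German abstract into English in a few lines; whether the quality holds up for dense technical prose is, of course, another question. The example abstract here is made up.

```python
# Minimal sketch of machine-translating an abstract, assuming the Hugging Face
# `transformers` library (plus sentencepiece) and the open-source
# Helsinki-NLP/opus-mt-de-en German-to-English model.
from transformers import pipeline

# Load a German-to-English translation pipeline (downloads the model on first use).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

# Made-up abstract: "This work investigates heat transfer in thin films
# using numerical simulations."
abstract_de = (
    "Diese Arbeit untersucht die Wärmeübertragung in dünnen Schichten "
    "mithilfe numerischer Simulationen."
)

result = translator(abstract_de, max_length=200)
print(result[0]["translation_text"])
```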

artificial intelligence

The Potential Loss of Human Expression in Academic Writing with the Use of AI Tools

As academic writing continues to evolve, new technologies are emerging that can help researchers and students alike to produce high-quality papers. One of the most promising of these technologies is ChatGPT, a powerful language model developed by OpenAI.

One of the key ways that ChatGPT can help with academic writing is by providing researchers and students with the ability to quickly and easily generate high-quality research papers. By providing the model with key information about the topic being studied, ChatGPT can generate a well-structured paper that includes all of the key elements of a high-quality research paper, such as an introduction, a literature review, methodology, results, and discussion.
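As a concrete illustration of that workflow (a minimal sketch using the openai Python client; the prompt, the model name and the key facts are my own assumptions, not a recommended practice), one might ask the model to draft a single section from a handful of key points rather than a whole paper:

```python
# Sketch: asking a chat model to draft one section of a paper from key facts.
# Assumes the `openai` Python package (v1 client) and an OPENAI_API_KEY
# set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

key_facts = [
    "Topic: heat transfer in thin metal films",
    "Method: finite-element simulation",
    "Main result: conductivity drops sharply below 50 nm thickness",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
    messages=[
        {"role": "system", "content": "You draft concise academic prose."},
        {
            "role": "user",
            "content": "Write a short introduction paragraph for a research paper "
                       "based on these facts:\n" + "\n".join(key_facts),
        },
    ],
)

print(response.choices[0].message.content)
```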

Another way that ChatGPT can help with academic writing is by providing researchers and students with a tool that can help them to avoid common errors and biases that can often arise when humans are involved in the writing process. For example, ChatGPT can help to avoid issues such as confirmation bias, where researchers only consider evidence that supports their hypothesis, or the use of vague or imprecise language that can make it difficult for readers to understand the research being presented.

The use of AI in art has been a hot topic of discussion, with some experts expressing concern about the potential loss of human creativity and expression. While ChatGPT may be efficient and cost-effective, the papers it produces lack the unique personal perspective, experiences and creativity that come with human authorship. This can make a paper less engaging and interesting to read, which could potentially affect its impact and reception. In the humanities, for instance, this could result in a lack of connection with the content, as readers may find it difficult to relate to or empathize with a machine-generated text.

Despite these concerns, there are also many benefits to using AI tools like ChatGPT for academic writing. One of the key advantages is the speed and efficiency of the process. With AI-generated papers, researchers can quickly produce high-quality papers without spending hours or days writing and editing. This can be particularly useful for researchers who are working on tight deadlines or who have a large volume of papers to produce.

Another issue with using ChatGPT for academic writing is the lack of plagiarism detection. Because it is generating text based on a given prompt, it is important for authors to properly cite their sources and avoid plagiarism. However, ChatGPT does not have any built-in plagiarism detection capabilities, leaving it up to the authors to ensure the originality of their work. This could potentially lead to accidental or intentional plagiarism, which could have serious consequences for both the authors and the credibility of the academic community.
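Since ChatGPT offers no such check itself, authors would have to run one separately. A crude way to picture what such a check does (my own sketch, not a real detector) is to measure word n-gram overlap between a generated passage and a known source; commercial services compare against large indexed corpora rather than a single document.

```python
# Crude sketch of a plagiarism check: Jaccard overlap of word 5-grams between
# a generated passage and one known source text. Real detectors compare
# against large indexed corpora, not a single document.
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated: str, source: str, n: int = 5) -> float:
    a, b = ngrams(generated, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    generated = "Heat transfer in thin films depends strongly on film thickness."
    source = ("It is known that heat transfer in thin films depends strongly "
              "on film thickness.")
    print(f"5-gram Jaccard overlap: {overlap(generated, source):.2f}")
```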

The idea for this article was inspired by a tweet from Prof. Chanda Prescod-Weinstein, in which she discussed the influence of AI on art and the potential loss of human expression. As AI continues to advance, it is important for us to consider its potential impact on the fields of academia and art, to weigh the benefits of using AI tools against the potential drawbacks, and to ensure that the use of AI in academic writing does not detract from the unique perspectives and experiences of the authors.

Disclaimer: This article and its title were generated by ChatGPT, an AI tool, and were only edited by the owner of this blog.
