In recent years, artificial intelligence has become more than a technological novelty. It now plays a direct role in scientific research—analysing vast datasets, summarising literature, proposing hypotheses, simulating experiments, and supporting the writing of scientific texts. This development is often described with clichés such as “revolution”, “automation”, “replacing humans”. Yet research points to something more nuanced—and more important: AI does not replace the researcher; it changes what is required of them.
The initial dilemma of whether scientists should use AI at all has largely disappeared: almost everyone already uses it in some form. The question now is whether teams understand how AI works most effectively and where its limits lie, rather than treating it as a universal substitute for people. This is precisely where the need for a new kind of competence emerges: a set of skills that turn AI from a risky accelerator into a reliable partner in scientific thinking.
№1: Knowing how to assign scientific tasks to AI
One of the most underestimated—and most decisive—skills when working with artificial intelligence is the ability to formulate precise, scientifically meaningful tasks. Most disappointments with AI in a research context do not stem from weaknesses in the algorithm, but from unclear, overly general, or conceptually confused instructions.
The reason is simple: AI does not understand the scientific problem the way a human does. It has no intuition for significance, cannot distinguish the essential from the secondary, and cannot independently judge what is methodologically sound. It operates on formalised tasks that must be clearly bounded in time, scope, data, and objective.
In classical scientific work, the researcher often begins with a broad question: “What do we know about a given phenomenon?” or “How has this field developed?” For a person, this is a natural starting point. For AI, however, such a question is too indeterminate. The algorithm has no criterion for what to include, what to exclude, and what “important” means.

That is why the first step is translating the research question into a sequence of operationalised tasks. For example, instead of the abstract:
“What are the main trends in disinformation research?”
a workable formulation would be:
“Analyse 400 peer-reviewed articles published between 2016 and 2024 in the field of disinformation; extract the main thematic clusters through topic modelling; describe their frequency and how they change over time.”
In the second case, AI receives clarity on four key parameters: a corpus of texts, a time frame, an analytical method, and the expected output. This makes the task not only feasible, but also verifiable—a fundamental requirement for scientific work.
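To make the contrast concrete, here is a minimal sketch of how the operationalised version of the task could be executed with standard tools: topic modelling over a corpus of abstracts, with topic frequency tracked by year. The corpus, the number of topics, and the column names are hypothetical placeholders, not a prescribed method.

```python
# Minimal sketch: thematic clusters and their change over time via topic modelling.
# The tiny corpus below is a hypothetical stand-in for the 400 peer-reviewed
# articles named in the task; every value here is illustrative only.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = pd.DataFrame({
    "year": [2016, 2018, 2020, 2022, 2024, 2024],
    "abstract": [
        "social media bots spreading false political claims",
        "fact checking organisations and verification of viral claims",
        "health misinformation and vaccine rumours on social platforms",
        "deepfakes and synthetic media in election campaigns",
        "large language models generating persuasive false content",
        "platform moderation policies and regulation of false content",
    ],
})

# Bag-of-words representation of the abstracts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus["abstract"])

# Fit a small LDA model; the number of topics is a modelling choice the team must justify.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)  # rows: documents, columns: topic weights

# Frequency of each topic over time: average topic weight per publication year.
topic_cols = [f"topic_{i}" for i in range(doc_topic.shape[1])]
trends = pd.concat([corpus["year"], pd.DataFrame(doc_topic, columns=topic_cols)], axis=1)
print(trends.groupby("year").mean().round(2))

# Top words per topic, so the thematic clusters can be labelled by hand.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"topic_{i}:", ", ".join(top))
```

Even in this toy form, every choice (the vocabulary filtering, the number of topics, the aggregation by year) is visible and therefore open to challenge, which is exactly what the operationalised formulation is meant to achieve.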
Such clarity is especially important when AI is used to work with scientific literature. Many researchers expect algorithms to produce a review without specifying what type of review they mean. Yet a systematic review, a narrative review, and a meta-analysis are fundamentally different genres. If the task is not clearly defined, AI will generate a hybrid text that resembles everything and fully answers nothing.
For example:
“Review the research on AI in education”
almost inevitably leads to a generic summary with repetitive claims. In contrast, a task such as:
“Systematise empirical studies (2019–2024) that assess the effect of language models on learning in secondary education, grouping the results by methodology, educational context, and measured effects”
guides AI towards a far more structured and useful output.
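One low-tech way to make those parameters explicit before anything is sent to a model is simply to write them down as a structured specification. The sketch below is only an illustration of that habit; the field names, values, and rendered wording are assumptions, not a prescribed format or a specific tool’s API.

```python
# Sketch: turning a vague request into an operationalised task specification.
# The fields mirror the parameters discussed above: corpus, time frame, method,
# and expected output. All values here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ResearchTask:
    corpus: str
    time_frame: str
    method: str
    expected_output: str
    constraints: str

    def to_prompt(self) -> str:
        """Render the specification as a single explicit instruction."""
        return (
            f"Systematise {self.corpus} published {self.time_frame}. "
            f"Method: {self.method}. "
            f"Expected output: {self.expected_output}. "
            f"Constraints: {self.constraints}."
        )

task = ResearchTask(
    corpus="empirical studies on language models in secondary education",
    time_frame="between 2019 and 2024",
    method="group results by methodology, educational context, and measured effects",
    expected_output="a structured table plus a short narrative summary per group",
    constraints="peer-reviewed sources only; flag any study that cannot be verified",
)

print(task.to_prompt())
```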
Research emphasises that this skill has less to do with programming than with methodological culture. It requires the ability to think simultaneously as a scientist and as a designer of a research process: to anticipate what ambiguities may arise, what assumptions the algorithm will make, and what constraints must be explicitly set.
Interdisciplinary teams offer particularly telling examples. They often work with concepts that carry different meanings in different disciplines, for example “model”, “validity”, or “effect”. If these terms are not specified, AI will choose the dominant interpretation from its training data, which can distort results from the very beginning.
That is why the ability to assign scientific tasks to AI is not merely a technical competence. It is a litmus test for the maturity of the research itself. A clear task forces the team to specify what exactly it is seeking, why it is seeking it, and how it will assess the result. Directed in this way, AI follows instructions precisely and can also act as a mirror that reveals weaknesses in research thinking.
When this skill is missing, AI produces superficial, hard-to-verify, and methodologically blurred results. But when it is present, it becomes a powerful tool for accelerating analysis without undermining scientific rigour. This is where the real added value of artificial intelligence in science begins.

№2: Critical interpretation of AI outputs
One of the most dangerous illusions accompanying the entry of artificial intelligence into science is a sense of certainty. AI-generated outputs often look well structured, confidently phrased, and “scientific” in tone. That is precisely what makes them so treacherous. Algorithms can be wrong not chaotically, but systematically and persuasively—in a way that is difficult to detect without well-developed critical competence.
That is why the second key skill for any research team is the ability to read AI outputs with the same scepticism used for any other source, regardless of how authoritative they appear. AI is not an arbiter of truth. It is a statistical model that reproduces patterns from its data—including gaps, biases, and uneven representation of viewpoints.
In practice, this means the researcher must ask questions such as:
What is this conclusion based on? What alternative interpretations are possible? What is missing from this answer?
For example, when AI is used to summarise scientific literature, the phenomenon of “default consensus” is often observed. The algorithm extracts the most frequent claims and presents them as the dominant position, without accounting for scientific disputes, marginal yet significant studies, or methodological critiques. In this way, a complex debate can be reduced to a seemingly stable—but in reality simplified—picture.
A particularly telling example is generated reviews in which AI confidently states that studies show a positive effect without specifying effect size, methodological quality, or the presence of contradictory results. For an inexperienced reader, this sounds convincing; for the critical researcher, it is a warning signal.
Another common problem is so-called hallucinations—situations in which AI generates non-existent sources, misattributes authorship, or combines real facts in a way that produces a new but false claim. In a scientific context, this is not merely a technical error, but a potentially serious breach of research integrity if it is not recognised in time.
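A simple defence against this class of error is to verify every identifier in an AI-generated reference list against an external registry before the list travels any further. The sketch below uses the public Crossref REST API as one possible check; the DOIs are placeholders, and a real workflow would also compare titles and authors rather than checking existence alone.

```python
# Sketch: flagging possibly hallucinated references by checking DOIs against Crossref.
# The DOIs below are example placeholders; in practice they would be extracted
# from the AI-generated reference list. An existing record does not prove the
# citation is used correctly; it only filters out invented identifiers.
import requests

dois_to_check = [
    "10.1038/s41586-020-2649-2",     # example identifier to illustrate a hit
    "10.9999/obviously-fake.12345",  # example identifier to illustrate a miss
]

for doi in dois_to_check:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=15)
    if resp.status_code == 200:
        msg = resp.json().get("message", {})
        title = (msg.get("title") or ["<no title>"])[0]
        print(f"FOUND   {doi}: {title}")
    else:
        print(f"MISSING {doi}: no Crossref record (status {resp.status_code})")
```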

Critical interpretation also includes the ability to recognise algorithmic biases. Models are often trained on corpora dominated by English-language publications, Western scientific traditions, or specific methodological schools. As a result, AI may undervalue regional research, alternative approaches, or newer works that are not yet widely cited. If the researcher is unaware of this context, they risk treating algorithmic “silence” as a lack of scientific value.
It is important to stress that critical interpretation does not mean rejecting AI, but placing it in the right position within the research process. The most successful practices treat AI outputs as preliminary analysis or a draft—something that accelerates work but does not finalise it. They are validated through human expertise, comparison with other methods, and, where possible, replication.
In this sense, AI functions as an amplifier: it can amplify both good scientific thinking and bad. If the critical filter is missing, well-written but methodologically weak outputs can easily become part of scientific discourse without deserving that place.
That is why critical interpretation of AI outputs is not only a technical competence, but also an ethical and scientific responsibility. In the age of artificial intelligence, the ability to doubt intelligently turns out to be just as important as the ability to use new tools. This is where it is decided whether AI will support the production of knowledge or accelerate the spread of well-formulated but poorly verified claims.
№3: Data literacy in the age of AI
As impressive as modern artificial intelligence models may seem, their intelligence has one fundamental limitation: it is entirely dependent on the data they work with. In a scientific context, this means that the quality of AI outputs cannot exceed the quality of the research data on which they are based. Yet data work is often treated as secondary, technical, or even boring—a mistake that AI makes especially visible and especially costly.
Data literacy in the age of AI is not limited to the ability to collect large volumes of information. It includes understanding the origin of data, how it is collected, its limitations, and the context in which it can be meaningfully interpreted. Without this understanding, AI can produce outputs that are statistically correct but scientifically misleading.
A typical example is the use of unclean or heterogeneous data. When a corpus mixes different definitions, measurement instruments, or time periods, AI rarely signals that there is a problem. On the contrary—it often smooths inconsistencies and produces generalisations that create an illusion of coherence. For the researcher, however, this may mean the algorithm has combined incompatible things into a seemingly meaningful whole.
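Before a mixed corpus is handed to any model, even a few lines of descriptive checking can surface this kind of heterogeneity. The sketch below assumes a hypothetical pooled dataset with a source, a measurement instrument, and a collection year; the column names and values are illustrative only.

```python
# Sketch: surfacing heterogeneity in a pooled dataset before any AI analysis.
# The data and column names are hypothetical; the point is that incompatible
# instruments and uneven time coverage should become visible, not smoothed over.
import pandas as pd

pooled = pd.DataFrame({
    "source":     ["survey_A", "survey_A", "survey_B", "survey_B", "registry_C"],
    "instrument": ["scale_v1", "scale_v1", "scale_v2", "scale_v2", "self_report"],
    "year":       [2017, 2018, 2021, 2022, 2023],
    "score":      [3.4, 3.9, 61.0, 58.5, 2.1],   # note the very different ranges
})

# Which measurement instruments does each source use?
print(pooled.groupby("source")["instrument"].unique())

# Do the sources cover comparable time periods?
print(pooled.groupby("source")["year"].agg(["min", "max", "count"]))

# Are the score distributions even on the same scale?
print(pooled.groupby("instrument")["score"].describe()[["mean", "min", "max"]])
```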

The issue of data annotation and categorisation is particularly sensitive. In many research projects, categories are not neutral—they reflect theoretical assumptions, cultural contexts, and research choices. When such categories are fed to AI without sufficient clarity, the model treats them as objective and natural, reproducing and amplifying the initial assumptions. This is particularly problematic in the social sciences, education, and medicine, where definitions are often the subject of scientific debate.
Data literacy also includes the ability to recognise missing data—not only as a technical problem, but as a research fact. What is absent from a dataset is often just as important as what is present. AI, however, has no intuition for such absences. If certain groups, regions, or phenomena are underrepresented, the model will not question it—it will simply ignore them.
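The same habit applies to absence: a quick count of what is and is not in the dataset turns underrepresentation into a documented fact rather than a silent omission. The sketch below is again purely illustrative, with hypothetical region labels and an arbitrary threshold the team would have to justify for itself.

```python
# Sketch: making underrepresentation and missing labels explicit.
# Hypothetical records with a region label; the 15% threshold is an arbitrary
# illustration, not a recommended cut-off.
import pandas as pd

records = pd.DataFrame({
    "region": ["EU", "EU", "EU", "US", "US", "US", "US", "Africa", None, None],
    "outcome": [1, 0, 1, 1, 1, 0, 1, 0, 1, 0],
})

share = records["region"].value_counts(normalize=True, dropna=False)
print(share.round(2))

underrepresented = share[share < 0.15].index.tolist()
missing_labels = records["region"].isna().mean()
print("Groups below the 15% threshold:", underrepresented)
print(f"Records with no region label at all: {missing_labels:.0%}")
```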
In interdisciplinary research, this problem becomes even more acute. Data collected for different purposes and with different methodologies is often combined in a single analysis. For humans, these differences may be obvious; for AI, they are reduced to numerical or textual formats. Without active intervention from the researcher, there is a risk the algorithm will treat non-comparable data as homogeneous, leading to methodologically unstable conclusions.
Another key aspect is data documentation—provenance, transformations, assumptions. In traditional science, this is often seen as a bureaucratic obligation. In the age of AI, however, documentation becomes a critical condition for reproducibility and trust. When a research team cannot clearly trace how a given result was obtained, the role of AI further blurs responsibility.
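Documentation of this kind does not have to be heavyweight. Even a small, machine-readable provenance record kept next to the dataset, as in the hedged sketch below, makes it much easier to reconstruct later how an AI-assisted result was produced. The fields shown are only one possible minimal set; richer schemas exist and may be required in regulated domains.

```python
# Sketch: a minimal, machine-readable provenance record kept alongside a dataset.
# Field names and values are illustrative assumptions; the goal is that origin,
# transformations, and known gaps are written down rather than remembered.
import json
from datetime import date

provenance = {
    "dataset": "disinformation_articles_2016_2024",        # hypothetical dataset name
    "collected_from": "publisher APIs and manual export",   # origin of the raw data
    "collection_period": "2016-01-01 to 2024-12-31",
    "transformations": [
        "removed duplicate DOIs",
        "normalised journal names",
        "excluded non-English abstracts",                   # a consequential assumption
    ],
    "known_gaps": "regional journals without DOIs are underrepresented",
    "ai_tools_used": ["topic modelling for clustering", "LLM-assisted screening"],
    "responsible_person": "to be filled in by the team",
    "last_updated": date.today().isoformat(),
}

with open("provenance.json", "w", encoding="utf-8") as f:
    json.dump(provenance, f, indent=2, ensure_ascii=False)
```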
It is important to stress that AI does not remove the need for classical scientific discipline in working with data; it sharpens it. Algorithms do not fix bad data; they turn it into faster and larger-scale mistakes. Data literacy thus becomes one of the most important protective skills against pseudo-precision and false confidence, and a central element of scientific competence in the age of artificial intelligence.

№4: Ethical and regulatory sensitivity
As AI enters scientific research, the boundary between a technical decision and a moral choice becomes increasingly blurred. What until recently looked like a purely instrumental question (can we automate this stage of the research?) now inevitably leads to another, harder one: should we? This is where ethical and regulatory sensitivity emerges as a key skill for every modern research team.
AI is not a neutral intermediary. It embodies values, assumptions, and priorities embedded both in the data and in how it is used. When algorithms are involved in analysing sensitive information, evaluating people, or interpreting social processes, every technical decision can have consequences beyond the scientific sphere. That is why ethics can no longer be treated as an external constraint or a formal requirement, but as an integral part of scientific validity.
One of the clearest examples is the question of authorship and responsibility. When AI participates in generating text, analysis, or hypotheses, who is responsible for the final result? Scientific consensus increasingly converges on the position that responsibility remains entirely human, but this requires transparency. Research teams must clearly declare how and where AI has been used, so as not to create the false impression that the work is the product of unaided human authorship.
Another central ethical issue concerns bias and discrimination. Many models are trained on data that reflects existing inequalities—geographical, cultural, linguistic, or social. If such models are used without critical reflection, they can not only reproduce but also legitimise such inequalities in scientific discourse. In this sense, ethical sensitivity includes the ability to recognise situations in which algorithmic “objectivity” conceals structural biases.
The regulatory aspect complements, but does not replace, ethics. European and international frameworks for AI use increasingly define researchers’ obligations, especially in areas such as healthcare, education, and social policy. These regulations are not mere legal formalities. They reflect a public expectation that science should be not only effective, but also responsible. A research team that does not know or disregards these frameworks risks legal consequences as well as a loss of trust.
Ethical and regulatory sensitivity also appears in more subtle decisions—for example, whether to use AI to assess individual behaviour, how to balance automation with human judgement, or how to communicate uncertainties in results. In many cases, the right choice is not obvious and requires collective discussion rather than an individual decision.
It is important to stress that ethics is not a brake on scientific progress. On the contrary, it is a mechanism for its sustainability. Studies show that projects that integrate ethical considerations at the design stage are more reliable, easier to reproduce, and better received by the scientific community and society.
In the age of artificial intelligence, scientific responsibility does not diminish—it becomes more complex. That is why it is also important to have the ability to navigate this complexity consciously, without reducing ethics to a formal requirement or an afterthought. Where this sensitivity is lacking, AI can quickly turn from a scientific tool into a source of legitimised errors and social tension. Where it is present, the technology becomes a means for more responsible and more mature science.

№5: Effective collaboration between humans and AI
The final key skill for working with artificial intelligence is often perceived as the most abstract, but in practice it proves to be the most decisive. While the previous skills concern specific aspects of methodology, data, and ethics, effective collaboration between humans and AI encompasses how the research process itself is organised. This is where it is decided whether AI will be used as an intelligent optimiser of human expertise or will become a source of chaos, mistrust, and superficial results.
Research shows that AI produces its best results not when it works independently, but when it is embedded in a clearly structured cycle of interaction with people. This means its role must be defined in advance: where it accelerates analysis, where it offers options, and where it stops so that human judgement can be brought in. Without such clarity, the algorithm easily begins to be used beyond its original purpose—for example, as a substitute for expert evaluation rather than a complement to it.
A common problem in research teams is the uneven distribution of AI knowledge. When only one person understands the technology and everyone else treats it as a black box, a dangerous asymmetry emerges. Decisions begin to be made based on outputs that few can critically assess. Effective collaboration requires shared literacy—not everyone must be a specialist, but everyone must understand what AI can and cannot do.
In well-functioning teams, AI is usually used as a partner in the early stages of research—for generating ideas, structuring possible approaches, or preliminary analysis. Then human expertise takes the leading role: it verifies, filters, corrects, and interprets. This iterative process—human → AI → human—proves far more productive than one-way use of algorithms.
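What that cycle can look like in practice is sketched below as deliberately simple structure: an AI-generated draft is never recorded as a result until a named person has reviewed it and made an explicit decision. The `generate_draft` function, the reviewer name, and the notes are hypothetical placeholders, not a specific tool or API.

```python
# Sketch of a human -> AI -> human cycle: the model proposes, a named person decides.
# `generate_draft` is a hypothetical stand-in for whatever model or tool is used;
# the point is the mandatory review step, not the call itself.
from dataclasses import dataclass, field

@dataclass
class DraftResult:
    task: str
    content: str
    produced_by: str = "AI"
    reviewed_by: str | None = None
    accepted: bool = False
    review_notes: list[str] = field(default_factory=list)

def generate_draft(task: str) -> DraftResult:
    # Placeholder for a real model call; returns an unreviewed draft by design.
    return DraftResult(task=task, content=f"Draft analysis for: {task}")

def human_review(draft: DraftResult, reviewer: str, notes: list[str], accept: bool) -> DraftResult:
    # Nothing becomes a "result" without a named reviewer and an explicit decision.
    draft.reviewed_by = reviewer
    draft.review_notes = notes
    draft.accepted = accept
    return draft

draft = generate_draft("preliminary clustering of the 2016-2024 corpus")
final = human_review(
    draft,
    reviewer="reviewer name here",  # hypothetical placeholder
    notes=["check cluster 3 against the excluded regional journals"],
    accept=False,
)
print(final)
```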
It is also important to realise that AI influences not only results, but also team dynamics. When algorithms take over routine or labour-intensive tasks, time is freed for conceptual work, but new questions arise about responsibility and control. Who makes the final decision? Who is responsible for errors? These questions cannot be left to individual judgement—they require clear team agreements.
Effective collaboration also means recognising AI’s limitations as a partner. Algorithms do not understand the significance of a discovery, cannot judge its social context, and carry no moral responsibility. They work best within frameworks set by people. When these frameworks are missing, there is a risk that AI begins to steer the direction of research not towards what is most scientifically grounded, but towards what is statistically most likely.
That is why the fifth skill also involves cultural change. It requires research teams to develop a new kind of professional reflection: discussing AI’s role, sharing good and bad practices, and building common standards. Seen this way, AI does not merely expand the existing toolkit; it rearranges the very logic of collaborative scientific work.
When this skill is well developed, artificial intelligence becomes a catalyst for collective intelligence. It accelerates processes, broadens perspectives, and supports more informed decision-making. When it is missing, the same technology can lead to fragmentation, overdependence, and loss of scientific responsibility. That is why effective collaboration between humans and AI is not an add-on to the other skills, but the framework within which they all gain meaning.

Conclusion: the new scientific literacy
These five skills for working with artificial intelligence outline a new set of competencies and a new form of scientific maturity. In the age of AI, the quality of research will depend less and less on whether a team uses algorithms, and more and more on whether it understands them.
Artificial intelligence can accelerate science, but it cannot make it more responsible, more critical, or more ethical. That remains a human task.
This new literacy is not an individual skill acquired through a few hours of training. It is a collective practice developed in teams, institutions, and scientific communities. It requires shared standards, open discussion, and readiness to adapt. That is why the most important question is no longer whether artificial intelligence will be part of scientific research—that is already a given. The question is what kind of science we will do with its help.
If AI is used as a substitute for thinking, it will lead to an inflation of results and an erosion of trust. If it is used as a partner in well-structured and ethically considered research processes, it can expand the boundaries of human knowledge. In this choice lies the true meaning of the new scientific literacy—not in mastering the technology for its own sake, but in the ability to subordinate it to the goals, values, and responsibility of science.
Key sources used
Scientific articles:
- HRMARS – Artificial Intelligence Literacy and Research Practices
https://hrmars.com/journals/papers/IJARPED/v14-i2/24959
- EDRAAK Journal – AI Skills and Academic Competence
https://peninsula-press.ae/Journals/index.php/EDRAAK/article/view/188
- ArXiv – Human–AI Collaboration and Research Workflows
https://arxiv.org/pdf/2412.12107.pdf
- IRJAEM – AI Integration in Research and Education
https://goldncloudpublications.com/index.php/irjaem/article/view/1235
- ArXiv – AI Literacy, Critical Reasoning and Scientific Practice
https://arxiv.org/abs/2503.05822
- ArXiv – Large Language Models and Scientific Knowledge Production
https://arxiv.org/pdf/2302.08157.pdf
- ArXiv – Data-Centric Risks in AI-Supported Research
http://arxiv.org/pdf/2404.04750.pdf
- PubMed Central – Ethical and Methodological Challenges of AI in Research
https://pmc.ncbi.nlm.nih.gov/articles/PMC11850149/
- PubMed Central – AI Bias, Transparency and Scientific Responsibility
https://pmc.ncbi.nlm.nih.gov/articles/PMC11336680/
- SAGE Journals – Responsible AI Use in Knowledge-Intensive Work
https://journals.sagepub.com/doi/10.1177/10464964251384643
- Academic Conferences – AI, Research Integrity and Governance
https://papers.academic-conferences.org/index.php/icair/article/view/4371
- GJABR – AI Adoption and Organizational Research Skills
https://fegulf.com/index.php/gjabr/article/view/107
- ArXiv – Scientific Reasoning in the Age of Generative AI
https://arxiv.org/abs/2504.14191
Policies, analyses, and educational frameworks:
- European Commission – AI Talent, Skills and Literacy
https://digital-strategy.ec.europa.eu/en/policies/ai-talent-skills-and-literacy
- Times Higher Education – AI Literacy as an Information Skill
https://www.timeshighereducation.com/campus/ai-literacy-information-skill
- Multiverse – Skills Intelligence Report 2025
https://www.multiverse.io/skills-intelligence-report-2025
- EURES (EU) – Five Ways AI Can Help Your Job Search
https://eures.europa.eu/five-ways-ai-can-help-your-job-search-2023-12-19_bg
Text: Radoslav Todorov
Images: canva.com

