What to expect
Nature Research Intelligence aims to provide you with unrivalled, visually rich insights that help you define your strategic initiatives and answer critical questions about research direction.
This report addresses the following questions (amongst others):
What are the emerging research areas for AI?
How is AI research related to the Sustainable Development Goals?
Where can funding be accessed for AI projects?
How might AI research be regulated in the future?
How are your peers using AI?
What are your AI collaboration opportunities?
About the report
In this report, the first section uncovers the wide-ranging landscape of AI - from the latest research and funding trends, to how AI is now permeating multiple industries, and the regulation being drafted in response across different geographies. A breadth of real-world examples of its use by corporations is embedded throughout this section.
The second section benchmarks your organisation's performance against your research peers, incorporating subject and research concept analysis, and providing strategic recommendations.
The third section offers a view of research collaboration, at a global level and for your organisation - unearthing your leading domestic and international research partnerships.
This AI report utilises research data drawn from an extensive pool of sources and measures outputs against a wealth of quantitative indicators.
In addition, the research analysis is presented across a multitude of views: by document type; by major AI field; by research concept; by funding organisation; by author affiliation; and many more.
The second and third sections are tailored for your research organisation, including your selection of up to five research peer organisations.
The report will be available in this interactive format, as well as in PDF format.
Customers have the option to purchase the full report with the addition of a Nature Navigator page, built specifically for this AI report. For more information about this product, please visit the Nature Navigator homepage: Nature Navigator | Nature Research Intelligence
According to analysis by the Swiss bank UBS, ChatGPT is the fastest-growing app of all time, attracting more than one million users in its first five days. The natural language processing tool allows you to have human-like conversations, answering questions and assisting with a vast array of tasks, with far-reaching implications for the future of the global labour force.
This report describes researchers' current use of ChatGPT and similar large language models (LLMs).
The recent deployment of new LLMs has sparked much excitement about the opportunities this technology can bring to enhance productivity and to answer questions that have previously been prohibitively difficult to solve.
However, there is no shortage of voices expressing concerns about how humans can work harmoniously with AI systems, and about the quality of the outputs they produce, which have the potential to compromise research integrity and spread misinformation.
In fact, governments are beginning to regulate: a draft European Union AI Act, voted on in June 2023, demands transparency and calls for an independent scientific body to certify generative AI, helping to ensure the technology does not damage science and public trust.
The AI Safety Summit, held in Bletchley Park in the United Kingdom, reached a “world-first agreement” by leading AI nations to work collaboratively to identify, evaluate and regulate the risks of AI and the potentially harmful capabilities of AI models.
Despite the progress made at the Summit, some criticisms were raised that issues such as the energy-intensive nature of AI, which will have an associated impact on the environment, and safety issues, such as deepfake technology, were not adequately addressed.

AI Safety Summit, November 2023
This report outlines the latest AI policy initiatives being discussed and the potential implications for the research world.
Clearly, a raft of new policy decisions will have to be made in this space, and a recent Nature article appraises how AI tools can be used to help science policy advisors, who are under pressure to produce policy summaries for civil servants and politicians, often within weeks or even days. This task involves sifting through millions of scientific papers and weighing up reports from advocacy groups - a daunting prospect.
In the future, AI systems should make such syntheses less time-consuming and free up the capacity of advisors, whilst helping policymakers make faster decisions. However, questions remain as to whether AI will undermine rigour or what safeguards will be in place to protect against AI errors that could affect public-policy decisions.
Whilst AI is one of the fastest-growing areas in scientific research, Nature's 2023 postdoc survey found that only around one-third of postdoc researchers felt that AI had changed their day-to-day work, although commentators recognise that this proportion is likely to change rapidly as the nascent technology becomes more established.
Refining text and troubleshooting code were, by far, the main use cases for AI chatbots among this cohort. Whilst tools like ChatGPT promise to alleviate administrative workloads for researchers, there is an acknowledgement that chatbots are not well equipped to perform certain scholarly work that requires deep thought and ingenuity.