Have you checked who you are recently?

I’m sure we’ve all Googled our own name at some point and been interested or surprised to see what comes up in the search results. When it comes to your academic profile online, it is always a good idea to keep an eye on which publications are being credited to you – the results can be equally surprising!

Check out your digital researcher identity

If you’ve published a research output in a book, conference proceedings or journal, you may have an online identity that you are not aware of, and it might not be accurate. Why not do a quick identity health check in the Scopus database to make sure your details are correct?

What is Scopus?

The Scopus database collates outputs from thousands of journals and other publications and tracks citations to them. The database is useful for searching for articles relevant to your research, helping you to decide where to publish, identifying potential collaborators and discovering who is citing your work and how often it is being cited.

In the Scopus database, outputs from the same author are aggregated into a Scopus Author ID. As the information is collated automatically, you may find that the wrong articles have been attributed to you or that your articles have been split across several duplicate IDs.

Why is it important to check your author ID?

If your details in Scopus are incorrect, your publication record will be incomplete and possibly confusing to those interested in reading or citing your research. It is also worth checking your Scopus Author ID because the bibliometric data used in the University of Reading’s Research Outputs Support System (ROSS) dashboards are taken from the Scopus database. If your details are wrong, unreliable data will be pulled through into the University’s reporting process.

How do I find my author ID?

Screenshot of the Scopus Author Search

Find yourself in the Scopus database by using the ‘Author Search’ tab

To check your ID, visit the Scopus website www.scopus.com (available when using the University’s IP range). Choose the ‘Author Search’ tab from the Search menu and enter your details. If you’ve worked at several institutions it is best to leave the affiliation information blank. When the search results appear, it is worth choosing the ‘Show profile matches with one document’ option as publications can sometimes fail to aggregate under one author ID.
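If you are comfortable with a little scripting, Elsevier also exposes Scopus data through its developer APIs, so you can look up candidate Author IDs programmatically as well as through the website. The sketch below is illustrative only: it assumes you have registered for an API key at dev.elsevier.com, the surname and given name in the query are placeholders, and the endpoint, query syntax and response fields should be checked against Elsevier’s current Scopus Author Search API documentation.

```python
# Minimal sketch: list candidate Scopus Author IDs for a name so that
# duplicates or wrongly attributed profiles are easy to spot.
# Assumes an Elsevier API key (dev.elsevier.com); field names follow the
# Scopus Author Search API documentation and may change.
import requests

API_KEY = "your-elsevier-api-key"  # placeholder

response = requests.get(
    "https://api.elsevier.com/content/search/author",
    params={"query": "AUTHLAST(Smith) AND AUTHFIRST(Jane)"},  # placeholder name
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
)
response.raise_for_status()

for entry in response.json()["search-results"].get("entry", []):
    name = entry.get("preferred-name", {})
    affiliation = entry.get("affiliation-current", {})
    print(
        entry.get("dc:identifier"),        # e.g. "AUTHOR_ID:1234567"
        name.get("surname"), name.get("given-name"),
        affiliation.get("affiliation-name"),
        entry.get("document-count"),
    )
```

If several rows come back for your own name, that is exactly the situation the merge request described below is designed to fix.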

If your details are right

Great! Take a look at how your papers and articles are being cited, view your h-graph and analyse your author output. You might want to link your Scopus Author ID to your ORCID iD if you have one – check out our ORCID library guide for help on how to do this. Revisit your Scopus Author ID from time to time to make sure that new publications are being added.

If your details are wrong

If you have several Author IDs or there are publications in your profile that do not belong to you, you can ask Scopus to merge or correct them. You can do this using the ‘Request author detail corrections’ link (or contact Karen Rowlett, the University’s Research Publications Adviser, who can do this on your behalf). It is also worth checking that any missing publications have not been attributed to another researcher with a similar name. Corrections are usually completed within two weeks.

Screenshot from Scopus database

Select the duplicate profiles and choose ‘Request to Merge Authors’ to correct duplicate author IDs

Missing Publications

Publications might be missing from your Author ID either because they have been attributed to someone else or because Scopus does not cover the journal/book/conference in which your article appeared. You can check the Scopus coverage by consulting their guide or downloading the Book and Journal source lists.

Wrong affiliation

If you have recently moved from another institution, it may take a while for your new affiliation to be reflected in your Scopus Author ID. You have to publish three outputs with your new address before it will change. If your affiliation is showing as somewhere that you’ve never worked, you can request a correction.

Help and support

If you are not sure how to check your Scopus Author ID or need help in sorting out your profile, please contact Karen Rowlett, Research Publications Adviser. There are also some regular sessions running through People Development on Managing your digital researcher profile and ORCID. You can check when the next course is running by searching the People Development course database.

 

CentAUR statistics for December 2016

infographic for CentAUR repository statistics

A selection of statistics for the CentAUR repository

Towards Open Research: a new report from the Wellcome Trust

October this year saw the publication of Towards Open Research: Practices, experiences, barriers and opportunities, a study of researchers’ attitudes and behaviours in relation to open research. The study was commissioned by the Wellcome Trust and is based on surveys and focus groups of researchers funded by the Trust and by the Economic and Social Research Council (ESRC).

The aim of the study was to identify practical actions the Wellcome Trust can implement to remove barriers and maximise the opportunities for practising open science. Under the aegis of open science, the study considered Open Access publishing, data sharing and re-use, and code-sharing and re-use.

Both the Wellcome Trust and ESRC are strong proponents of open research. The Wellcome Trust has long been at the forefront of policy initiatives to advance an open science agenda through Open Access and open data practices. It mandates both Open Access publication of funded research outputs and data sharing to the fullest achievable extent, and supports these activities through its research grants. The report commissioned by the Trust coincides with the launch of Wellcome Open Research, a platform on which its funded researchers can rapidly publish any results they wish to share, including study protocols and null and negative results, as well as articles and data, which are all made available without editorial intervention for open peer review.

ESRC similarly mandates both Open Access publication and data sharing wherever possible; Open Access publishing for its funded researchers is supported through the RCUK block grant to institutions, and it manages the UK Data Archive, the UK’s largest collection of social, economic and population data and a service that its funded researchers can use to preserve and share the data arising from their research. The UK Data Archive has been in existence since 1967, and ESRC was one of the earliest among funders to adopt  a data sharing policy, in the mid-1990s, making it an informed and progressive force in the promotion of open and accessible data, with particular expertise in the management of controlled access to disclosive and sensitive data. It also offers a wealth of invaluable supporting information and resources for research data management via the UK Data Service website, covering topics such as participant consent for data sharing, anonymisation of datasets, and dealing with rights in data.

General findings

The study finds that open research is widely practised and on the increase, with researchers not only using Open Access and data sharing, but engaging in growing numbers in other emerging open practices, such as code sharing and open peer review, and experiencing the benefits of increased citation rates, and accelerated communication of a broader range of research outputs beyond the traditional peer-reviewed journal paper.

Open Access

  • Over 70% of Wellcome-funded papers are published as Open Access and a third of researchers publish all their papers as Open Access.
  • Researchers appreciate the value of Wellcome’s new Open Research platform in enabling the rapid dissemination of research materials and results, supporting data visualisation in papers, and providing a forum for open and constructive peer review of methods and findings.

Data sharing

  • Half of researchers make their data available for use by others, largely via institutional and community data repositories.
  • The drivers for researchers to share data largely come from funder and journal requirements; although researchers generally accept the case for sharing data, very few report any direct benefits from doing so, and many are still concerned about possible misuse or misinterpretation of their data and loss of first-use privilege to competitors, and are deterred by the effort required to prepare and deposit data.
  • Early career researchers in particular may show reluctance to share their data for fear of losing future publication and career progression opportunities by releasing their research capital. Their supervisors and seniors have a role to play in encouraging good practice.
  • On the plus side, very few researchers have had negative experiences from sharing their data, and many of the fears reported by researchers are largely unfounded.
  • The key things that those who fund and support researchers can do to encourage sharing of data are: provide funding to cover the cost of data preparation, and create incentive systems that reward and recognise researchers for sharing data.
  • Approximately three quarters of researchers have re-used research data, mostly to provide background information and context to research, for research validation, and to help develop methodologies for new analyses.
  • Data sharing can be complex and effort-intensive: researchers need to be given training and guidance, to have easy-to-use data repository services, and to be supported in producing and curating high-quality data and addressing challenges such as making disclosive and confidential data safe for sharing, or sharing large resources, e.g. imaging data.

Code sharing

  • Code-sharing is less well-established than other open practices, at least in part because fewer researchers create software code in their research: two-fifths of researchers do so (mostly those using surveys, secondary analysis and simulation), but less than half of them make it available for access and re-use by others.
  • A significant amount of code use may be hidden and the opportunity for code sharing not realised: some researchers may think of research code in terms of software outputs, and not consider processing scripts (such as Stata .do files and batch files) within this definition, even though they may be essential to the replication and validation of research results.
  • Where code-sharing takes place, it is driven far less than data sharing by the requirements of funders and journals, and more by a desire to engage in good research practice and to enable other researchers to collaborate and contribute to the work.
  • There are no significant barriers to code-sharing, although lack of skills and funding and rapid changes in software can disincline researchers to invest the time and effort, especially where code sharing is not widely funded, incentivised or rewarded by funders and research organisations, and where norms of code citation and acknowledgement of re-use are not well-established. Code, unlike data in many cases, must be actively maintained and supported once it has been distributed and established a user community, and this demands both commitment from code developers and significant resources. This needs to be recognised by funders and properly supported.
  • Code re-use is currently limited, with just over a third of researchers having used existing code in their research.
  • Many researchers pick up software skills ad hoc, and lack formalised training or knowledge of best practice in code development and sharing. Software skills acquisition needs to be better integrated into standard researcher training models, and researchers should be able to draw on the support offered, e.g. through the Software Sustainability Institute’s Software Carpentry activities.
  • While many code repositories are available for hosting and development, such as GitHub and Bitbucket, it is not clear that these are reliable long-term preservation solutions, and there may be a need for better provision of repositories dedicated to preservation and maintenance of research software. Wellcome is exploring the possibility of setting up such a code repository.

Open research in general

The overall message for funders is that they need to incentivise and reward not just the production of original and interesting research, but the whole ensemble of ‘open’ practices that together ensure the quality, accessibility and usability of all research. The communication of research outputs is not auxiliary or incidental to research, but at its very heart. However inherently valuable a piece of research may be, its value is diminished if it is not communicated at all, because it is a negative or null result or because the researcher sees no benefit in communicating it; if it is communicated but accessible only to a few selected by ability to pay; if its methods are not transparent and open to replication and critique; or if the tools it used are unavailable to others. If researchers are to be incentivised to engage with Open Research practices, then those who fund and reward research need to ensure the money is in the right places for them.

Data from the surveys of 583 Wellcome Trust-funded researchers and 259 ESRC-funded researchers, and from the five focus group discussions, are available from the UK Data Archive.

 

CentAUR statistics for November 2016

Key statistics from the CentAUR repository

An open data snapshot

The State of Open Data, a selection of articles and analyses based on a survey of over 2,000 researchers, was published in October this year by Digital Science, owners of the popular figshare research outputs repository.

Key survey findings are summarised in this infographic:

Digital Science, The State of Open Data Infographic

Three findings stand out for me:

1. There is broad interest in and enthusiasm for open data practices among researchers across disciplines and career stages and throughout the world, and many researchers (approximately 75% of those surveyed) have experience of sharing data and place value on the credit they receive for doing so.

It’s always good to hear this: the means of sharing data are many and various and it is not easy to get an overview of the totality of data sharing practices.

But I wonder if the picture is quite as rosy as the Digital Science headline suggests, for two reasons.

First, neither the report nor the underlying data explain how the survey sample was obtained or provide any evidence of how representative it is. The survey dataset does not contain a protocol or any explanation of sampling methodology. The survey report merely states: ‘Figshare has garnered many insights from its users in the past, from formal surveys and informal feedback […] Working with Springer Nature and Digital Science, we surveyed researchers […] over 2,000 researchers responded to the survey, spread across continents and disciplines, from all types of institution and researchers at different career stages’ (p. 12), which would suggest (although it is not clear) that researchers were selected from the companies’ contact lists. Given that figshare is a data sharing service and Springer Nature has a strong data policy for its journals, one might expect survey respondents to be more active in data sharing than the global average. One might compare the 76% of researchers in this survey who shared data with the 51% of Wellcome Trust- and ESRC-funded researchers recently surveyed who had made data available to the research community by one means or another (see survey report, p. 27).

Second, the report does not define data sharing with sufficient precision. The survey asked respondents how often they made data ‘freely available’. It’s not clear how ‘freely available’ was defined in the survey, if at all. A survey question about the tools researchers used to share data reported responses in the following categories: Email; Google Drive; Dropbox; Figshare; GitHub; Other. This seems a curious list to me, as it identifies tools that I would associate primarily with restricted sharing (e.g. within a project team or among selected peers), such as email and cloud file-sharing services, and does not specify the key categories of open data sharing vehicles: data repositories and journal platforms (which may publish data as supplementary information alongside articles). Only one data repository is identified: figshare, which is owned by… Digital Science, the publisher of this report. Presumably, all the other data repositories in the world are subsumed under the Other category. I would have liked the report to give a clearer definition of what survey respondents were given to understand ‘freely available’ meant, and to show whether their responses fully justify the claim that ‘approximately three quarters of researchers have made research data openly available at some point’ (my emphasis). I would again make a comparison with the Wellcome Trust report (see above), which arrived at its 51% figure by asking researchers if they had made data ‘available to the research community’ (my emphasis) and specifically excluded informal sharing or sharing on request – since if you have to ask for the data, it clearly isn’t open or ‘freely available’.

I believe the rate of open data sharing is in fact considerably lower than the Digital Science report suggests.

2. Where open data practices are adopted, there are likely to be positive correlations with overall research quality as well as with good practice in management and documentation of data.

This is something I find interesting, as it indicates that the basics of good research practice, such as internal discussion and challenge of assumptions and methods, documentation of methods and values, and rigorous quality control in collection and processing, can be reinforced where it is known that data will be made publicly available and open to the same level of scrutiny as peer-reviewed papers. The result, ultimately, is findings that are higher in quality, contain fewer errors, and prove more reliable in the long term. It is an effect that has been reported in the literature, and one which I think merits greater emphasis as we seek to persuade our researchers of the benefit to them of being open with their data. For more light on this issue, see Wicherts JM, Bakker M, Molenaar D (2011) Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results. PLoS ONE 6(11): e26828. http://dx.doi.org/10.1371/journal.pone.0026828.

3. Researchers can be uncertain of the benefits of sharing data, may be unsure how to manage their data effectively or obtain resources for open data practices, and would welcome more support in these areas from their funders and institutions.

This is definitely the case in my experience: it can be hard to persuade researchers of the benefits of sharing their data when there is rarely a direct correlation between the effort invested and the return to the researcher in terms of recognition and reward. It has been said many times, but in spite of funders’, institutions’ and publishers’ data sharing policies, the systemic incentives to share are weak: this is why, of the nearly 191,000 research outputs submitted to the 2014 REF, only 68 – that is, 0.04% of the total – were Research datasets and databases (see this presentation from Ben Johnson of HEFCE and the REF research outputs submissions data).
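As a quick sanity check on that percentage, using the rounded total of 191,000 outputs:

\[
\frac{68}{191\,000} \approx 0.00036,\ \text{i.e. roughly } 0.04\%.
\]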

Professionals such as myself providing institutional services supporting research data management need to persuade researchers of the benefits to themselves, to scholarship and to society of sharing their research data; and we need to deliver services that meet their needs in intelligent and efficient ways.

But funders, policy-makers and research organisations also need to restructure the incentive frameworks that define how researchers progress in their careers, and receive recognition and reward for the communication of research. We need a much broader focus in the academic reward systems beyond the published peer-reviewed paper reporting positive, novel and exciting results, both to the papers reporting the less headline-grabbing outcomes (the negative, the null and the apparently nugatory), and to other kinds of output, including the datasets that can serve to validate research results or establish a foundation for future research.

What’s new in SciVal? – Media mentions

A new release of SciVal, the research intelligence tool from Elsevier, was launched in mid-November.

The tool now features two additional sources of information that can be mined: Awarded Grants and Mass Media Mentions. This post covers the Mass Media Mentions and how you can make the best use of this data. An earlier post covered the Awarded Grants feature.

The societal impact of an institution can now be measured in SciVal via mass media mentions of its research outputs. SciVal tracks only English-language media sources at present, covering 39,000 online sources and 6,000 print sources. The data cover two full years plus the current year for online sources, and five full years plus the current year for print sources. Currently, most of the media sources being tracked are based in the USA, so this should be borne in mind when looking at the data for a particular institution.

Overview module

Mass media mentions in SciVal

Image from SciVal (redacted)

To look at mass media mentions for an institution, use the ‘Societal Impact’ tab in the Overview module in SciVal. Use the button to choose which kind of media type you are interested in. You can also use the subject filter to narrow down your search to a particular area of research.

Breakdown of media exposure

Image from SciVal (redacted)

You can also get a breakdown of the kinds of media sources that picked up the research and whether they were internationally recognised (e.g. the BBC) or local interest sources. Below is an example of a Media Exposure graph.

There is also a field-weighted graph that makes comparisons of institutions in the same country possible regardless of their subject areas (medical research is often picked up by the media more than other subjects).

Benchmarking against other institutions

Graph comparing different institutions

Use the Benchmarking module to compare media mentions between institutions in the same country (image from SciVal, redacted)

The data can also be displayed in a table format and downloaded as a PDF, image file or CSV file.

It is anticipated that links to the media mentions will be added in early 2017.

Further information

SciVal is available for all users at the University of Reading. You have to register for an account to use the tool. Access is only available when on campus (or using the VPN). For help and support with SciVal and to gain access to Reading University’s customised structures, contact the Research Publications Adviser.

CentAUR statistics for October 2016

Infographic featuring key statistics from the CentAUR repository

Key statistics from the University of Reading’s CentAUR repository

What’s new in SciVal? – Awarded Grants

A new release of SciVal, the research intelligence tool from Elsevier, was launched in mid-November.

The tool now features two additional sources of information that can be mined: Awarded Grants and Mass Media Mentions. This post covers the Awarded Grants feature and how you can make the best use of this data. A second post on Mass Media Mentions will follow.

Awarded Grants

The data on awarded grants comes from major funding organisations across the UK, USA and Australia. For the UK, information from the UK Research Councils and the Wellcome Trust is available. To analyse the data, select an institution from the overview module and then click on the ‘Awarded Grants’ tab.

How to access the awarded grants information in SciVal

Access grant information via the Awarded Grants tab (images from SciVal)

The graphs that appear will show the award volume in US$ and the number of awards. You can also probe deeper into the awards to filter by subject area or by funding body and view the data as a table, bar chart or pie chart.

Table of awards granted

Details of awards from each funding body for a given institution are available (image taken from SciVal)

Pie chart of awards per subject area

Plot the awards by amount received by an institution per subject area (image taken from SciVal)

Benchmarking

It is also possible to perform some benchmarking to make comparisons between the grant funding awarded to different institutions. Use the Benchmarking tab to do this and select the institutions that you are interested in from the Institutions and Groups subsection of the menu on the left hand side of the screen.

Benchmarking of Grant Funding

The benchmarking tab allows you to compare funding between institutions and filter down by subject area (image taken from SciVal)

Collaborations with other institutions

If you are using the Collaboration feature in SciVal to find out more about current or potential collaborators, you can also find out how much funding and how many grants they have been awarded. It is possible to filter down by subject area, for example:

Filtering down by subject area gives the amount and number of awards given

Filtering down by subject area gives the amount and number of awards given (Image from SciVal with some information redacted)

More information on how the awarded grant information is harvested is available in the SciVal Online Manual.

SciVal is available for all users at the University of Reading. You have to register for an account to use the tool. Access is only available when on campus (or using the VPN). For help and support with SciVal and to gain access to Reading University’s customised structures, contact the Research Publications Adviser.

Our Orchid for an ORCID winner

As part of our Open Access Week activities at the University of Reading, we held an ‘Orchid for an ORCiD’ competition for our researchers, research students and research support staff. We had 59 entries and the lucky winner of our orchid and a £20 book token was Dr Paul Williams of the Department of Meteorology.

When I went to his office to present the prizes, I talked to him about why he signed up for an ORCID Identifier and how he uses it in his research activities.

Photo of Paul Williams with his orchid prize

Paul Williams with his orchid prize and his ORCID iD page open on his computer

How long have you had your ORCID iD?
I signed up for one in December 2015.

Why did you decide to register for an ORCID iD?
There were two factors which influenced my decision to sign up. A number of major publishers, including the Royal Society and the American Geophysical Union, announced that it was going to be obligatory to have an ORCID iD in order to publish in their journals from 2016. I’m an editor for an AGU journal, Geophysical Research Letters, so I thought I better register for one if I expected all the authors to do the same. Another important reason was to stop my publications being mixed up with papers from other authors with a similar or identical name – my name is not unusual and there’s even another UK Paul Williams publishing in the same field.

List of ORCID iDs registered to researchers called Paul Williams

If you search the ORCID registry, you’ll find a lot of researchers called Paul Williams!

How easy was it to add information to your ORCID record?
It was surprisingly easy to populate my ORCID record. The education and employment sections required a small amount of manual input, but the funding and works sections were easy to import from other sources.

How many of your funding awards did you manage to import via the ‘search and link’ feature?
The search feature is very good and all my major grants from the Royal Society and the Natural Environment Research Council were easy to import. The UberResearch wizard was a quick way to pull in my research awards.

Have you entered and used your ORCID ID on any other websites – for example, a publisher’s manuscript submission system, when applying for funding or Researchfish?
I now input my ORCID iD when submitting manuscripts for publication, if the publisher’s manuscript submission system has this enabled.  I have also linked my ORCID iD with my ResearcherID account, so that information on new publications only has to be entered once.

Do you have any plans to add to your ORCID record, for example, linking to your Scopus Author ID or adding keywords?
I have just added some keywords to my ORCID record today!  I also plan to link to my Scopus Author ID when I get the chance.

Did you know that you can use your ORCID iD to track attention to your outputs using Altmetric?

I really enjoyed using my ORCID iD to track attention to my outputs using Altmetric, now that the University has a subscription. It was easy to log in (no password needed as you can continue as a guest!) and it was straightforward to search the full database using my ORCID iD. As I am an active Twitter user (@DrPaulDWilliams), I will be using this capability in future to find, reply to, retweet, and favourite tweets directly from within Altmetric.

Altmetric attention for an ORCID ID

You can search the Altmetric database using your ORCID iD to track attention to your research outputs

What’s your ORCID iD?
My ORCID iD is 0000-0002-9713-9820.

If you’re inspired to sign up for an ORCID iD by Paul’s story, it only takes a few minutes. Your ORCID iD belongs to you and not to your institution. Sign up here or find out more from our handy LibGuide.

What is an ORCID?

Have you been asked for your ORCID ID yet? Increasingly, research funders, employers and publishers are asking their researchers to sign up for an ORCID ID.

 

Picture of two bee orchids taken on Whiteknights campus

Bee orchids found on Reading University’s Whiteknights campus

What is an ORCID ID?

An ORCID® identifier or ORCID iD is a 16-character identifier that can be used to clearly identify you – and not another researcher with a similar name – as the author/owner of an academic output or activity.

Your name is unlikely to be unique and you may find that your research outputs are getting confused with those of another researcher with a similar name.

An ORCID ID can be particularly useful for researchers who have published using several different variants of their name and initials, or who have published under different names (for example, if you’ve changed your name through marriage, civil partnership or divorce, or to suit your gender better).

Example of ORCID record with two distinct surnames

ORCID IDs can bring together different names that you’ve published under

What is it for?

The idea behind ORCID identifiers is that they should be a stable link between all your research activities – grant applications, manuscript submissions, publications, entries in institutional repositories and your peer review activity.

Your ORCID ID belongs to you and you control what information is added to your ID. You can choose to use your ORCID profile as a mini-CV listing all your publications, work history and funding or you can just use the number to identify you and your research outputs.

Why do I need one?

Many publishers and funding organisations are insisting that researchers supply an ORCID ID when submitting a manuscript or peer review or applying for grants. The list is likely to grow in the future. Here are a few examples:

  • Nature journals
  • PLOS
  • eLife
  • Science journals
  • IEEE publications
  • Hindawi publications
  • Wellcome Trust
  • RCUK

Once you have an ORCID ID, make sure you add it to your registration details on manuscript submission sites and other sites such as ResearchFish.

Who is behind ORCIDs?

ORCID is a non-profit organisation that is governed by a board of directors with wide stakeholder representation. Member organisations such as funders, publishers and institutions pay a membership fee but signing up for an ORCID is free.

How much will it cost?

Registration for an ORCID ID is free and maintaining this free status is one of the core principles of the ORCID organisation. To sign up, you will need to agree to ORCID’s Privacy Policy and Terms of Use. You need not have an official affiliation and there is no set of requirements to qualify as a researcher. Adding data to your record, changing your record, sharing your data, and searching the registry are also free.

What does an ORCID ID look like?

Your ORCID ID is a 16-character number that identifies you, and not someone else with the same name.

An ORCID ID is a 16-character identifier that is associated with your name and scholarly outputs

You can see a (fictitious) example of an ORCID record for Josiah Carberry, an expert on Cracked Pots, here: http://orcid.org/0000-0002-1825-0097.
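Incidentally, the final character of an ORCID iD is a checksum: the first fifteen digits determine it via the ISO 7064 MOD 11-2 algorithm described in ORCID’s support documentation, which is why some iDs end in an ‘X’. Here is a minimal Python sketch (not an official ORCID tool) that verifies the check character of the Carberry example above:

```python
# Minimal sketch: verify the ORCID iD check character using the
# ISO 7064 MOD 11-2 algorithm described in ORCID's documentation.
def orcid_checksum_ok(orcid_id: str) -> bool:
    digits = orcid_id.replace("-", "")
    base, check = digits[:-1], digits[-1]
    total = 0
    for d in base:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return check.upper() == expected

print(orcid_checksum_ok("0000-0002-1825-0097"))  # True for Josiah Carberry's iD
```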

How do I register for an ORCID?

It is very easy to sign up for an ORCID ID – registering for your ORCID identifier takes about 30 seconds.

You can then add as much personal information as you want to your record. The minimum recommendation is that you add the country that you are working in, some keywords about your research area and possibly a link to your university webpage. It is always a good idea to add an alternative email address just in case you ever have difficulty accessing your account.

You can add much more information about your research outputs and use your ORCID like a mini-CV.
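If you would like to see exactly what a public ORCID record exposes, the registry also offers a public API that returns the record as JSON without any login. The sketch below is a rough illustration: the v3.0 path and field names reflect ORCID’s public API documentation at the time of writing and should be checked against the current documentation before use.

```python
# Minimal sketch: fetch the public portion of an ORCID record as JSON and
# print the name and the number of works listed. Uses ORCID's public API;
# paths and field names should be checked against the current documentation.
import requests

orcid_id = "0000-0002-1825-0097"  # the fictitious Josiah Carberry record

response = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/record",
    headers={"Accept": "application/json"},
)
response.raise_for_status()
record = response.json()

name = record["person"]["name"]
works = record["activities-summary"]["works"]["group"]
print(name["given-names"]["value"], name["family-name"]["value"])
print(f"{len(works)} public work(s) listed on this record")
```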

Help and support

Take a look at our ORCID library guide for more help on how to sign up and populate an ORCID ID or contact the University’s Research Publications Adviser. The ORCID support centre is also full of useful information.

More information from ORCID

This short video shows how ORCID IDs can help researchers gain credit for all their scholarly activities.
