https://biecoll.ub.uni-bielefeld.de/index.php/or/issue/feed
International Conference on Open Repositories : Proceedings (published 2010-12-31)
Contact: Susanne Riedel, publikationsdienste.ub@uni-bielefeld.de

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/50
A Comparative Analysis of Institutional Repository Software
Siddharth Kumar Singh, Michael Witt, Dorothea Salo
This proposal outlines the design of a comparative analysis of the four institutional repository software packages that were represented at the 4th International Conference on Open Repositories held in 2009 in Atlanta, Georgia: EPrints, DSpace, Fedora and Zentity (The 4th International Conference on Open Repositories website, https://or09.library.gatech.edu). The study includes 23 qualitative and quantitative measures taken from default installations of the four repositories on a benchmark machine with a predefined base collection. The repositories are also being assessed on the execution of four common workflows: consume, submit, accept, and batch. A panel of external reviewers provided feedback on the design of the study and its evaluative criteria, and input is currently being solicited from the developer and user communities of each repository in order to refine the criteria, measures, data collection methods, and analyses. The aim is to produce a holistic evaluation that will describe the state of the art in repository software packages in a comparative manner, similar in approach to Consumer Reports (Consumer Reports magazine website, http://www.consumerreports.org). The output of this study will be highly useful for repository developers, repository managers, and especially those who are selecting a repository for the first time. As members of these respective communities and the organizations who support them are increasingly collaborating (e.g., DuraSpace), this study will help identify the relative strengths and weaknesses of each repository to inform the "best-of-breed" in future solutions that may be developed. The study’s methods will be presented in a transparent manner with documentation to support their reproducibility by a third party.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/52
An Open-Source Digital Archiving System for Medical and Scientific Research
Julien Jomier, Adrien Bailly, Mikael Le Gall, Ricardo Avila
In this paper, we present MIDAS, an open-source web-based digital archiving system that handles large collections of scientific data. We created a web-based digital archiving repository based on open standards. The MIDAS repository is specifically tuned for medical and scientific datasets and provides a flexible data management facility, a search engine, and an online image viewer. MIDAS allows researchers to store, manage and share scientific datasets from the convenience of a web browser or through a generic programming interface, thereby facilitating the dissemination of valuable imaging datasets to research collaborators. The system is currently deployed at several research laboratories worldwide and has demonstrated its ability to streamline the full scientific processing workflow from data acquisition to analysis and reports.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/53
Archival description in OAI-ORE
Deborah Kaplan, Anne Sauer, Eliot Wilczek
This paper seeks to define a new method for representing and managing description of archival collections using OAI-ORE. This new method has two advantages. Firstly, it adapts traditional archival description methods for the contemporary reality that digital collections, unlike collections of physical materials, are not best described by physical location. Secondly, it takes advantage of the power of OAI-ORE to allow for a multitude of non-linear relationships, providing richer and more powerful access and description.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/54
Author identifiers: 1) Services at arXiv and 2) ORCID and repositories
Simeon Warner
I will present two separate but related topics where experience with the first provides much of my perspective on the second. Public author identifiers and services based on them were introduced in March 2009, and early work and design was reported at OR09. The original services have been running for a year now and additional facilities have been added. I will report on uptake and usage patterns, and describe the more popular services. ORCID is an exciting initiative involving both commercial and academic participants that aims to build a registry and assign identifiers to address the author ambiguity problem. I will report on the current status of this rapidly evolving project and suggest how the repository community may contribute to and benefit from it.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/57
BibApp 1.0: Campus Research Gateway and Expert Finder
Sarah L. Shreeves, Bill Ingram, Eric Larson, Dorothea Salo, Tim Donohue
Research institutions, particularly universities and colleges, often face challenges in understanding the range of research, publications, and collaborations occurring within their boundaries. They often struggle to keep up with the research and collaborations happening. Faculty and researchers are sometimes at a loss to find fruitful collaborations on campus. Libraries often lack the data to truly understand the publication patterns and trends among faculty. Repository managers spend much time trying to identify publications that can go into a repository. Faculty and departmental web pages are inconsistently complete and sometimes out of date; annual reporting processes are still sometimes paper-based or are not integrated into an institution-wide workflow. Grants and contracts offices generally can only provide a view of the departments that are heavily reliant on grants. There is seldom one place where an administrator, a faculty member, a funder, a potential graduate student, or a subject librarian can go to better understand the research occurring on campus. Within this environment a number of tools are in development to help fill gaps in managing, displaying, searching, and mining the publication and citation data that are byproducts of the scholarly communication process.
Cornell’s VIVO (http://vivo.cornell.edu/), Harvard’s Catalyst (http://catalyst.harvard.edu/), MIT’s Citeline (http://citeline.mit.edu/), and the BibApp, the subject of this paper, from the University of Illinois at Urbana-Champaign and the University of Wisconsin at Madison (http://bibapp.org, with a pilot installation at http://connections.ideals.illinois.edu/), are all examples of such tools.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/59
Blurring the boundaries between an institutional repository and a research information registry: where's the join?
Sally Rumsey
Key motivations for provision of an institutional repository (IR) for research outputs within a higher education institution (HEI) are storage, retention, dissemination and preservation of digital research materials. Increasingly, IRs are being considered as tools for research management as part of pan-institutional systems. This might include statutory reporting such as that required for the forthcoming UK REF (Research Excellence Framework). Such functionality generally requires integration with other management systems within the HEI. It is common to find that each research management system has been selected to serve a specific need within an organisational department, any broader aim being out of scope. As a result, data is held in many silos, is duplicated and can even be "locked in" to those systems. This results in problems with data sharing, as well as a lack of efficiency and consistency. Some institutions are addressing this problem by considering CRISs (Current Research Information Systems) or business intelligence systems. The need for easy deposit in the institutional repository at the University of Oxford has prompted the development of a registry and tools to support research information management. Many of the motivations behind the repository are shared with those for research information management. Not only do the two areas of focus have many common aims, but there is considerable overlap of design, data, services, and stakeholder requirements. This overlap means that the boundaries between the repository and the resulting tools being implemented for publicly available research activity data are blurred. By considering these two areas together with other related digital repository services, new opportunities and efficiencies can be revealed to the benefit of all stakeholders.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/60
BRIL - Capturing Experiments in the Wild
Mark Hedges, Shrija Rajbhandari, Stella Fabiane
This presentation describes a project to embed a repository system (based on Fedora) within the complex, experimental processes of a number of researchers in biophysics and structural biology. The project is capturing not just individual datasets but entire experimental workflows as complex objects, incorporating provenance information based on the Open Provenance Model, to support reproduction and validation of published results. The repository is integrated within these experimental processes, so that data capture is as far as possible automatic and invisible to the researcher. A particular challenge is that the researchers’ work takes place in local environments within the department, entirely decoupled from the repository.
In meeting this challenge, the project is bridging the gap between the “wild”, ad hoc and independent environment of the researcher's desktop and the curated, sustainable, institutional environment of the repository, and in the process the project crosses the boundary between several of the pairs of polar opposites identified in the call.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/65
Curation Micro-Services: A Pipeline Metaphor for Repositories
Stephen Abrams, Patricia Cruse, John Kunze, David Minor
The effective long-term curation of digital content requires expert analysis, policy setting, and decision making, and a robust technical infrastructure that can effect and enforce curation policies and implement appropriate curation activities. Since the number, size, and diversity of content under curation management will undoubtedly continue to grow over time, and the state of curation understanding and best practices relative to that content will undergo a similar constant evolution, one of the overarching design goals of a sustainable curation infrastructure is flexibility. In order to provide the necessary flexibility of deployment and configuration in the face of potentially disruptive changes in technology, institutional mission, and user expectation, a useful design metaphor is provided by the Unix pipeline, in which complex behavior is an emergent property of the coordinated action of a number of simple independent components. The decomposition of repository function into a highly granular and orthogonal set of independent but interoperable micro-services is consistent with the principles of prudent engineering practice. Since each micro-service is small and self-contained, the services are individually more robust and collectively easier to implement and maintain. By being freely interoperable in various strategic combinations, any number of micro-services-based repositories can be easily constructed to meet specific administrative or technical needs. Importantly, since these repositories are purposefully built from policy-neutral and protocol- and platform-independent components to provide the function minimally necessary for a specific context, they are not constrained to conform to an infrastructural monoculture of prepackaged repository solutions. The University of California Curation Center has developed an open source micro-services infrastructure that is being used to manage the diverse digital collections of the ten-campus University of California system and a number of non-university content partners. This paper provides a review of the conceptual design and technical implementation of this micro-services environment, a case study of initial deployment, and a look at ongoing micro-services developments.
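As an illustration of the pipeline metaphor described in the entry above, the sketch below composes small, independent curation steps in Python. The step names, record fields and identifier scheme are invented for illustration; this is not the University of California Curation Center's actual micro-services interface.

```python
# Minimal sketch of the Unix-pipeline idea applied to curation micro-services:
# each step is a small, self-contained function that accepts and returns a
# simple dict describing one object, and repository behaviour emerges from
# how the steps are composed.
import hashlib
from functools import reduce

def identify(obj):
    # naive format identification from the filename extension (illustrative only)
    obj["format"] = obj["path"].rsplit(".", 1)[-1].lower()
    return obj

def fixity(obj):
    # record a checksum so later steps (audit, replication) can verify the bytes
    with open(obj["path"], "rb") as f:
        obj["sha256"] = hashlib.sha256(f.read()).hexdigest()
    return obj

def register(obj):
    # stand-in for a persistent-identifier / inventory step
    obj["identifier"] = "ark:/99999/" + obj["sha256"][:12]
    return obj

def pipeline(obj, steps):
    # chain the steps like a shell pipeline: output of one is input to the next
    return reduce(lambda record, step: step(record), steps, obj)

# Compose only the micro-services a given context needs, in whatever order fits:
# record = pipeline({"path": "thesis.pdf"}, [identify, fixity, register])
```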

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/68
Diversity and Interoperability of Repositories in a Grid Curation Environment
Jens Ludwig, Harry Enke, Thomas Fischer, Andreas Aschenbrenner
Repository-based environments are increasingly important in research. While grid technologies and their relatives used to draw most of the attention, the e-Infrastructure community is now often looking to the repository and preservation communities to learn from their experiences. After all, trustworthy data management and concepts to foster the agenda for data-intensive research (Data-Intensive Research: how should we improve our ability to use data. e-Science Theme, March 2010. http://www.nesc.ac.uk/esi/events/1047/) are among the key requirements of researchers from a great variety of disciplines. The WissGrid project (WissGrid - Grid for the Sciences, a D-Grid project funded by the German Federal Ministry of Education and Research (BMBF). www.wissgrid.de) aims to provide cross-disciplinary data curation tools for a grid environment by adapting repository concepts and technologies to the existing D-Grid e-Infrastructure. To achieve this, it combines existing systems including Fedora, iRODS, dCache, JHove, and others. WissGrid respects the diversity of these systems and aims to improve the interoperability of the interfaces between them.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/69
DOARC - Distributed Open Access Reference Citations Service
Michael Maune, Eberhard R. Hilf
DOARC (Distributed Open Access Reference Citations; www.isn-oldenburg.de/projects/doarc2/, with its demonstrator at doarc.projects.isn-oldenburg.de) is a new service under development, to be operated by DINI as part of its emerging OA-Network System (www.dini.de and www.dini.de/projekte/oa-netzwerk) and funded by the DFG (German Science Foundation, www.dfg.de), which aims at creating an interactive reference index for scientific documents. Special emphasis is given to the Open Access (OA) documents posted by the existing German OAI-PMH institutional repositories at universities and large research institutions. One part of it will be a citation-based user interface with tools for authors and readers. The general motivation behind DOARC is to provide add-on services around citations and specifically to exploit the opportunities that the OA document world offers through its access to the full-text documents. This will provide an extra benefit for both authors and readers, help to spread OA, and in the end increase the rate of citations in an OA world. Specifically, DOARC will give authors a tool to ensure that they cite correctly and that their document's reference list is extracted and added to the pool of DOARC citations. Readers will get a tool with which they can find relevant documents by browsing through citations, and a graphical tool which shows the 'content affinity' to other documents in the widely distributed pool of scientific OA papers. We will exchange our checked metadata with other citation services and further the know-how for non-commercial citation services. In the interface the user will be able to see references with additional information of high value (enriched metadata). We are integrating the services into a wider European context by joining a new initiative organized by Alma Swan of Key Perspectives.
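To make the reference-linking idea concrete, the sketch below shows one small piece of such a service: matching extracted reference strings against harvested repository metadata by normalised title. The data structures and the matching rule are assumptions for illustration only, not DOARC's actual algorithm.

```python
# Illustrative sketch: link free-text reference strings to repository records
# by looking for a normalised record title inside the normalised reference.
import re

def normalise(text):
    """Lowercase, strip punctuation and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text.lower())).strip()

def link_references(reference_strings, records):
    """records: iterable of dicts with 'id' and 'title' taken from harvested metadata."""
    index = {normalise(r["title"]): r["id"] for r in records if r.get("title")}
    links = {}
    for ref in reference_strings:
        ref_norm = normalise(ref)
        for title, record_id in index.items():
            if title and title in ref_norm:
                links[ref] = record_id
                break
    return links

# Example (hypothetical data):
# link_references(["A. Author, Open access citation graphs, 2009."],
#                 [{"id": "oai:repo:1", "title": "Open Access Citation Graphs"}])
# -> {"A. Author, Open access citation graphs, 2009.": "oai:repo:1"}
```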

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/75
DuraCloud Pilot Program: utilizing cloud infrastructure as an extension of your repository
Michele Kimpton
Cloud compute and cloud storage are now available from several commercial providers around the globe as a commodity service. Due to their extremely low cost, ease of use, and instant scalability, the academic community is taking a hard look at how to best utilize this resource as an extension of its own IT environment. Even though there has been much interest in using cloud infrastructure, few organizations have seriously integrated the cloud into their environment, due to several foreseen challenges, whether real or perceived. In a large survey undertaken by the DuraSpace organization in Winter 2010, some of the biggest challenges identified by our community were security, performance, reliability and trust. Little data has been published to either validate or discredit the key challenges noted within the academic community, although much anecdotal information exists regarding site outages, poor performance, loss of data and the like. The purpose of this presentation is to present the findings of a large-scale pilot program utilizing cloud infrastructure from multiple commercial cloud providers as a utility. The presentation will discuss the key challenges and benefits identified when using cloud storage and compute as a utility during the pilot program. The presentation will provide detailed analysis, where possible, across multiple cloud providers. The analysis presented will include, when applicable, what solutions were deployed to overcome security, reliability, performance and other identified technical issues.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/77
DuraSpace's Solution Communities: Marshalling the Resources for Open-source Development
Thornton Staples, Valorie Hollister
In his opening plenary address to the Open Repositories Conference, James Hilton made the statement that "open-source software is free like a puppy." This statement succinctly summarizes the need for institutions that benefit from freely available software to get involved in its ongoing development: an investment of resources is always required. Anyone who has worked with information technology in libraries, museums and archives knows that, compared with the total cost of buying vendor software and making it actually do the desired job, there can be a great deal of room to save money even while making a significant investment of resources in the process.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/79
Enhancing Statistics: Google Analytics and Visualization APIs
Graham Triggs
Usage statistics have been an important topic in the repository community for some time. From Minho's DSpace additions, through Interoperable Repository Statistics, to @mire's Solr-based contribution to DSpace 1.6, there have been many approaches to providing statistics. One technique that has been used in a few places is to set up a Google Analytics account. This has several advantages - it is free, independent of the repository (and its architecture), offers proven scalability, and provides excellent tools for visualizing the data.
But it has historically had its problems too - it doesn't understand the structure of the repository (for displaying totals or top views/downloads for an arbitrary grouping of the content), it doesn't track downloads without additional work (or those directly linked from search engines), and the reports are locked behind an authentication wall and can't be opened up to general repository users. With the [April 2009] release of an API to retrieve data from Google Analytics, that has changed. Data that has been calculated in Google Analytics can be pulled back into the repository, so that it can be viewed within context, and by anyone that can access the repository (or not, depending on implementation). This presentation shows how Google Analytics can be integrated with the repository, techniques for capturing data that wouldn't normally be available with Analytics, and ways of making the data comprehensible through visualizations. Whilst the implementation presented here was initially conceived using a DSpace repository, the techniques can be replicated in any repository software. Further, the visualization methods are independent of the analytics data themselves, so they can be adapted for other sources of data.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/82
From Dynamic to Static: the challenge of depositing, archiving and publishing constantly changing content from the information environment
Richard Jones
A repository used for storing and disseminating research is a canonical example of a system which strives to produce stable artefacts which can be reliably referenced, if not actually preserved, over time. This is a difficult task, since the normal state of information is constant flux: being updated, revised, rewritten, removed and republished. Recent work in deposit technology has tended to centre around the use of a repository as a 'final resting place' for some research item. It has typically used packages of content, roughly analogous to the SIP (Submission Information Package) in OAIS (http://public.ccsds.org/publications/archive/650x0b1.pdf), to insert 'finished' works into the archive. An example of this is SWORD (http://www.swordapp.org/), which addresses the deposit mechanism in great detail, but is largely reliant on the payload being a single file (for example, a zip) containing all the information that the repository needs in order to create an archival object. This places a burden on the depositor to assert that an item is finished and ready for archiving, and pushes tasks that the repository is traditionally good at (i.e. storing content) out to whatever system the user is creating their work in. Over the past year, Symplectic Ltd (http://www.symplectic.co.uk/) has attempted to break down this reliance on the "package" and to move repository deposit not only in the direction of full CRUD (Create, Retrieve, Update, Delete), but also towards giving repository workflows the opportunity to define when a work is "finished" (at least, provisionally). This will give the repository the opportunity to do what it does best (i.e. store content), and allow the administrators - experts in repositories and archiving - to have a hand in determining whether an item is "finished", relieving the depositor and their research process of these burdens.
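For readers unfamiliar with the package-based deposit pattern contrasted above with full CRUD, here is a minimal sketch of a SWORD-v1-style deposit using only the Python standard library. The deposit URL and credentials are placeholders, and the packaging header reflects my reading of the SWORD 1.3 profile; treat both as assumptions to verify against the specification and the target repository.

```python
# Sketch: HTTP POST of a zip package to a repository collection's deposit URL.
import base64
import urllib.request

def deposit(collection_url, zip_path, username, password):
    with open(zip_path, "rb") as f:
        payload = f.read()
    request = urllib.request.Request(collection_url, data=payload, method="POST")
    request.add_header("Content-Type", "application/zip")
    request.add_header("Content-Disposition", "filename=" + zip_path)
    # SWORD 1.3 names the package format in an X-Packaging header (assumption to verify)
    request.add_header("X-Packaging", "http://purl.org/net/sword-types/METSDSpaceSIP")
    credentials = base64.b64encode((username + ":" + password).encode()).decode()
    request.add_header("Authorization", "Basic " + credentials)
    with urllib.request.urlopen(request) as response:
        return response.read()  # the service's response document describing the deposit

# deposit("https://repo.example.org/sword/deposit/collection-1", "article.zip", "user", "pass")
```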

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/83
From research data repositories to virtual research environments: a case study from the Humanities
Mark Hedges, Tobias Blanke, Mike Priddy, Fabio Simeoni, Leonardo Candela
The difference in scholarly practices between the sciences and the mainstream humanities is highlighted in a study (Palmer et al., 2009), which investigated the types of information source materials used in different humanities disciplines, based on results contained in the US Research Libraries Group (RLG) reports. Structured data is relatively little used, except in some areas of historical research, and data as it is traditionally understood in the sciences, i.e. the results of measurements and the lowest level of abstraction for the generation of scientific knowledge, even less so. It is true that the study is partly outdated, containing results from the early 1990s, and that data in the traditional sense is becoming increasingly important in the humanities, particularly for disciplines such as linguistics and archaeology in which scientific techniques have been widely adopted. Nevertheless, it is clear that in general humanities research relies not on measurements as a source of authority, but rather on the provenance of sources and assessment by peers, and that what data repositories are for the sciences, archives are for the humanities. [...]

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/85
Hydra: A Technical and Community Framework For Customized, Reusable, Repository Solutions
Tom Cramer, Lynn McRae, Willy Mene, Bess Sadler, Chris Awre, Richard Green, Tim Sigmon, Thornton Staples
While repositories provide obvious benefits in hosting and managing content, it is equally clear that there is no "one size fits all" solution to the range of digital asset management needs at a typical institution, much less across institutions. A system that supports the submission, approval and dissemination of electronic theses and dissertations, for example, has demonstrably different requirements than a digitization workflow solution, an e-science data repository, or a media preservation and access system. There is a clear need in the repository community to readily develop and deploy content-, domain-, and institution-specific solutions that integrate the flexibility and richness of customized applications and workflows with the underlying power of repositories for content management, access and preservation. Hydra is a multi-institutional, multi-functional, multi-purpose framework that addresses this need on twin fronts. As a technical framework, it provides a toolkit of reusable components that can be combined and configured in different arrays to meet a diversity of content management needs. As a community framework, Hydra provides like-minded institutions with a mechanism to combine their individual development efforts, resources and priorities into a collective solution with breadth and depth that exceeds the capacity of any single institution to create, maintain or enhance on its own.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/88
INSPIRE: A new information system for High-Energy Physics. Lessons learnt
Salvatore Mele
E-Science brings opportunities and challenges for the world of scholarly communication: it amplifies the needs of scientists for fast, effective, unrestricted communication of ideas and scientific results, through Open Access; it enables automation of librarianship intelligence, providing new services to the scientific community for the discovery of information; it calls on libraries and information professionals to fill new roles, as evolving actors in the scholarly communication chain. The field of High-Energy Physics (HEP) has pioneered infrastructures for scholarly communication, with half a century of tradition in Open Access and pre-print dissemination and two decades of experience with repositories. Scholarly communication in HEP is now moving fast in the e-Science era. This contribution will analyze the status of scholarly communication in HEP and the potential offered by the inception of INSPIRE, the next-generation repository for the field.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/89
Institutional Repositories, Long Term Preservation and the changing nature of Scholarly Publications
Paul Doorenbosch, Barbara Sierman
In Europe over 2.5 million publications of universities and research institutions are stored in institutional repositories. Although institutional repositories make these publications accessible over time, a repository does not have the task of preserving the content for the long term. Some countries have developed an infrastructure dedicated to sustainability. The Netherlands is one of those countries. The Dutch situation could be regarded as a successful example of how long term preservation of scholarly publications is organised through an open access environment. In this contribution to the Open Repositories Conference 2010 it will be explained how this infrastructure is structured, and some preservation issues related to it will be discussed. This contribution is based on the long term preservation studies into Enhanced Publications performed in the FP7 project DRIVER II (2007-2009, Digital Repository Infrastructure Vision for European Research II, WP 4 Technology Watch Report, part 2, Long-term Preservation Technologies (Deliverable 4.3/Milestone 4.2). http://www.driver-repository.eu/. The official report is downloadable at: http://research.kb.nl/DRIVERII/resources/DRIVER_II_D4_3-M2_demonstrator_LTP__final_1_0_.pdf ; the public version is part of Enhanced Publications : Linking Publications and Research Data in Digital Repositories, by Saskia Woutersen-Windhouwer et al. Amsterdam, AUP, 2009, p. 157-209; downloadable as: http://dare.uva.nl/aup/nl/record/316849). The overall conclusion of the DRIVER studies about long term preservation is that the issues are of an organisational rather than a technical nature.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/94
Interactive Multi-Submission Deposit Workflows for Desktop Applications
David Tarrant, Les Carr, Alex D. Wade, Simeon Warner
Online submission and publishing is the norm for academic researchers. With the pressure on these authors to submit their work to conferences, journals and institutional repositories, they are expected to go through multiple web-based interfaces, filling in forms with the same information multiple times before they can submit. At the same time, each of these services in turn will have made policy decisions on what types of format they allow and what templates the content has to conform to. The amount of work expected of the author does not add up to the potential gain, so most authors will only submit to the repository or publication where they foresee the most benefit. In this paper we propose a solution to this problem that embeds the workflow for multiple submissions into the desktop application of the author, most commonly Microsoft Word. We also propose extending the work done on the Microsoft Word Author Add-in tool to allow two-way negotiation between each repository and the desktop application.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/95
Interoperability for digital repositories: towards a policy and quality framework
Giuseppina Vullo, Perla Innocenti, Seamus Ross
Interoperability is a property referring to the ability of diverse systems and organisations to work together. Today interoperability is considered a key step in moving from isolated digital repositories towards a common information space that allows users to browse through different resources within a single integrated environment. In this contribution we describe the multi-level challenges that digital repositories face in achieving policy and quality interoperability, presenting the approaches and the interim outcomes of the Policy and Quality Working Groups within the EU-funded project DL.org (http://www.dlorg.eu/).

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/97
Invenio: A Modern Digital Library System
Samuele Kaplun
Invenio is an integrated digital library system originally developed at CERN to run the CERN Document Server, currently one of the largest institutional repositories worldwide. It was started over 15 years ago and has matured through many release cycles. Invenio is a GPL2 Open Source project based on an Apache/WSGI+Python+MySQL architecture. Its modular design enables it to serve a wide variety of usages, from a multimedia digital object repository, to a web journal, to a fully functional digital library. The development strategy used to implement Invenio ensures it is flexible at every layer. Because it is based on open standards such as MARCXML and OAI-PMH 2.0, its interoperability with other digital libraries is guaranteed. Originally designed to cope with CERN's requirements for digital object management, Invenio is suitable for middle-to-large-scale digital repositories (100K~10M records). Records can be of any nature (e.g. papers, books, photos, videos). This presentation will introduce the different features of Invenio, their usage in the CERN context and how other institutions and projects are also driving some of its development.
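Since Invenio, like most of the systems described in these proceedings, exposes its records over OAI-PMH 2.0, a minimal harvesting sketch may help orient readers. It uses only the Python standard library; the base URL in the usage comment is a placeholder for whichever repository endpoint is being harvested.

```python
# Sketch: harvest (identifier, title) pairs from an OAI-PMH 2.0 endpoint,
# following resumption tokens until the list is exhausted.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for record in root.iter(OAI + "record"):
            header = record.find(OAI + "header")
            identifier = header.findtext(OAI + "identifier") if header is not None else None
            title = record.findtext(".//" + DC + "title")
            yield identifier, title
        token = root.findtext(".//" + OAI + "resumptionToken")
        if not token:
            break
        # once a resumption token is issued, it is the only argument allowed besides the verb
        params = {"verb": "ListRecords", "resumptionToken": token}

# for oai_id, title in harvest("https://repository.example.org/oai2d"):
#     print(oai_id, title)
```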

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/104
NARCIS: research information services on a national scale
Arnoud Jippes, Wilko Steinhoff, Elly Dijk
As a national aggregator, NARCIS contains the scientific output of 27 institutional OAI-PMH repositories (IRs), with publications and descriptions of research data (datasets) from the Dutch universities, the Academy (KNAW), the Netherlands Organisation for Scientific Research (NWO), the institute for Data Archiving and Networked Services (DANS, http://www.dans.knaw.nl) and other research institutes. NARCIS also contains information from the Current Research Information Systems (CRISs) in the Netherlands on research, researchers (expertise) and research organisations. The data from the IRs and the CRISs in NARCIS are interlinked by identifiers such as the Digital Author Identifier (DAI), a unique identifier assigned to each researcher in the Netherlands. The NARCIS Suite (National Academic Research and Collaborations Information System: http://www.narcis.nl) consists of three main products: the NARCIS Portal (HTTP), the NARCIS Index (SRU) and the NARCIS Repository (OAI-PMH). The NARCIS Portal makes the combined data searchable and available to the public at a national level. Meeting the requirements of modern information systems requires continual development and a good understanding of NARCIS portal visitors and their needs.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/105
On Constructing Repository Infrastructures: The D-NET Software Toolkit
Paolo Manghi, Marko Mikulicic, Katerina Iatropoulou, Antonis Lebesis, Natalia Manola
Due to the wide diffusion of digital repositories, organizations responsible for large research communities, such as national or project consortia, research institutions and foundations, are increasingly tempted to set up so-called repository infrastructure systems (e.g., OAIster (http://www.oaister.org), BASE (http://www.base-search.net), DAREnet-NARCIS (http://www.narcis.info)). Such systems offer web portals, services and APIs for operating across the metadata records of publications (lately also of experimental data and compound objects) aggregated from a set of repositories. Generally, they consist of two connected tiers: an aggregation system for populating an information space of metadata records by harvesting and transforming (e.g., cleaning, enriching) records from a set of OAI-PMH compatible data sources, typically repositories; and a web portal, providing end-users with advanced functionality over such an information space (search, browsing, annotations, recommendations, collections, user profiling, etc). Typically, information spaces also offer access to third-party applications through standard APIs (e.g., OAI-PMH, SRW, OAI-ORE). Repository infrastructure systems address similar architectural and functional issues across several disciplines and application domains. On the one hand, they all deal, with more or less contingent complexity, with the generic problem of harvesting metadata records of a given format, transforming them into records of a target format and delivering web portals to operate over these records.
On the other hand, they have to cope with arbitrary numbers of repositories, and hence with administering them, from the automatic scheduling of harvesting and transformation actions and the definition of the corresponding transformation mappings, to the inherent scalability problems of coping with ever-growing numbers of incoming records. Existing solutions tend to privilege customization of software, neglecting general-purpose approaches. Typically, for example, aggregation systems are designed to generate metadata records of a format X from records of format Y, and are not parametric with respect to such formats. Similarly, the participation of a repository in an infrastructure is driven by firm policies, and administrators often do not have the freedom to specify their own workflow by combining logical steps such as harvesting, storing, transforming, indexing and validating as they prefer. In summary, repository infrastructure systems typically provide advanced and effective solutions tailored to the one scenario of interest, but can hardly be applied to different scenarios, where similar but distinct requirements apply. As a consequence, an organization willing to set up a repository infrastructure system with peculiar requirements has to face the "expensive" problem of designing and developing new software from scratch. In this paper, we present a general-purpose and cost-efficient solution for the construction of customized repository infrastructures, based on the D-NET Software Toolkit (www.d-net.research-infrastructures.eu), developed in the context of the DRIVER and DRIVER-II projects (http://www.driver-community.eu). D-NET offers a service-oriented framework whose services can be combined by developers to easily construct customized aggregation systems and personalized web portals. D-NET services can be customized, extended and combined to match domain-specific scenarios, while distribution, sharing and orchestration of services enable the construction of scalable and robust repository infrastructures. As we shall describe in the following, D-NET is currently the enabling software of a number of European projects and national initiatives.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/106
Open Access Network, Heading for Joint Standards and Enhancing Cooperation. Value-Added Services for German Open Access Repositories
Stefan Buddenbohm, Maxi Kindling
OA Network collaborates with other associated German Open Access-related projects and pursues the overarching aim of increasing the visibility and ease of use of German research output. To this end, a technical infrastructure is being established to offer value-added services based on a shared information space across all participating repositories. In addition, OA Network promotes the DINI certificate for Open Access repositories (standardization) and a regular exchange of information within the German repository landscape.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/107
Open Access Statistics: An Examination how to generate Interoperable Usage Information from Distributed Open Access Services
Ulrich Herb, Daniel Metje
Publishing and bibliometric indicators are of utmost relevance for scientists and research institutions.
The impact or importance of a publication (or even of a scientist or an institution) is mostly regarded as equivalent to a citation-based indicator, e.g. in the form of the Journal Impact Factor (JIF) or the Hirsch index (h-index). On both an individual and an institutional level, performance measurement depends strongly on these impact scores. The most common methods to assess the impact of scientific publications show several deficiencies, for instance:
· The scope of the databases that are used to calculate citation-based metrics (Web of Science (WoS) and the associated Journal Citation Reports (JCR), and Scopus) is restricted and more or less arbitrarily defined.
· The JIF and h-index show several disciplinary biases (exclusion of many document types, the two-year timeframe of the JIF, etc.).
· Both the JIF and the h-index privilege documents in the English language.
Even though citation-based metrics in principle provide some arguments in favour of open access, they mostly disadvantage open access publications - and thereby reduce the attractiveness of open access for scientists. In particular, documents that are self-archived in open access repositories (and not published in an open access journal) are excluded from the relevant databases that are typically used to calculate JIF scores or the h-index.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/108
ORBi in orbit, a user-oriented IR for multiple wins: why scholars take a real part in the success story...
Paul Thirion, François Renaville, Myriam Bastin, Dominique Chalono
The University of Liège's institutional repository, ORBi (Open Repository and Bibliography, http://orbi.ulg.ac.be), was officially launched in November 2008. Barely fourteen months later (February 2010), it already contained more than 30,000 bibliographic references with more than 20,000 full texts available. In other words, this represents a growth of about 65 new references a day, which is appreciable for a medium-sized university (17,000 students, 2,700 scholars, about 3,500 new publications/year). According to ROAR (http://roar.eprints.org), ORBi ranks second among the 930 institutional repositories listed in terms of high activity level (i.e. the number of days with more than 100 archived references a day). Furthermore, all these records were archived by the institution's authors themselves; there was neither batch archiving nor mass validation. What are the reasons that may explain such a success?

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/110
PIRUS2: Creating a Common Standard for Measuring Online Usage of Individual Articles
Peter Shepherd, Paul Needham
This presentation will provide an overview of the PIRUS2 project and will cover the project's background, its main objectives, the planned deliverables, and the benefits to the main stakeholder groups involved in scholarly information, including repositories. A progress report on the project will also be provided.
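As a quick aside to the Open Access Statistics entry above, which contrasts citation-based indicators such as the JIF and the h-index: an author's h-index is the largest h such that at least h of their papers have h or more citations each. A minimal worked sketch:

```python
# Worked example of the h-index definition.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```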

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/112
Preserving repository content: practical steps for repository managers
Miggie Pickton, Steve Hitchcock, Simon Coles, Debra Morris, Stephanie Meece
Few people would disagree that preservation of repository content is important. Indeed, the stated aim of most repositories is to provide permanent open access to the material therein. Why, then, have so few repositories implemented practical action plans for long term preservation of their content? There could be several reasons. Although a number of preservation tools and services already exist, until now few have addressed the specific needs of repositories; in practical terms they have necessitated action that is additional rather than integral to repository workflow. Repository content is typically highly varied and complex, while descriptive metadata and file formats are used inconsistently and content is deposited by those without knowledge or expertise in managing digital assets. Busy repository managers with little, if any, experience in digital preservation have lacked time and confidence to tackle what is perceived as an important but complex and scary problem. The JISC-funded KeepIt project is bringing together existing preservation tools and services with appropriate training and advice on preservation strategy, policy, costs, metadata, storage, format management and trust to enable the participating repository managers to formulate practical and achievable preservation plans. From the point of view of the repository manager, this presentation summarises the activities of the KeepIt project, describes the impact that the project has had on the participating repositories, and suggests steps that other repository managers might take to ensure preservation readiness.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/114
Promoting Open Access to librarians and researchers by the international information platform open-access.net
Anja Oberländer, Karlheinz Pappenberger
Open access has become an important form of publication, but the concept behind it is not as well known as the public discussion sometimes suggests. Even today, open access is often equated with electronic publishing and confused with offerings such as Google Books. Researchers feel unsure when faced with open access and as a consequence often react conservatively. A German Research Foundation (DFG) funded project (2006-2010) attempts to structure and describe the concept of and the discussion about open access. With the libraries of the Universities of Bielefeld, Goettingen and Konstanz and the Institute of Qualitative Research in Berlin, four German experts in the area of open access took the initiative to create the now well-known information platform www.open-access.net.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/116
PubMan - one Repository with multiple Usage and Re-Use Possibilities
Juliane Müller
PubMan is an application which allows members of research organizations to store, manage and enrich their publications.
The application is based on the eSciDoc infrastructure, a joint project run by the Max Planck Digital Library (MPDL) and the Fachinformationszentrum (FIZ) Karlsruhe. Presenting scholarly work on the World Wide Web has become an important and common practice for research communities seeking to enhance the visibility of their research results as well as to initiate scientific collaboration and information exchange. In response to this trend, much emphasis has been put on providing multiple re-use options for metadata, full texts and supplementary material during the conception and development of PubMan. Our repository software facilitates the integration of user-defined publication lists in local websites as well as in personal and topic-centered WordPress blogs. The paper will illustrate these two re-use possibilities with examples of operational usage scenarios, after giving an overview of the basic concepts and functionalities of PubMan.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/118
Ready or not here it comes: Australian institutional research repository data readiness surveys 2010
Caroline Drury, Peter Sefton, Kate Watson
Australian institutional research repositories are now facing a new challenge: datasets and associated metadata. Having previously focused predominantly on research outputs, repository managers are now involved in a new phase of repository re-purposing - curation of datasets and associated metadata, and provision of this metadata to a national data commons through ANDS (Australian National Data Service). Through a series of surveys conducted by the national repository support service, CAIRSS (the CAUL Australian Institutional Repository Support Service), this paper examines the research data challenges facing research repository managers, levels of institutional research data identification, and the readiness of traditional institutional research repositories to either curate or work alongside this data.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/124
Repository sustainability: arXiv business model experience and implications
Simeon Warner, Oya Rieger
In January 2010 Cornell University Library moved to expand the funding base for arXiv by requesting support from user institutions. We hope that this voluntary support model will engage the institutions that benefit most from arXiv while maintaining arXiv's open access mission as a service free to readers and submitters alike. The development of a business model has made us look closely at arXiv's sustainability from both operational and technical standpoints. The engagement of supporting institutions creates new requirements to demonstrate value to these institutions as separate from arXiv's understood value to the community in general. In this presentation we will briefly describe the options considered in the development of the business model, the model chosen, and uptake and feedback. We will then focus on the implications for arXiv's operation, for the long-term development of our platform, and for new reporting facilities.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/125
Research Data Management in the Lab
Matthias Razum, Simon Einwächter, Rozita Fridman, Markus Herrmann, Michael Krüger, Norman Pohl, Frank Schwichtenberg, Klaus Zimmermann
Research, especially in science, is increasingly data-driven (Hey & Trefethen, 2003). The obvious type of research data is raw data produced by experiments (by means of sensors and other lab equipment). However, other types of data are highly relevant as well: calibration and configuration settings, analyzed and aggregated data, and data generated by simulations. Today, nearly all of this data is born-digital. Based on the recommendations for "good scientific practice", researchers are required to keep their data for a long time. In Germany, the DFG demands 8-10 years for published results (Deutsche Forschungsgemeinschaft, 1998). Ideally, data should not only be kept and made accessible upon request, but be published as well - either as part of the publication proper, or as references to data sets stored in dedicated data repositories. Another emerging trend is data publication journals, e.g. the Earth System Science Data journal (http://www.earth-system-science-data.net/). In contrast to these high-level requirements, many research institutes still lack well-established and structured data management. Extremely data-intensive disciplines like high-energy physics or climate research have built powerful grid infrastructures, which they provide to their respective communities. But for most "small sciences", such complex and highly specialized compute and storage infrastructures are missing and may not even be adequate. Consequently, the burden of setting up a data management infrastructure and of establishing and enforcing data curation policies lies with each institute or university. The ANDS project has shown that this approach is even preferable to a central (e.g., national or discipline-specific) data repository (The ANDS Technical Working Group, 2007). However, delegating the task of proper data curation to the head of a department or a working group adds a huge workload to their daily work. At the same time, they typically have little training and experience in data acquisition and cataloging. The library has expertise in cataloging and describing textual publications with metadata, but typically lacks the discipline-specific knowledge needed to assess the data objects in their semantic meaning and importance. Trying to link raw data with calibration and configuration data at the end of a project is challenging or impossible, even for dedicated "data curators" and researchers themselves. Consequently, researchers focus on their (mostly textual) publications and have no established procedures for how to cope with data objects after the end of a project or a publication (Helly, Staudigel, & Koppers, 2003). This dilemma can be resolved by acquiring and storing the data automatically at the earliest convenience, i.e. during the course of an experiment. Only at this point in time is all the contextual information available that can be used to generate additional metadata. Deploying a data infrastructure to store and maintain the data in a generic way helps to enforce organization-wide data curation policies.
Here, repository systems like Fedora (http://www.fedora-commons.org/) (Lagoze, Payette, Shin, & Wilper, 2005) or eSciDoc (https://www.escidoc.org/) (Dreyer, Bulatovic, Tschida, & Razum, 2007) come into play. However, organization-wide data management has only a limited added value for the researcher in the lab. Thus, data acquisition should take place in a non-invasive manner, so that it does not interfere with the established work processes of researchers and poses a minimal threshold to the scientist.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/127
Researcher Name Resolver: A framework for researcher identification in Japan
Kei Kurakawa, Hideaki Takeda, Masao Takaku, Akiko Aizawa
Institutional repositories with the aim of open access are gradually spreading in academia, and more and more research articles and academic books are being archived on the web. In particular, researchers are accessing more and more electronic articles, papers, and books on the web. This paper describes an information service that firstly provides researcher name authority on the web, and secondly gathers the web locations of academic information resources and organizes them for individual researchers (especially researchers working in Japan).

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/131
Taking the plunge. Repositories and Research pooling in Scotland
James Toon
Defining the problem: the ERIS Project is working with stakeholders to understand what will motivate them to deposit their research outputs and to integrate repository use into a standard part of their research life cycle. A specific focus of the project is on the needs of research pools. Research pooling is defined as the formation of strategic collaborations between universities in disciplinary or multi-disciplinary areas, involving international-quality departments or individual researchers across Scotland (http://www.sfc.ac.uk/research/researchpools/researchpools.aspx). The emergence of research pooling since the 2001 RAE exercise (http://www.sfc.ac.uk/web/FILES/Our_Priorities_Research/research_pooling_article_july08.pdf) has made a significant contribution to the development and success of Scottish research (http://www.timeshighereducation.co.uk/story.asp?storyCode=404806&sectioncode=26), and as institutions digest the outcome of the 2008 RAE and plan for the Research Excellence Framework (REF), the pools are considering how they can best manage their strategic approaches and meet the growing return-on-investment and other reporting demands of their investing partners.

https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/132
Terminology Services in a Digital Repository
Michael Durbin
The uses of controlled vocabularies in digital library applications can be expanded with ease when thesauri are made available using a standard service-oriented architecture. Adopting this approach, the Indiana University Digital Library Program has been able to easily adapt existing tools to use controlled vocabularies and to better take advantage of a wide array of controlled vocabulary sources.
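To illustrate the kind of service-oriented vocabulary access the entry above describes, here is a hypothetical client sketch: an application queries a thesaurus service over HTTP and receives preferred and alternative labels. The endpoint and JSON shape are invented for illustration and are not the Indiana University service's actual API.

```python
# Sketch: look a term up in a (hypothetical) vocabulary service and return
# its preferred label plus any alternative labels.
import json
import urllib.parse
import urllib.request

def lookup(term, base_url="https://vocab.example.edu/api/lookup"):
    url = base_url + "?" + urllib.parse.urlencode({"q": term})
    with urllib.request.urlopen(url) as response:
        concept = json.load(response)
    # expected shape (assumed): {"uri": "...", "prefLabel": "Photographs", "altLabels": ["Photos"]}
    return concept.get("prefLabel"), concept.get("altLabels", [])

# A metadata-entry form could call lookup() to replace free-text fields with
# controlled terms, regardless of which thesaurus sits behind the service.
```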
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/134 The UCAR Open Access Mandate: A Community-Centered Model of Action 2019-06-05T12:50:21+00:00 Mary Marlino ojs.ub@uni-bielefeld.de Jamaica Jones ojs.ub@uni-bielefeld.de Karon Kelly ojs.ub@uni-bielefeld.de Mike Wright ojs.ub@uni-bielefeld.de In its role of managing the US federally-funded National Center for Atmospheric Research (NCAR), the non-profit University Corporation for Atmospheric Research (UCAR) has a strong history of supporting and promoting the atmospheric sciences and related fields. In September 2009, UCAR joined a growing number of other institutions worldwide in passing an Open Access mandate requiring that all peer-reviewed research published in scientific journals by its scientists and staff be made publicly available online through its institutional repository, OpenSky. The new policy and accompanying repository will enable UCAR to compile, preserve and share a complete record of its intellectual output; increase its community visibility and impact; and advance research in the atmospheric sciences by providing free, worldwide access to UCAR and NCAR scholarship. The passage of the UCAR Open Access policy was especially noteworthy as it marked the first instance of a National Science Foundation-funded national laboratory mandating Open Access. Also noteworthy was the broad community-driven process that the NCAR Library, as the leader in this initiative, employed. This presentation will outline the three-phase process adopted by the Library in its effort to reflect both institutional and disciplinary community values and needs through OpenSky services and policies. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/135 Tools for Dataset Lifecycle Management 2019-06-05T12:50:20+00:00 Alex D. Wade ojs.ub@uni-bielefeld.de Dean Guo ojs.ub@uni-bielefeld.de Simon Mercer ojs.ub@uni-bielefeld.de Oscar Naim ojs.ub@uni-bielefeld.de Michael Zyskowski ojs.ub@uni-bielefeld.de With a growing demand for transparency and openness around scientific research and an emphasis on the sharing of scientific workflows and datasets, there is a similar increase in the number and variety of client and web-based tools required to manage each stage in the lifecycle of individual datasets. Datasets are produced from a variety of instruments and computations; are analyzed and manipulated; are stored and referenced within the context of a research project; and, ideally, are archived, stored, and shared with the rest of the world. Each of these efforts, however, requires a number of user actions involving a growing number of systems and interfaces. In an effort to preserve the flexibility and autonomy of researchers, but also to minimize the logistical effort involved, we present in this paper a partial solution to this problem through the integration of workflow execution, project collaboration, project-based dataset management and versioning, and long-term archiving and dissemination. This example demonstrates the orchestration of a number of existing Microsoft Research projects; however, the interactions between them use existing web interoperability protocols and can easily support the replacement of individual architectural components with related services.
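The dataset-lifecycle abstract above names the stages a dataset passes through (production, analysis, storage and referencing, archiving, sharing). A purely illustrative sketch of tracking those stages is given below; the stage names, the DatasetRecord structure and the example identifier are assumptions for illustration, not part of the Microsoft Research tools being presented:

```python
# Illustrative-only model of dataset lifecycle tracking: each record moves
# through a fixed sequence of stages and keeps a timestamped history.
from dataclasses import dataclass, field
from datetime import datetime, timezone

LIFECYCLE = ["produced", "analyzed", "stored", "archived", "shared"]

@dataclass
class DatasetRecord:
    identifier: str
    stage: str = "produced"
    history: list = field(default_factory=list)

    def advance(self, note: str = "") -> str:
        """Move the dataset to the next lifecycle stage and log the event."""
        index = LIFECYCLE.index(self.stage)
        if index == len(LIFECYCLE) - 1:
            raise ValueError(f"{self.identifier} is already shared")
        self.stage = LIFECYCLE[index + 1]
        self.history.append((datetime.now(timezone.utc).isoformat(), self.stage, note))
        return self.stage

record = DatasetRecord("doi:10.9999/example-dataset")  # hypothetical identifier
record.advance("output of simulation run analyzed")
record.advance("registered in the project workspace")
print(record.stage, record.history)
```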
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/142 Worlds Collide: A Repository Based on Technical and Archival Collaboration 2019-06-05T12:50:12+00:00 Erin OMeara ojs.ub@uni-bielefeld.de Gregory Jansen ojs.ub@uni-bielefeld.de The failure of many institutional repositories (IRs) to acquire large sets of faculty publications has shown that the traditional IR model is not sustainable without a shift in academic publishing. The Carolina Digital Repository (CDR) aims to be more than a traditional IR: instead of focusing primarily on open access publishing, it will acquire, preserve and make accessible a range of at-risk scholarly output, such as datasets, faculty papers, university records and other faculty research projects. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/51 A service-oriented national e-theses information system and repository 2019-06-05T12:55:04+00:00 Nikos Houssos ojs.ub@uni-bielefeld.de Panagiotis Stathopoulos ojs.ub@uni-bielefeld.de Ioanna Sarantopoulou ojs.ub@uni-bielefeld.de Dimitris Zavaliadis ojs.ub@uni-bielefeld.de Evi Sachini ojs.ub@uni-bielefeld.de Introduction In this article we present an overview of the information technology infrastructure that supports the operation of the Greek National Archive of Doctoral Theses (HEDI). The infrastructure, which has recently been re-developed to replace a legacy system, makes use of repository software, in particular the DSpace platform, as part of a service-oriented information system based on open source components. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/55 Authority framework in 1.6 and CILEA's customization for the Hong Kong University 2019-06-05T12:55:00+00:00 Andrea Bollini ojs.ub@uni-bielefeld.de Allen Lam ojs.ub@uni-bielefeld.de Susanna Mornati ojs.ub@uni-bielefeld.de David T. Palmer ojs.ub@uni-bielefeld.de Universities and research centers are rethinking their communication strategies, highlighting the quality of their research output and the profiles of their best researchers. Listing publications from an Expert Finder system may represent a solution, but providing an Expert Finder system within an IR is a more innovative approach. This idea was developed by the University of Hong Kong Libraries and applied to their IR, HKU Scholars Hub at http://hub.hku.hk/, powered by DSpace. This presentation shows how the HKU requirements were implemented by CILEA in the context of the ResearcherPage@HKU project. Using the new authority control framework by Larry Stone, introduced in DSpace 1.6.0, an Expert Finder system can be neatly integrated with DSpace while being kept technically separate. Its components can evolve separately and are easier and cheaper to maintain. The author (Bollini) has contributed to porting the authority control framework, originally implemented for the XMLUI, to the JSPUI, and to extending its architecture to support browse and search variants. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/56 Batch Metadata Editing - Dspace 1.6: a workshop/tutorial to inform and build skills 2019-06-05T12:54:59+00:00 Leonie Hayes ojs.ub@uni-bielefeld.de Stuart Lewis ojs.ub@uni-bielefeld.de Vanessa Newton-Wade ojs.ub@uni-bielefeld.de A new feature of the DSpace 1.6 software is "Batch Metadata Editing".
It gives repository staff the ability to export metadata and change it easily for re-upload into the system. Once you try this, "Data Entry" will never be the same. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/58 Blacklight: Leveraging Next Generation Discovery 2019-06-05T12:54:57+00:00 Tom Cramer ojs.ub@uni-bielefeld.de Blacklight is an open source, next generation discovery application. Originally developed to serve as an overarching "discovery layer" for libraries, its design and engineering give it the necessary feature set and flexibility to also serve as a repository interface, capable of fronting content of any kind, local or remote. With a rich set of search, browse and view functions, Blacklight's look, features and behaviors can be readily configured to meet local needs "out of the box". As an application with a modular architecture, it provides a framework capable of supporting additional libraries and widgets that extend Blacklight's capabilities beyond resource discovery. And as a vibrant open source project integrating enhancements and development from more than a dozen institutions, Blacklight is becoming a proven platform for content discovery and access, agnostic of underlying systems or repositories. This presentation will demonstrate the broad-based utility of Blacklight, including its key features, its use in different contexts, and how it integrates with different repositories to provide a rich and ready-made discovery application. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/62 Building flexible workflows with Fedora, the University of York approach 2019-06-05T12:54:53+00:00 Julie Allinson ojs.ub@uni-bielefeld.de Yankui Feng ojs.ub@uni-bielefeld.de In 2008, the University of York embarked on a project to build a multimedia Digital Library underpinned by Fedora Commons. In the long term, the York Digital Library (YODL) plans to meet not only multimedia requirements, but also multi-disciplinary, institutional, multi-user and multiple access control needs. In order to do this, we needed a flexible, scalable approach to fulfil the following three strands of our roadmap: * An ‘administrative’ workflow, including metadata creation forms, automatic extraction of metadata and data/resource transformation for images, video, music, audio and text resources, extensible as new resource types are identified. * A self-deposit workflow for non-administrative users to deposit to YODL, White Rose Research Online (WRRO) and other targets as appropriate. * Bulk ingest tools and procedures, including a desktop deposit tool. This paper will outline current and future work at York which builds on Fedora Commons, initially drawing on the Muradora interface and access control layer, with a SWORD-enabled simple deposit tool in development and future plans for making this more flexible with Mura-independent applications. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/64 Content Model Driven Software 2019-06-05T12:54:51+00:00 Kåre Fiedler Christiansen ojs.ub@uni-bielefeld.de Asger Askov Blekinge ojs.ub@uni-bielefeld.de Digital collections often have very different properties. Fedora Commons is flexible enough to contain collections with varying structures, file formats and metadata formats. However, that flexibility makes it difficult to work with the data, since very little is known about the data's properties.
We present a way to use machine-readable, detailed content models, called Enhanced Content Models, that allows software to adapt automatically to specific collections. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/66 Custom Rich Client, Multimedia Interfaces for the Web and Mobile for Fedora and Duracloud Using Adobe Open Source Solutions 2019-06-05T12:54:49+00:00 Greg Hamer ojs.ub@uni-bielefeld.de Adobe supports several open source projects for creating custom rich client, multimedia interfaces for both the web and now mobile devices. This session will focus on using Fedora and Duracloud's web service and REST APIs in conjunction with the following open source frameworks and servers supported by Adobe: -- Flex SDK http://opensource.adobe.com/wiki/display/flexsdk -- Open Source Media Framework (aka OSMF) http://opensource.adobe.com/wiki/display/osmf -- BlazeDS http://opensource.adobe.com/wiki/display/blazeds 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/67 Developing publishing process support system with Fedora and Orbeon Forms - A case study 2019-06-05T12:54:48+00:00 Matti Varanka ojs.ub@uni-bielefeld.de Ville Varjonen ojs.ub@uni-bielefeld.de Tapio Ryhänen ojs.ub@uni-bielefeld.de At the University of Oulu, nearly finished dissertation theses are processed to follow local ACTA templates and conventions. This process can take many iterations between dissertants, series editors and the printing house. In addition, a lot of data about the publication and its creators must be gathered in order to create cover pages, abstracts, and other informational pieces related to the publication. Since this kind of process is hard to manage via e-mail, supporting software for this publication process is necessary. This article (and presentation) describes a work-in-progress case study of the development of such a publishing process support system. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/70 DSpace 1.6 usage statistics: What it can do for you? 2019-06-05T12:54:17+00:00 Ben Bosman ojs.ub@uni-bielefeld.de Introduction DSpace 1.6 has been extended with a new Apache Solr based statistics solution. This contribution to DSpace is the open-source version of @mire's commercial "Content and Usage Analysis" DSpace module. The DSpace 1.6 statistics offer storage of usage data including bitstream downloads, item display page visits, collection and community homepage visits, and more. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/71 DSpace Discovery: Unifying DSpace Search and Browse with Solr 2019-06-05T12:54:16+00:00 Mark Diggory ojs.ub@uni-bielefeld.de One key innovation long awaited by the DSpace community is a more intuitive and unified search and browse experience. NESCent and @mire NV have collaborated to create a new Faceted Search and Browse experience for NESCent's DSpace repository, Dryad. DSpace Discovery is a modular add-on for DSpace XMLUI that replaces DSpace search and browse with Solr. The implementation of Discovery's services utilizes the DSpace Services API originally developed for DSpace 2.0 and back-ported for use within the recent release of DSpace 1.6.0. Thus, DSpace Discovery represents the next stage in @mire's DSpace 2.0 development initiative.
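Both the Solr-based statistics module and DSpace Discovery above rest on issuing queries against a Solr index. The sketch below shows the kind of faceted query such a discovery layer sends; the core URL and the "subject" facet field are placeholders, not the actual Dryad or DSpace schema, while the request parameters and the flat facet-count response format are standard Solr:

```python
# Minimal faceted Solr query of the sort a Solr-backed discovery layer issues.
import json
import urllib.parse
import urllib.request

SOLR_SELECT = "http://localhost:8983/solr/search/select"  # assumed core URL

def faceted_search(query: str, facet_field: str = "subject", rows: int = 10):
    params = urllib.parse.urlencode({
        "q": query,
        "rows": rows,
        "wt": "json",          # JSON response writer
        "facet": "true",       # enable faceting
        "facet.field": facet_field,
        "facet.mincount": 1,
    })
    with urllib.request.urlopen(f"{SOLR_SELECT}?{params}") as response:
        data = json.load(response)
    docs = data["response"]["docs"]
    # Solr returns facet counts as a flat [value, count, value, count, ...] list.
    flat = data["facet_counts"]["facet_fields"][facet_field]
    facets = list(zip(flat[::2], flat[1::2]))
    return docs, facets

docs, facets = faceted_search("climate")
print(f"{len(docs)} hits; top facets: {facets[:5]}")
```

The facet counts drive the clickable refinement lists that make the "faceted search and browse" experience possible without separate browse indexes.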
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/72 DSpace Helps Irish National Learning Repository To Change Its Focus 2019-06-05T12:54:15+00:00 Catherine Bruen ojs.ub@uni-bielefeld.de Gavin Henric ojs.ub@uni-bielefeld.de Bob Strunz ojs.ub@uni-bielefeld.de This paper describes how the Irish National Digital Learning Resource Repository (NDLR) has implemented a DSpace-based platform which enables it to use its limited resources more effectively to serve customer need. The implementation of a DSpace-based solution in partnership with two private-sector service providers has permitted a refocusing of the available resources from software licensing to research and development of the platform. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/73 DSpace Under the Hood: How DSpace works 2019-06-05T12:54:14+00:00 Stuart Lewis ojs.ub@uni-bielefeld.de Leonie Hayes ojs.ub@uni-bielefeld.de Elin Stangeland ojs.ub@uni-bielefeld.de Kim Shepherd ojs.ub@uni-bielefeld.de Richard Jones ojs.ub@uni-bielefeld.de Monica Roos ojs.ub@uni-bielefeld.de Whilst you don't need to be a mechanic to drive a car, it is helpful to have a basic understanding of how a car works, which bits do different jobs, and how to top up your oil and pump up your tyres/tires. This presentation will give an overview of the DSpace architecture and will give you enough knowledge to understand how DSpace works. By knowing this, you will also learn about ways DSpace could be used, and ways in which it can't be used. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/74 DSpace Under the Hood: The development process and YOUR role in it 2019-06-05T12:54:13+00:00 Stuart Lewis ojs.ub@uni-bielefeld.de Leonie Hayes ojs.ub@uni-bielefeld.de Elin Stangeland ojs.ub@uni-bielefeld.de Kim Shepherd ojs.ub@uni-bielefeld.de Richard Jones ojs.ub@uni-bielefeld.de Monica Roos ojs.ub@uni-bielefeld.de DSpace development is undertaken by the DSpace community. No one person or organisation is in charge, and without contributions from the DSpace community the platform would not continue to develop and evolve. Sometimes it can appear that there are people in charge, or that unless you are a technical developer there is no way or need to contribute. This presentation will explain how DSpace development usually takes place, where and who has input at different stages, and will equip you to contribute further, or help you contribute for the first time. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/76 DuraSpace strategic overview 2019-06-05T12:54:11+00:00 Sandy Payette ojs.ub@uni-bielefeld.de Bradley McLean ojs.ub@uni-bielefeld.de Thornton Staples ojs.ub@uni-bielefeld.de Tim Donohue ojs.ub@uni-bielefeld.de Michele Kimpton ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/78 Enhanced Content Models 2019-06-05T12:54:09+00:00 Asger Askov Blekinge ojs.ub@uni-bielefeld.de Kåre Fiedler Christiansen ojs.ub@uni-bielefeld.de With the release of Fedora Commons 3.0, the Content Model Architecture (CMA) was added to Fedora. It was not meant as an end-all solution, but rather as a starting point for building more advanced content models. The Fedora team expected the user community to figure out the best ways to use and amend this design.
Now that the CMA has been around for a while, certain improvements have, by agreement of the Fedora Committer Team, matured enough to be brought back into the core Fedora system, probably with the coming Fedora 3.4 release. This proposal aims to present these improvements to the Fedora community. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/84 Harvest: A Digital Object Search and Discovery System for Distributed Collections with Different File Types and Structures 2019-06-05T12:54:03+00:00 Frances Webb ojs.ub@uni-bielefeld.de Joy Paulson ojs.ub@uni-bielefeld.de The Harvest site, http://harvest.mannlib.cornell.edu, is implemented using Fedora for data management, Solr/Lucene for search, and Drupal for the user interface. Its goal is to provide an integrated search interface in which differences in format, structure and location are disguised in favor of treating objects that are conceptually alike as like, parallel objects. This is done by building Fedora content models that keep track of the complexity while providing services normalized to the objects' conceptual types; Lucene search documents that are fully normalized to hide implementation differences; and a Drupal front end that can treat all of the objects as generic objects until and unless specialized front-end services are built. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/87 Improving DSpace Backups, Restores, Migrations 2019-06-05T12:54:00+00:00 Tim Donohue ojs.ub@uni-bielefeld.de In the past, backing up your DSpace contents has involved semi-synchronized backups of both your database and your files. Although this procedure generally works fine, it can prove problematic when you suddenly need to restore the contents of a single Community, Collection or Item (both metadata and files). There is also the problem of metadata and content files residing in separate backups - if either one of these backups becomes corrupted, it is nearly impossible to completely restore your DSpace contents. This talk will introduce a new DSpace feature being developed as part of the DuraCloud integration project. This new feature will allow you to export your entire DSpace Community / Collection hierarchy (including all Items, and their metadata and files) into a series of METS-based packages. These METS-based packages may be used to restore all of your DSpace contents (into another DSpace), or just the contents of a single Community, Collection or Item. These packages can also provide a more stable way to back up your DSpace contents, or an additional means of getting content in or out of DSpace. This work is based on a DSpace plugin built as part of the MIT PLEDGE project; however, it has been updated to allow for a complete restore of your DSpace hierarchy. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/90 Integrating DSpace in a Multinational Context: Challenges and Reflections 2019-06-05T12:53:58+00:00 Kyle Strand ojs.ub@uni-bielefeld.de Background Sharing and accessing knowledge is possible in ways that were not available even in the recent past. These methods make the exchange of ideas, experiences, and knowledge efficient, relatively inexpensive, and simple.
A number of international standards have been developed and agreed upon by academic institutions, research enterprises, governments, and others which increase the ease of knowledge exchange through interoperability of systems and consistency in the presentation of expected features and data. With an internet connection and a few clicks of a computer's mouse, a vast array of knowledge is instantly accessible. The Inter-American Development Bank (IDB), established to accelerate economic and social development in Latin America and the Caribbean, is committed to ensuring its knowledge products are accessible and visible for Bank employees, constituents of borrowing and non-borrowing member countries, partners and other practitioners in the region and the public at large. Due to the nature of the IDB, access to and the sharing of knowledge are critical for the success of its development mission. It is of the utmost importance to the institution, its mission, and its role in the region that the knowledge produced by the Bank be easily accessible and visible for all in the development community and beyond. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/92 Integrating research output in UPC repositories 2019-06-05T12:53:56+00:00 Antonio Juan Prieto-Jimenez ojs.ub@uni-bielefeld.de Yolanda Cacho-Figueras ojs.ub@uni-bielefeld.de Ruth Iñigo-Robles ojs.ub@uni-bielefeld.de Anna Rovira-Fernandez ojs.ub@uni-bielefeld.de Jordi Serrano-Muñoz ojs.ub@uni-bielefeld.de Introduction In 2007, the Universitat Politecnica de Catalunya (UPC) started a strategic project for the university called DRAC (Descriptor de la Recerca i l'Activitat Academica, or Academic Activity and Research Descriptor), the main goal of which was the development of a new information system for managing, evaluating and publishing research output. The Library participated in the project from the beginning. The other two partners were OTRDI (the office managing the research output) and UPCnet as the technological partner. DRAC, the new software, has the following applications: the main one is that academics can produce a curriculum vitae following the national standard for presentation to national and regional administrations. It is also the tool that allows research groups to publish their output on the Internet. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/93 Integrating true multilingual capabilities into an Institutional Repository : Building the World Health Organization's Institutional Repository for Information Sharing 2019-06-05T12:51:42+00:00 Michael Guthrie ojs.ub@uni-bielefeld.de Cristiane de Oliveira ojs.ub@uni-bielefeld.de Philippe Veltsos ojs.ub@uni-bielefeld.de Yousef Elbes ojs.ub@uni-bielefeld.de Ian Roberts ojs.ub@uni-bielefeld.de Dorothy Leonor ojs.ub@uni-bielefeld.de Graham Triggs ojs.ub@uni-bielefeld.de Hayden Young ojs.ub@uni-bielefeld.de Introduction In a global context, how do we facilitate dissemination of and access to the material in a repository if it is primarily searchable and retrievable in only one or two languages? It has been observed that much research and many public health guidelines remain unknown to large numbers of researchers, health workers and the general public when they can only be accessed in one language or another.
How do we promote integration of various information sources in an international organization with 147 country offices, six regional offices and one headquarters, and with material being published in 6 official languages and 53 non-official languages? Research ethics should start considering, at the design stage, the reach of the methods used and the results obtained beyond the boundaries of the research language. Access to information in as many languages as possible should become a major component of any accessibility-related debate. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/96 Interoperability Issues Between Learning Object Repositories and Metadata Harvesters 2019-06-05T12:51:38+00:00 Ricard de la Vega ojs.ub@uni-bielefeld.de Jordi Conesa ojs.ub@uni-bielefeld.de Julia Minguillon ojs.ub@uni-bielefeld.de In this paper we describe an open learning object repository on Statistics based on DSpace which contains true learning objects, that is, exercises, equations, data sets, etc. This repository is part of a large project intended to promote the use of learning object repositories as part of the learning process in virtual learning environments. This involves the creation of a new user interface that provides users with additional services such as resource rating, commenting and so on. Both aspects make traditional metadata schemes such as Dublin Core inadequate, as there are resources with no title or author, for instance, because those fields are not used by learners to browse and search for learning resources in the repository. Therefore, exporting OAI-PMH compliant records using OAI-DC is not possible, thus limiting the visibility of the learning objects in the repository outside the institution. We propose an architecture based on ontologies and the use of extended metadata records for both storing and refactoring such descriptions. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/98 JorumOpen - customising DSpace for a national repository of Open Educational Resources 2019-06-05T12:51:36+00:00 Ryan Hargreaves ojs.ub@uni-bielefeld.de Gareth Waller ojs.ub@uni-bielefeld.de Christine Rees ojs.ub@uni-bielefeld.de Laura Shaw ojs.ub@uni-bielefeld.de Jorum, a JISC-funded service in development begun in 2002, has been committed to collecting and sharing learning and teaching materials within the UK Further and Higher Education community. With the growing interest in and increase in "open" content, Jorum released a new option in January 2010 - JorumOpen, which provides a focal point for finding nationally hosted learning materials developed by the UK Further and Higher Education sector. JorumOpen allows any user, from any country, free and unrestricted access to learning materials licensed under a Creative Commons licence. The central component of JorumOpen is an open source digital repository based on a modified version of DSpace 1.5.2. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/100 Maintaining Standards - managing metadata consistency across collections and tools 2019-06-05T12:51:34+00:00 Michael Durbin ojs.ub@uni-bielefeld.de The flexibility to store any type of metadata in a Fedora repository is one of its greatest strengths, but it externalizes the burden of metadata standardization and validation.
This burden is exacerbated by the fact that during the course of a digital object’s life, its metadata may be modified by several different applications or utilities operated by any number of users. This talk will cover strategies for maintaining consistency and for the creation of, and adherence to, institution-specific policies or standards for metadata quality. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/102 Move On Up: TCD TARA and the Value of Dspace 1.6 2019-06-05T12:51:31+00:00 Niamh Brennan ojs.ub@uni-bielefeld.de Gavin Henrick ojs.ub@uni-bielefeld.de Before: In 2006 Enovation Solutions, with Trinity College Dublin, developed TARA - Trinity's Access to Research Archive. TARA was based on DSpace customised with specific enhancements for TCD and integrated with the university's CERIF-compliant CRIS, the TCD Research Support System (RSS). The version of DSpace deployed was 1.3.2. Over the years, much further integration, customisation and configuration of complex workflows, metadata fields and web services were implemented as TARA became an integral part of the university's fully-integrated research environment. However, it was realised that the version of DSpace was starting to creak, and that an upgrade was needed to move the repository to a new level of capacity, functionality and responsiveness to the needs of the research community. In 2009 TCD approached Enovation Solutions to investigate and plan the upgrade of TARA from DSpace v1.3.2. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/113 Primo discovery and delivery of Fedora content 2019-06-05T12:51:19+00:00 Maude Frances ojs.ub@uni-bielefeld.de Stefania Riccardi ojs.ub@uni-bielefeld.de Carmel Carlsen ojs.ub@uni-bielefeld.de Angela Dawson ojs.ub@uni-bielefeld.de The paper describes and demonstrates the use of Primo as the discovery layer for a Fedora repository. Primo is an Ex Libris product designed to be a one-stop solution for discovery and delivery of resources from various sources. Fedora/Primo systems have been deployed on two UNSW eResearch projects, based on the requirements of research groups in public health and the social sciences. Planning has commenced for the implementation of Primo on existing Fedora/VITAL systems, including MemRE (Membranes Research Environment). With the general release of Primo 3 in April 2010, Primo will also replace VITAL as the search and discovery layer of the institutional repository. The presentation demonstrates KnowlHEG, an electronic gateway for Human Resources for Health (HRH) material relating to Asia and the Pacific region, which was jointly developed by the University Library and the School of Public Health and Community Medicine (SPHCM) at UNSW. Primo provides the user interface, search functionality and persistent URLs on top of a Fedora repository. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/115 Public Policy Research in PolicyArchive: Use of DSpace When Metadata Meets Open Access 2019-06-05T12:51:16+00:00 Sarah Buchanan ojs.ub@uni-bielefeld.de Legislation at the state and national level is shaped through the conduct of research on the existing conditions, infrastructure, and community practice of specific areas of social inquiry.
Public policy research, both within the academy and in government, serves a vital role in preparing legislators and policymakers to enact well-informed measures that are grounded in regional studies. Such work is becoming increasingly urgent as social issues and phenomena become more layered, complex, and interdisciplinary, and require more nuanced investigations. The results of this research, while essential at the time of need, have not historically been collected in a systematic manner that would enable access to the data beyond the immediate time period. A clear need has been expressed by researchers in many contexts who seek access to the rich content produced by local organizations, publishers, specialized institutes, and independent bodies of experts. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/117 Re-thinking Fedora's storage layer: A new high-level interface to remove old assumptions and allow novel use cases 2019-06-05T12:51:14+00:00 Aaron Birkland ojs.ub@uni-bielefeld.de Asger Askov Blekinge ojs.ub@uni-bielefeld.de Traditionally, the pluggable storage interface in Fedora has followed a "low-level" paradigm where objects and datastreams are presented to the storage layer as independent, anonymous blobs of data. This arrangement has proven simple, reliable, and generally flexible. In the past few years, however, there has been an increasing need for Fedora to mediate storage in more complex scenarios. Managing large numbers of big datastreams, multiplexing storage between different devices or cloud storage, and archiving content in a transparent manner are tasks that are currently difficult to achieve through Fedora. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/128 Solrizer: Pragmatically Connecting Search, Management and Indexing in a Repository Solution 2019-06-05T12:50:27+00:00 Matt Zumwalt ojs.ub@uni-bielefeld.de Any repository solution provides facilities for Creation, Management, and Editing of Content as well as facilities for Searching and Browsing through that content. Experience has shown that when a solution binds these two areas of functionality together too tightly, the system becomes brittle and unworkable, discouraging innovation. Our work on the Hydra project has produced a flexible and intuitive solution that combines these two areas in an almost entirely decoupled fashion. This solution, which is already working in multiple Hydra applications, is built on a three-part pattern where Blacklight handles Search and Discovery, ActiveFedora handles Creation, Management and Editing of Content, and a small application called Shelver supplies the crossover point by indexing the content into Solr so that it will show up in Blacklight. This three-part approach reflects a strong pattern for designing and/or improving repository solutions. The main pivot of this approach is to treat indexing as its own separate part of the application and to allow the indexing process to evolve constantly as part of the application development cycle. This work is the product of combining established best practices, best-of-breed software, and lessons learned from an iterative approach to application development. While our implementation is focused on Fedora Repositories, the software could be used in multiple contexts and the pattern is certainly applicable to any content-oriented application.
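The decoupled-indexing pattern described in the Solrizer abstract can be sketched as a small, standalone process that reads object metadata from the repository and writes Solr documents. The sketch below is in Python purely for illustration (Hydra's Shelver and ActiveFedora are Ruby); the object endpoint, field names and Solr core URL are assumptions, not the project's actual APIs:

```python
# Sketch of a decoupled indexer: fetch an object's metadata, map it to a flat
# Solr document, and post it to Solr's JSON update handler.
import json
import urllib.request

REPO_OBJECT_URL = "http://localhost:8080/repository/objects/{pid}"  # assumed API
SOLR_UPDATE_URL = "http://localhost:8983/solr/search/update?commit=true"

def fetch_object(pid: str) -> dict:
    """Read a JSON description of a repository object (assumed response shape)."""
    with urllib.request.urlopen(REPO_OBJECT_URL.format(pid=pid)) as resp:
        return json.load(resp)

def to_solr_doc(obj: dict) -> dict:
    """Map repository metadata onto flat Solr fields (illustrative mapping)."""
    return {
        "id": obj["pid"],
        "title_t": obj.get("title", ""),
        "creator_t": obj.get("creator", ""),
        "subject_facet": obj.get("subjects", []),
    }

def index(pid: str) -> None:
    doc = to_solr_doc(fetch_object(pid))
    payload = json.dumps([doc]).encode("utf-8")
    request = urllib.request.Request(
        SOLR_UPDATE_URL, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # hand the document to Solr for indexing

if __name__ == "__main__":
    index("demo:1")  # hypothetical object identifier
```

Because the mapping function is the only piece that knows both sides, the discovery interface and the content-management code can evolve independently, which is the point the abstract makes.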
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/130 Symplectic Repository Tools: Deposit Lifecycle with Interchangeable Repositories 2019-06-05T12:50:25+00:00 Richard Jones ojs.ub@uni-bielefeld.de This paper presents the development of software at Symplectic Ltd to provide a full CRUD (Create, Retrieve, Update, Delete) interface for a number of digital repositories using common, open standards: Symplectic Repository Tools [http://www.symplectic.co.uk/products/repositorytools.html]. The primary objective of this work has been to integrate a research management system (Symplectic Elements [http://www.symplectic.co.uk/products/publications.html]) with both DSpace [http://www.dspace.org/] and Fedora [http://www.fedoracommons.org] (specifically, for the University of Oxford), such that the academic's experience of managing their repository content is simple and straightforward and does not divert them from the overall process of managing their research outputs. The consequences of this include increased uptake of the repository and higher throughput of full-text content. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/136 Towards Interoperable Preservation Repositories (TIPR) 2019-06-05T12:50:18+00:00 Priscilla Caplan ojs.ub@uni-bielefeld.de William Kehoe ojs.ub@uni-bielefeld.de Joseph Pawletko ojs.ub@uni-bielefeld.de The TIPR Project, Towards Interoperable Preservation Repositories, began in October 2008; its participants are the Florida Center for Library Automation, the Bobst Library at New York University, and Cornell University Library. Our goal has been to develop, test, and promote a standard format for exchanging information packages among dissimilar preservation repositories - an intermediary information package that all repositories can read and write, overcoming the mismatch between repository types. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/137 Using DSpace as a Closed Research Repository 2019-06-05T12:50:17+00:00 Ianthe Hind ojs.ub@uni-bielefeld.de Robin Taylor ojs.ub@uni-bielefeld.de Introduction This year the University of Edinburgh introduced a dual DSpace (http://www.dspace.org/) repository architecture: a closed repository to hold all research outputs (no full-text required and password protected) and an open access repository (full-text only). This presentation will focus on the closed repository. In our repository we have our university hierarchy: colleges and schools as the communities and the academic staff at the collection level. This is not the conventional way of setting up a DSpace repository, but it immediately allows an author to be associated with a collection. We used the item-based submission from the 2009 Google Summer of Code Submission Enhancements (http://www.fedora-commons.org/confluence/display/DSPACE/Google+Summer+of+Code+2009+Submission+Enhancements). This has allowed us to display only the relevant metadata, based on publication type, during submission. We have added functionality for an academic (on the collection level) to export a list of their publications to their own personal webpage by inserting some JavaScript (or as XML). This dynamically fetches the list of their publications from the repository at each load, with the most recent publications and the ordering defined by the academic reflected in the list.
The same can be done by a research administrator (on the community level) to export a list of publications for their school or research group. We have developed the ability to export items across to our open repository using SWORD - open access items can be copied across on deposit, and those under embargo can be copied across once the embargo period has passed. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/138 Using DSpace for Publishing Electronic Theses and Dissertations 2019-06-05T12:50:16+00:00 James Halliday ojs.ub@uni-bielefeld.de Randall Floyd ojs.ub@uni-bielefeld.de The IUScholarWorks Repository is a DSpace-based institutional repository for the dissemination and preservation of Indiana University's scholarly output. Some time ago, our team made a decision to incorporate electronic theses and dissertations (ETDs) into our DSpace repository, and this created several technical challenges for us. Getting ETDs into DSpace is a challenge that a number of institutions have tackled recently, and several innovative solutions have been found, such as Vireo, the ETD submission management tool from the Texas Digital Library. However, we were faced with a number of requirements in our ETD workflow that had not yet been encountered by other institutions and that required some interesting solutions. In this proposal, we will provide an outline for a presentation that will discuss these challenges and the solutions that were envisioned. We will also provide an update on our current progress towards implementing our plans, and discuss the future work that is left to be done. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/61 Building a DDC Annotated Corpus from OAI Metadata 2019-06-05T12:54:54+00:00 Mathias Lösch ojs.ub@uni-bielefeld.de Ulli Waltinger ojs.ub@uni-bielefeld.de Wolfram Horstmann ojs.ub@uni-bielefeld.de Alexander Mehler ojs.ub@uni-bielefeld.de A frequently overlooked benefit of open access publications is that they are an easily accessible and cost-effective data source for research disciplines like text mining, natural language processing or computational linguistics. In those fields, linguistic data is usually managed in the form of corpora, i.e. machine-readable bodies of text that represent a particular variety of language. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/81 ESCAPE: A generic tool for enhanced scientific communication 2019-06-05T12:54:06+00:00 Maarten van Bentum ojs.ub@uni-bielefeld.de Dennis Vierkant ojs.ub@uni-bielefeld.de Jan M. Gutteling ojs.ub@uni-bielefeld.de General scope In order to enhance the communication of research results as part of a network of actors in a particular field, one wants to: relate relevant objects (documents, persons, institutions, projects, ...) on the basis of content and describe/annotate these relations; communicate and present these aggregated objects for various target groups, not only scientists but also policy makers, journalists, companies, and the general public; and enhance this communication by commenting on and tagging related objects. The tool ESCAPE allows users to aggregate digital objects stored at any location and to describe, annotate, comment on and tag the relations between these objects.
The system allows not only formal relations (like bibliographic metadata) but especially "content relations" concerning topics, reviews, comments, discussions, applications, etc. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/86 Implementing citation management and report generation value-added services over OAI-PMH compliant repositories 2019-06-05T12:54:01+00:00 Nikos Houssos ojs.ub@uni-bielefeld.de Christina Paschou ojs.ub@uni-bielefeld.de Ioanna-Ourania Stathopoulou ojs.ub@uni-bielefeld.de Konstantinos Stamatis ojs.ub@uni-bielefeld.de Despina Hardouveli ojs.ub@uni-bielefeld.de The National Documentation Center (EKT) has developed HELIOS (http://helios-eie.ekt.gr) - the institutional repository of the National Hellenic Research Foundation (NHRF), which aims at collecting the scientific work of its associate researchers. DSpace has been used as the repository platform in the implementation of HELIOS. According to the repository literature (A DRIVER’s Guide to European Repositories, Amsterdam University Press, 2008), offering value-added services to researchers can be an important factor for repository take-up, able to significantly increase deposits through self-archiving. Therefore, in order to encourage the usage of HELIOS among the NHRF researchers, an application providing value-added services over the repository has been developed. In brief, this application harvests the digital repository's data and presents them outside the repository's framework, enabling citation management and configurable custom reporting, for example producing publication lists per researcher and institute exactly in the format applied in the institute's annual report. The application is in operation on top of the HELIOS DSpace-based repository; however, it has been designed and implemented to depend only on information retrieved via OAI-PMH, so that it can work with any OAI-PMH compliant repository platform (DSpace, EPrints, Fedora, etc.). 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/109 Pedocs and kopal: Co-operation of subject repository and long-term archiving 2019-06-05T12:51:23+00:00 Julia Kreusch ojs.ub@uni-bielefeld.de The Information Center for Education of the German Institute for International Educational Research (DIPF) in Frankfurt am Main offers information and advice services for all areas of education and educational science. These include online portals, full-text and bibliographic databases, information systems and participative Web 2.0 applications. Since 2008, the Information Center for Education has operated its subject repository "pedocs", which focuses on digital publications in the field of educational science, pedagogical practice and the history of education. For the time being this repository is run as a project in parallel with a second project, "Long term preservation - pedagogics", which aims to preserve the texts acquired, recorded and stored in the "pedocs" repository on a long-term basis. The synchrony of both projects offers an opportunity to prepare the digital objects in pedocs from the outset in a way that makes them suitable for subsequent transfer into a long-term archive. While the repository is solely managed by the Information Center for Education, the archiving will be operated in co-operation with the German National Library (DNB).
The poster will outline the workflows which have been developed and established for the acquisition of open-access publications on the one hand and for the long-term preservation of the text objects on the other. It will illustrate which aspects (i.e. legal, technical, organisational) had to be considered to prepare the objects for transfer into the long-term archive. The co-operation and resource sharing between DIPF and DNB, which take place in the framework of a superordinate project using the archiving system "kopal" (a joint development of the DNB and the Göttingen State and University Library), will also be presented. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/63 Complete Preservation with EPrints 2019-06-05T12:54:52+00:00 David Tarrant ojs.ub@uni-bielefeld.de Steve Hitchcock ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/80 EPrints Funding Data and Workflow 2019-06-05T12:54:07+00:00 William J. Nixon ojs.ub@uni-bielefeld.de Lesley Drysdale ojs.ub@uni-bielefeld.de This short paper provides details of the addition of new fields for funder and award data and the creation of a new Funding option in the deposit workflow. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/99 Kultivating Kultur 2019-06-05T12:51:35+00:00 Carlos Silva ojs.ub@uni-bielefeld.de Stephanie Meece ojs.ub@uni-bielefeld.de Institutional Repositories (IRs) within the UK have traditionally focused upon text-based research and have had a low uptake within the creative arts. The Kultur Project, funded by the Joint Information Systems Committee (JISC) for the period 2007-2009, was a highly successful collaboration between the University of Southampton (including Winchester School of Art), University of the Arts London, University for the Creative Arts and the Visual Arts Data Service (VADS). Using EPrints software, the project investigated how best to store, share and promote research in the creative arts in a way that could function as a multimedia showcase for digital versions of creative works as well as documenting performances, exhibitions and installations. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/101 MePrints: Building User Centred Repositories 2019-06-05T12:51:32+00:00 D. Millard ojs.ub@uni-bielefeld.de H. Davis ojs.ub@uni-bielefeld.de Yvonne Howard ojs.ub@uni-bielefeld.de S. Francois ojs.ub@uni-bielefeld.de Patrick McSweeney ojs.ub@uni-bielefeld.de Debra Morris ojs.ub@uni-bielefeld.de M. Ramsden ojs.ub@uni-bielefeld.de S. White ojs.ub@uni-bielefeld.de Over the last few years we have been working to reinvent teaching and learning repositories, learning from the best practices of Web 2.0. Over this time we have successfully deployed a number of innovative repositories, including Southampton University EdShare, The Language Box, The HumBox, Open University’s LORO and Worcester Learning Box.
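Several of the services described above - the HELIOS reporting application, the DDC-annotated corpus built from OAI metadata, and the learning-object harvesters - rely only on OAI-PMH harvesting. The following is a minimal sketch of such a harvest; the base URL is a placeholder, while the verbs and parameters (ListRecords, metadataPrefix=oai_dc, resumptionToken) follow the published OAI-PMH 2.0 protocol:

```python
# Minimal OAI-PMH harvesting loop over Dublin Core records, following
# resumption tokens until the repository reports no more pages.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"
BASE_URL = "https://repository.example.org/oai/request"  # hypothetical endpoint

def harvest_titles(base_url: str = BASE_URL):
    """Yield dc:title values from every harvested record."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        url = f"{base_url}?{urllib.parse.urlencode(params)}"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(f"{OAI}record"):
            title = record.find(f".//{DC}title")
            if title is not None and title.text:
                yield title.text
        token = tree.find(f".//{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

if __name__ == "__main__":
    for i, title in enumerate(harvest_titles()):
        print(title)
        if i >= 9:  # sample only the first ten records
            break
```

Because the loop touches nothing repository-specific, the same harvester works against DSpace, EPrints or Fedora front ends, which is exactly the portability argument the HELIOS abstract makes.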
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/103 Moving Southampton ePrints to Oracle 2019-06-05T12:51:30+00:00 Wendy White ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/111 Portal Workflow Integration: Cardiff University I-WIRE Project 2019-06-05T12:51:21+00:00 Scott Hill ojs.ub@uni-bielefeld.de The I-WIRE project is designing and developing an enhanced deposit workflow that will be presented to Cardiff University's researchers via our Modern Working Environment portal. Presenting the workflow in this way gives us opportunities to integrate the deposit process and data with other research-related processes and systems. We are also exploring DOI deposit and a Web of Science import within the same portal, all of which is aimed at making it easier for academics to deposit their publications. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/119 Reinventing Teaching and Learning Repositories 2019-06-05T12:50:37+00:00 Yvonne Howard ojs.ub@uni-bielefeld.de Through the Faroes and OneShare projects the EdShare team have developed an innovative approach to teaching and learning repositories, learning from the best practices of Web 2.0 and re-imagining these repositories from the ground up as living sites, whether for a community or an institution. Many existing teaching and learning repositories base themselves closely on the research repository model, but research repositories are about archiving, and ordinary practitioners rarely want to archive their teaching materials. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/120 Report on the Development of Breadcrumbs Navigation Feature in EPrints 2019-06-05T12:50:36+00:00 Tomasz Neugebauer ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/121 Report on the Development of ETD-MS Export Plugin in EPrints 2019-06-05T12:50:35+00:00 Tomasz Neugebauer ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/123 Repository Software as a Platform for the Registry of Open Access Repositories 2019-06-05T12:50:33+00:00 T. Brody ojs.ub@uni-bielefeld.de Leslie Carr ojs.ub@uni-bielefeld.de S. Harnad ojs.ub@uni-bielefeld.de We have migrated the ROAR service to a repository software-based platform. The goal of this project was to reduce the administrative overhead for us and improve the experience for users by enabling them to control and update their own records. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/126 Research reporting using Eprints at The University of Northampton 2019-06-05T12:50:29+00:00 Miggie Pickton ojs.ub@uni-bielefeld.de Each year, research administrators at The University of Northampton produce an "Annual Research Report" for each of the university's six Schools. Before 2007, and in the absence of any centralised research database, research details were simply collated and word-processed into one-off documents. NECTAR provided the opportunity to store bibliographic details in a systematic manner and the potential to re-use these data for research reporting.
2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/129 Symplectic Integration 2019-06-05T12:50:26+00:00 Richard Jones ojs.ub@uni-bielefeld.de This paper presents the development of software at Symplectic Ltd to provide a full CRUD (Create, Retrieve, Update, Delete) interface to EPrints using common, open standards: Symplectic Repository Tools. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/133 The State and Future of EPrints 2019-06-05T12:50:22+00:00 Leslie Carr ojs.ub@uni-bielefeld.de Les Carr, the EPrints Technical Director, will give an overview of the developments of the past year, and a preview of upcoming features. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/139 Using EPrints to Understand Research / The Semantics of Reading Lists 2019-06-05T12:50:15+00:00 Patrick McSweeney ojs.ub@uni-bielefeld.de The department of Electronics and Computer Science (ECS) at the University of Southampton has about two hundred research staff. Often staff members will have overlapping research areas without knowing it. It is useful to discover what the people around you are working on and to identify those with similar research interests so that ideas can be shared. Electronics and computer science are fields which evolve rapidly, so a researcher's interests change or drift. This means that a researcher's list of interests six months ago may be quite different from their research interests now. Rather than encouraging researchers to constantly update their own interests, this information could be derived automatically from their reading material. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/140 Video and Mass Storage 2019-06-05T12:50:14+00:00 David Tarrant ojs.ub@uni-bielefeld.de There are a number of challenges in handling large amounts of data for multimedia content - the sheer size of raw HD video data and the many versions of a video created during production and dissemination require a bespoke infrastructure that is different in nature from normal browser-based solutions. This session looks at the vidEPrints customisation designed to integrate with an institution's video production environment and handle dissemination via multiple services including YouTube and iTunes U. 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/91 Integrating Repositories With Research Infrastructure: The Astronomical Virtual Observatory 2019-06-05T12:53:57+00:00 Francoise Genova ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/141 We can Build Amazing Public Islands - but Should We? 2019-06-05T12:50:13+00:00 Sandy Payette ojs.ub@uni-bielefeld.de 2010-12-31T00:00:00+00:00 Copyright (c) 2010 Sandy Payette https://biecoll.ub.uni-bielefeld.de/index.php/or/article/view/122 Repositories and Linked Open Data: the view from myExperiment 2019-06-05T12:50:34+00:00 David De Roure ojs.ub@uni-bielefeld.de While some repositories are focused on data, the myExperiment project has demonstrated the value of sharing the methods that are used to process that data - sharing know-how and building new capabilities through the community.
Evolving usage of the website provides glimpses of the future behaviour of researchers and an exploration of what researchers might be sharing in the future instead of papers. This exploration of social sharing and ad hoc reuse has taken the project into the world of scholarly research objects, linked data and what might be described as "Linked Open Methods". We now see researchers beginning to share new methods that operate at this next level of research. 2010-12-31T00:00:00+00:00 Copyright (c)
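The myExperiment abstract closes the collection with linked data and "Linked Open Methods". A minimal, hypothetical sketch of the linked-data access pattern it alludes to is given below: a client dereferences a resource URI and asks, via standard HTTP content negotiation, for an RDF/XML representation. The resource URI is a placeholder, not an actual myExperiment identifier:

```python
# Minimal sketch of Linked Open Data retrieval via HTTP content negotiation;
# the snippet only counts rdf:Description blocks as a demonstration.
import urllib.request
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
RESOURCE = "https://www.example.org/workflows/42"  # hypothetical research object URI

def fetch_rdf(uri: str) -> ET.Element:
    """Ask the server for an RDF/XML representation of the resource."""
    request = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(request) as response:
        return ET.parse(response).getroot()

if __name__ == "__main__":
    root = fetch_rdf(RESOURCE)
    descriptions = root.findall(f"{RDF}Description")
    print(f"{len(descriptions)} rdf:Description blocks about {RESOURCE}")
```

Dereferenceable URIs plus content negotiation are what let a repository's records, people and methods be linked into the wider web of data rather than sitting behind a local search interface.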