This webinar was presented to the RDAP community on December 2, 2019 at 12 pm EST.
The goal of the webinar was to hear from the RDAP community about their experiences with institutional research data policies that regulate the ownership, management, and transfer of research data in an institution.
The webinar organizing committee was Sophie Hou, Amy Schuler, and Clara Llebot.
The invited panelists were:
Kristin Briney, Biology & Biochemistry Librarian, California Institute of Technology (Caltech),
Heather Coates, Digital Scholarship & Data Management Librarian / Co-Director, Center for Digital Scholarship, Indiana University-Purdue University Indianapolis,
Abigail Goben, Information Services and Data Management Librarian and Associate Professor, University of Illinois at Chicago,
Jonathan Petters, University Libraries Data Management Consultant and Curation Services Coordinator, Virginia Polytechnic Institute and State University.
Background/Use Case (provided by Clara Llebot of Oregon State University):
I work at a research-intensive university as the library data management specialist. I have worked occasionally on data policies during my time here, such as when we wrote the policy that regulates dataset reviews in our institutional repository. These policies were usually flexible, informative, and a helpful tool for me. Earlier this year I was asked to be part of a committee that would create an institutional research data management policy.
I was thrilled that the library was being asked to participate, and at the same time terrified that I had no idea what I was getting into. I have been generally interested in concepts around data ownership, the interactions between copyright and data, decision making regarding research data, etc., but I felt unprepared. An institutional research data policy is, from my perspective, a policy that affects a lot of people, and that has the potential to change behaviors and research practices in a way that I am definitely not used to. We are still beginning the process of creating the policy, so right now what I have is mostly questions, not answers, about what an institutional research data policy should say.
Main Discussion Questions:
1. Motivations for the policy
Is an institutional research data policy necessary in any institution?
What are the issues/gaps that we are trying to address through this policy?
What should be the goal of an institutional research data policy?
2. Roles and responsibilities
Who should be involved in creating this kind of policy?
How should the faculty be involved in the creation of this policy?
How should a research data policy be enforced?
How should students be affected by this policy?
3. Outcomes of existing data policies
What is the type of content addressed in an institutional research data policy? Should ownership be a part of it?
Are research data policies encouraging or deterring open data?
What can we do, when writing this type of policy, to make clear that the university supports open data? Or should this be in different policies?
What are some examples of situations that are easier/better because there is a research data policy at an institution?
Architecture has a unique responsibility to anticipate and shape the future. Buildings and spaces designed today will not be completed for years, and are then expected to remain relevant for decades or indefinitely. For this reason, design must always look forward to anticipate what changes in society and technology may bring and, conversely, what elements remain timeless. This studio sought to investigate one facet of the future that has already gone through many rounds of radical change: human work and the space that supports it.
A new formula has been developed that determines the passage of time. In the paper, this is particularized for cases of time dilation due to speed and gravity.
Additionally, using the previous equation, an interpretation of the nature of black holes, including their formation, growth, and dimensions, can be developed.
Moreover, based on all of the above, a different way of understanding mass and space is proposed, which ultimately implies an alternative expression relating mass and energy.
The development of complex and dependable systems like autonomous vehicles relies increasingly on the use of the Systems Modeling Language (SysML). In fact, SysML has become a de facto standard for systems engineering. With model-driven engineering, a SysML model serves as a reference for the early defect detection of the system under design: the earlier the errors are detected, the lower the cost of handling them. Mutation testing is a fault-based technique that has recently seen application to SysML behavioral models (e.g., state machine diagrams). Specifically, a system's state-transition design can be fed to a model checker, where mutants are automatically generated and then killed against the desired design specifications (e.g., safety properties). In this paper, we present a novel approach based on process mining to improve the effectiveness and efficiency of SysML mutation testing based on model checking. In our approach, the mutation operators are applied directly to the state machine diagram. These mutants are then fed as traces into a process mining tool and checked against the event logs. Our initial results indicate that the process mining approach kills more mutants faster than the model checking method.
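To make the mutation-and-kill idea concrete, here is a minimal, self-contained sketch in Python. The state machine, its events, and the event-log traces are all hypothetical toy stand-ins, not the paper's models or tooling; a mutant counts as killed here when its replay of a logged trace diverges from the original machine's, a simplified proxy for checking mutants against design specifications.

```python
# Toy state machine as a transition table: (state, event) -> next state.
machine = {
    ("Idle", "start"): "Driving",
    ("Driving", "obstacle"): "Braking",
    ("Braking", "clear"): "Driving",
    ("Driving", "stop"): "Idle",
}

def accepts(transitions, trace, initial="Idle"):
    """Replay one event-log trace; True iff every event has a transition."""
    state = initial
    for event in trace:
        if (state, event) not in transitions:
            return False
        state = transitions[(state, event)]
    return True

def mutants(transitions):
    """Transition-target mutation: redirect each transition to each other state."""
    states = {s for s, _ in transitions} | set(transitions.values())
    for key, target in transitions.items():
        for other in states - {target}:
            mutant = dict(transitions)
            mutant[key] = other
            yield mutant

# Hypothetical event-log traces recorded from the desired behavior.
log = [
    ["start", "obstacle", "clear", "stop"],
    ["start", "stop"],
]

# A mutant is "killed" when its replay of some logged trace diverges
# from the original machine's replay of that trace.
killed = sum(
    any(accepts(machine, trace) != accepts(mutant, trace) for trace in log)
    for mutant in mutants(machine)
)
print(f"killed {killed} of {len(list(mutants(machine)))} mutants")
```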
Artificial Intelligence (AI) is a cognitive science that enables humans to explore many intelligent ways to model our sensing and reasoning processes. Industrial AI is a systematic discipline that enables engineers to develop and deploy AI algorithms with repeatable and consistent success. In this paper, the key enablers for this transformative technology, along with their significant advantages, are discussed. In addition, this research explains Lighthouse Factories, an emerging designation for top manufacturers that have implemented Industrial AI in their manufacturing ecosystems and gained significant financial benefits. It is believed that this research will serve as a guideline and roadmap for researchers and industries toward the real-world implementation of Industrial AI.
The NATO and the EU Peacebuilding Missions Dataset was created for fuzzy-set Qualitative Comparative Analysis (fsQCA) as a method of researching how the outcomes of NATO and EU missions are influenced by organizational assets and decision-making in both organizations. Outcomes pertaining to these two sets of missions are intended to measure various aspects of organizational efficacy. There are two groups of variables: condition variables and outcome variables. In the next few sections, we explain how these two groups of variables were generated, what existing sources and datasets were used, and how mission indicators were generated. See the attached research note for more detailed information.
Condition Sets: Description
By and large, the condition sets that have been generated measure organizational assets for these NATO and EU missions, as well as patterns in their decision-making processes. Two critical organizational assets used for both sets of missions are their annual operational budget and their annual deployed personnel. The dataset also contains two control variables measuring operational legitimacy: the number of contributing nations and the number of UN resolutions passed in relevance to the situation in the area of deployment for the duration of the EU or NATO mission.
Operational Duration – duration of the operation (in months). For ongoing missions, we have used December 31, 2018 as the end date. All data reflect occurrences no later than December 31, 2018.
Type of Operation – based on their mandate, operations are classified as civilian (coded as 0), military (coded as 1) and hybrid (i.e. with military and civilian components, coded as 0.5).
Annual Operational Budget – total annual mission budget in USD. Sources include the SIPRI yearbook and peace operations database. In cases of missing data from the SIPRI yearbook, mission factsheets and original data from the mission have been used. This latter technique applies to the following missions: AMUK, AVSEC, BAM1, BAM2, CAP1, CAP2, MAM1, NAVF1, NAVF2, TMC1, EUAMI. If data are reported in EUR, the average exchange rate for the duration of the mission has been used to convert the cost. Data have been adjusted to reflect the operational budget over a 12-month period.
Average Annual Mission Personnel – reflects the average total number of personnel/staff supporting the NATO or EU peacebuilding mission per annum. Data have been collected from the SIPRI yearbook based on reporting of actual deployments on the ground. In cases when no data have been reported in the SIPRI yearbook/peace operations dataset, mission factsheets and original data from the mission have been used. The data have been averaged and adjusted for a 12-month period.
Days to Launch – describes the number of days needed from the time a decision was made by the IO's top decision-making body (the European Council or the NAC) to launch the mission to the time the mission was officially declared "operational." If no declaration that the mission is "fully operational" exists, landmark indicators that the mission is fully operational include: a ceremony on the ground marking the beginning of the mission, the appointment of a mission commander, or the first recorded operational presence involving activity on the ground. Sources include official EU and NATO documents announcing the decision to create the peacebuilding operation, as well as official documents, press releases, and reports in reliable media outlets (including news agencies) documenting an event that would indicate the mission is "fully operational."
Number of Contributing Nations – highest reported number of contributing nations for the duration of the NATO or EU peacebuilding operation.
UN Security Council Resolutions – total number of UN Security Council (UNSC) resolutions relevant to the area of conflict adopted during the NATO or EU mission. In cases when UNSC resolutions are relevant to multiple NATO and EU peacebuilding missions, they have been counted for all relevant missions.
Outcome Sets: Description
Outcome sets include various indicators created to measure operational efficacy. They include annual events contributing toward peace, conflict, and the mission's functioning; annual fatalities and annual deaths among mission personnel; and the annual difference in fatalities. A more detailed description of these indicators is included below:
Annual Peace Events – this is an annual indicator based on events chronologically recorded in the SIPRI yearbook that have contributed to the peace process in the conflict area where the NATO or EU mission has been deployed. Examples of peace events include steps taken to contribute to the peace process (e.g., the creation of a buffer zone, the cessation of hostilities, meetings intended to arrange a ceasefire or set up the peace process, political events related to or contributing toward the peace process, and the successful conclusion of a peace agreement). They may also include a decision of an international body (e.g., the UN Security Council, the UN General Assembly, or the UN Secretary-General), as well as a decision made by NATO or EU decision-making bodies, that contributes toward the peace process in the areas where the mission operates. For ongoing missions, December 31, 2017 is the last date on which annual peace events are recorded.
Annual Conflict Events – this is an annual indicator based on events chronologically recorded in the SIPRI yearbook that have increased the conflict and the conflict potential in the area where the NATO or EU mission has been deployed. Instances include the resumption of hostilities among warring parties, the occurrence of attacks and clashes, eruptions of violence, the killing of civilian, military, and peacemaking personnel, and other violence-related events that contribute toward instability in the mission's area. For ongoing missions, December 31, 2017 is the last date on which annual conflict events are recorded.
Annual Mission-related Events – this is an annual indicator based on events chronologically recorded in the SIPRI yearbook that measures events related to the functioning of the mission: the decision to launch, the actual launch, implementation, transfer of authority and/or mandate, and the transformation and termination of the mission. It also includes events that reflect decisions made by the contributing nations or sponsoring IOs intended to impact the mission's performance (e.g., decisions related to funding, command and control, transformation of the mission mandate and rules, and other similar events). For ongoing missions, December 31, 2017 is the last date on which annual mission-related events are recorded.
Average Annual Fatalities – this indicator reports the average annual number of civilian deaths recorded for the duration of the mission. The data are drawn from the Armed Conflict Database (ACD) managed by the London-based International Institute for Strategic Studies (https://acd.iiss.org/member/datatools.aspx).
Average Annual Mission Casualties – average annual number of deaths among peacebuilding personnel as reported in the SIPRI yearbook/peace operations database for the duration of the mission. The authors have used discretion to determine accuracy in cases of discrepancies in the reported data.
Fatalities Annual Difference – an indicator based on differenced annual data of civilian casualties on the ground for the duration of the mission. The indicator is calculated as follows: Differenced Fatalities = [(Casualties_Y1 − Casualties_Y2) + (Casualties_Y2 − Casualties_Y3) + … + (Casualties_Y(n−1) − Casualties_Yn)] / duration of the mission (in years), so that positive values reflect an annual decline in casualties. It is intended to capture the improvement of the situation on the ground as a result of the presence of the peacebuilding effort.
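As a quick illustration, the following sketch computes the indicator from hypothetical annual fatality counts; note that the sum of year-over-year differences telescopes to the first-year count minus the last-year count.

```python
# Sketch of the Fatalities Annual Difference indicator (hypothetical counts).
fatalities = [1200, 900, 700, 650]   # annual civilian fatalities, year 1..n
duration = len(fatalities)           # mission duration in years

# Year-over-year declines (earlier minus later), summed and averaged.
diffs = [fatalities[i] - fatalities[i + 1] for i in range(duration - 1)]
differenced = sum(diffs) / duration

# The sum telescopes, so this equals (fatalities[0] - fatalities[-1]) / duration.
print(differenced)                   # positive => casualties are declining
```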
Condition Sets: Calibration and Rationale
Annual Operational Budget – the mission budget reflects resources: USD 5 million or less indicates fully out, while USD 100 million or more indicates fully in. A budget of USD 30 million is the watershed borderline of "neither in, nor out." [5-100 million]
Average Annual Mission Personnel – this indicator draws a distinction between larger, well-resourced missions and smaller missions with limited assets. By and large, missions with 20 personnel or fewer are fully out, while those with 20,000 or more are fully in. The borderline (neither in, nor out) is 130 people.
Days to Launch – the speed with which the decision is taken indicates how decision-making operated in the case of the mission. Decision-making that took 5 days or fewer should be fully out (fully in, if the direction of the set is reversed), while decision-making of 150 days or more should be fully in (fully out, reversed). 30 days (1 month) should be the neither in, nor out border.
Number of Contributing Nations – a control indicator that denotes how a high number of contributing nations contributes toward greater legitimacy: 30 or more countries marks fully in, while 5 or fewer nations marks fully out. The "neither fully in, nor fully out" point is at 15 nations.
UN Security Council Resolutions – the total number of UNSC resolutions can vary; fully out is at 0 resolutions, while fully in is at 50 or more. Since most of the missions are shorter, neither fully in, nor fully out would be at 8 UNSC resolutions. [Inductive]
Operational Duration – 1 year (12 months) denotes fully out (i.e., a short-term mission), while 10 years (120 months) denotes fully in; neither in, nor out would be for missions lasting 5 years (60 months). In other words, a decade is too long, a year is too short, and five years is in the middle.
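For readers unfamiliar with fsQCA calibration, the following sketch shows how three anchors like these are typically turned into fuzzy membership scores using Ragin's direct method (log odds of -3, 0, and +3 at the fully-out, crossover, and fully-in anchors). It is a generic Python illustration using the Annual Operational Budget anchors above, not the dataset authors' actual procedure.

```python
import math

def calibrate(x, full_out, crossover, full_in):
    """Direct-method calibration: anchor log odds at -3 (fully out),
    0 (crossover), and +3 (fully in), then map to membership via logistic."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Annual Operational Budget anchors from the text, in USD millions:
# 5 = fully out, 30 = crossover, 100 = fully in.
for budget in (5, 30, 60, 100):
    print(budget, round(calibrate(budget, 5, 30, 100), 3))
# -> 5: ~0.047, 30: 0.5, 60: ~0.783, 100: ~0.953
```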
Outcome Variables: Calibration and Rationale
Annual Peace Events – this variable measures the occurrence of peace-related events: 0 events per annum is fully out; 10 events per annum is fully in. 1 event is neither in, nor out.
Annual Conflict Events – this variable measures the occurrence of conflict-related events: 0 events per annum is fully out; 10 events per annum is fully in. 1 event is neither in, nor out.
Annual Mission-related Events – this variable measures the occurrence of mission-related events: 0 events per annum is fully out; 10 events per annum is fully in. 0.5 events is neither in, nor out.
Average Annual Fatalities – this set measures the average number of annual fatalities for the duration of the mission. Cases with 0 fatalities are fully out; cases with 10,000 fatalities are fully in. 1,000 fatalities represents the "neither in, nor out" value.
Fatalities Annual Difference – this indicator measures the average year-to-year difference in the number of fatalities for the duration of the conflict. -50 casualties is fully out (i.e., an average growth of casualties by 50 per annum), as this reflects low mission efficacy. 500 is fully in; this indicates high efficacy, denoting an average annual decline of casualties by 500 people. If the average number of casualties remains unchanged, then 0 denotes neither in, nor out.
Average Annual Mission Casualties – this indicator measures the average number of annual casualties for the duration of the mission. 0 casualties is fully out; 500 casualties is fully in. 0.5 is neither in, nor out.
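The same hedged calibrate() sketch from the condition-set section applies unchanged to these outcome anchors, for example:

```python
# Usage example reusing calibrate() from the earlier sketch with the
# Average Annual Fatalities anchors given above:
# 0 fully out, 1,000 crossover, 10,000 fully in.
for fatalities in (0, 1_000, 4_000, 10_000):
    print(fatalities, round(calibrate(fatalities, 0, 1_000, 10_000), 3))
```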
Shortly after the comparative analysis of Codding et al. was published, I prepared a comment on the article that I submitted for publication. In response to feedback from the editors, I eventually revised the manuscript substantially. That revised version has now been published. In this paper, I share the original submission of the comment, which focuses on important considerations for future studies of risk-sensitive foraging. Meanwhile, Codding and his colleagues have published a response to my comment. They exhibit some confusion about my position, which they describe as "paradoxical." In a reply to their response, I have therefore added some clarifying remarks at the end of this paper.
The aim of this study is to evaluate the impact of interactive student response software (SRS technology) in large introductory classes in Political Science taught at the University of Cincinnati. Getting students engaged in these classes has been one of the main priorities of the College of Arts and Sciences. This study draws on data from Introduction to International Relations offerings from Fall 2012 to Spring 2018, some of which have used interactive software while others have not used any software. Additionally, some offerings have had an assigned supplemental instructor (SI) while others have not. The overall aim is to evaluate whether these instructional innovations have helped improve student performance in this class. The main hypothesis tested during the study is that the availability of SRS technology tends to improve student performance on exams. The secondary hypothesis is that the availability of more advanced (second-generation) student response technology (such as Echo 360) tends to improve student performance in class in comparison to earlier (first-generation) SRS devices (known as "clickers").
Background and significance
The positive impact of SRS engagement technology on student performance across different disciplines has been well documented in the literature (Marlow et al. 2009; Kam and Sommer 2006; Prezler et al. 2007, and others). Most of the literature focuses on first-generation student response systems, also known as clickers (Elliott 2003; Riebens 2007; Crossgove and Curan 2008; Shapiro 2009). Some of the studies focus on the use of this technology without a control group (Beavers 2010; DeBourgh 2008; Kennedy and Cutts 2005; Sprague and Dahl 2010), while others discuss how personal response software impacts student performance throughout the whole semester (Evans 2012). This study differs from existing ones in several ways. First, by collecting data over a 5-year period, not only can we compare groups of students using SRS systems with those who do not, but we can also compare offerings using first-generation SRS technology (e.g., the "clickers") with those using second-generation SRS software (such as Echo 360) that contains more advanced interactive features. Second, the study allows comparison of the SRS impact on different course components and requirements. Third, it evaluates the impact of the student response system in combination with other techniques used in a large classroom, such as supplemental instruction (SI). This new setting offers valuable insights about the impact of different types of SRS technology and other interactive techniques designed to engage students in large classrooms.
Approach and sources of records
Records of student performance were collected throughout the whole semester for each student. Demographic information for the students enrolled in the class was collected from the course rosters and from the University of Cincinnati's student information system, Catalyst (https://catalyst.uc.edu/). All records are electronic. Those that are not available in Catalyst but are generated as part of the student performance record are currently stored in Excel format by the instructor and researcher on an external USB drive, which is accessible only to the instructor and PI (the same person). No other person has access to the data.
The research does not involve the collection of data or other results from individuals that will be submitted to, or held for inspection by, the FDA. No part of the research involves any data that will be provided (in any form) to a pharmaceutical, medical device or biotech company.
This paper presents a prime aspect of Augmented and Virtual Reality development in the field of healthcare. We explored several recent works and articles, and a comparison between generic application development and immersive technology-based applications is included. The paper discusses practical approaches that can be taken to enhance the effectiveness of such applications.
The resources (infrastructure) to complete this study were provided by the University of Cincinnati's Center for Simulation and Virtual Environment Research (UCSIM), and several experiments and projects in the field of healthcare are used as references for the conclusions.
Abstract: Can a library support an overseas program with a full-time librarian position? Can this position provide distant services successfully through e-learning techniques, social media and other methods? The answer is yes. As many American universities enroll students through a shared or global campus, librarians can play a vital role as the primary information and library services provider. The University of Cincinnati (UC) and Chongqing University, China (CQU) established the first shared engineering programs in China with mandatory co-operative education, the Joint Co-op Institute (JCI), in 2013. Students primarily receive on-campus instruction in China from JCI instructors; however, no UC librarian is onsite to provide dedicated support. In response, UC Libraries developed the new Global Services Librarian position as the lead presence for support of the Libraries’ growing global engagement and partnerships, especially with the JCI. This Librarian provides a full range of services, mostly at a distance, including instruction, outreach, and faculty support. This presentation will describe the development of the Global Services Librarian position, its roles in supporting the JCI, lessons learned in the first year of this position, and how this role could be adapted for other library environments.