CompF7: Reinterpretation and long-term preservation of data and code
Working Group Co-Conveners
Name | Institution | Email
---|---|---
Kyle S Cranmer | NYU | kc90[at]nyu.edu |
Michael Hildreth | Notre Dame | mhildret[at]nd.edu |
Matias Carrasco Kind | Illinois/NCSA | mcarras2[at]illinois.edu |
Description
- Functional areas
- Public data, which comes in many forms (HepData, public likelihoods, CERN Open Data, data for education/outreach, …); see the pyhf sketch after this list
- Tools for generating annotated public data and software
- Tools for combining results across experiments and frontiers
- Tools for archiving and re-running analyses (RECAST/REANA, …); see the container sketch after this list
- Mandate
- Define the stakeholders and consumers of the data and software
- What are the needs/requirements of the stakeholders?
- What resources are needed?
- e.g. long-term storage with external access, infrastructure for preserving executable code, etc.
- metadata infrastructure
- What technologies are available now or will become available, and how are they expected to evolve?
- Topics discussed jointly with CompF5 (End User Analysis):
- Version control
- Containers/VMs
- Proprietary software/licenses
- How do (or will) the stakeholders use these technologies?
- How are other science domains handling this topic?
- What workflows are used to combine results across experiments and frontiers?
- What tools do the stakeholders use, or need, to combine results across experiments and frontiers?
- How are these tools expected to evolve?
- What are other science domains and industry using?
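As an illustration of the "public likelihoods" functional area above, here is a minimal sketch of reusing a JSON-serialized statistical model with pyhf; likelihoods in this format are published on HepData. The filename `workspace.json` and the tested signal-strength value are hypothetical placeholders, not taken from any specific analysis.

```python
# Minimal sketch: reinterpret a published pyhf likelihood.
# "workspace.json" is a hypothetical file standing in for a
# likelihood downloaded from HepData; mu = 1.0 is illustrative.
import json

import pyhf

with open("workspace.json") as f:
    spec = json.load(f)

workspace = pyhf.Workspace(spec)  # validate the published workspace
model = workspace.model()         # build the default measurement's model
data = workspace.data(model)      # observed data plus auxiliary data

# Observed CLs for a signal-strength hypothesis mu = 1.0.
cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(f"CLs(mu=1.0) = {float(cls_obs):.3f}")
```

Because the workspace is a plain JSON document, it can be archived alongside the publication and reinterpreted later without re-running the original analysis.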
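For the archiving and re-running tools above (RECAST/REANA, containers/VMs), below is a minimal sketch, assuming Docker is available, of re-executing a preserved analysis step from a container image. The image reference, mount paths, and entry-point script are hypothetical placeholders.

```python
# Minimal sketch: re-run a preserved analysis step inside a container.
# The image reference and script path are hypothetical; pinning the image
# by digest (not a mutable tag) keeps the archived environment stable.
import os
import subprocess

IMAGE = "gitlab-registry.cern.ch/some-experiment/some-analysis@sha256:<digest>"

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}/inputs:/inputs:ro",  # read-only preserved inputs
        "-v", f"{os.getcwd()}/outputs:/outputs",   # writable results directory
        IMAGE,
        "/analysis/run.sh",                        # entry point archived with the image
    ],
    check=True,  # raise if the preserved workflow step fails
)
```

Systems such as RECAST and REANA build on the same idea, adding workflow descriptions and metadata so that whole analysis pipelines, not just single steps, can be re-executed on demand.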