+.. _dataset-file-upload:
+
File Upload
===========
@@ -129,15 +133,15 @@ The open-source DVUploader tool is a stand-alone command-line Java application t
Usage
~~~~~
-The DVUploader is open source and is available as source, as a Java jar, and with documentation at https://github.com/IQSS/dataverse-uploader. The DVUploader requires Java 1.8+. Users will need to install Java if they don't already have it and then download the DVUploader-v1.0.0.jar file. Users will need to know the URL of the Dataverse installation, the DOI of their existing dataset, and have generated an API Key for the Dataverse installation (an option in the user's profile menu).
+The DVUploader is open source and is available as source, as a Java jar, and with documentation at https://github.com/GlobalDataverseCommunityConsortium/dataverse-uploader. The DVUploader requires Java 1.8+. Users will need to install Java if they don't already have it and then download the latest release of the DVUploader jar file. Users will need to know the URL of the Dataverse installation, the DOI of their existing dataset, and have generated an API Key for the Dataverse installation (an option in the user's profile menu).
Basic usage is to run the command: ::
- java -jar DVUploader-v1.0.0.jar -server=<server URL> -did=<dataset DOI> -key=<API key>
+ java -jar DVUploader-*.jar -server=<server URL> -did=<dataset DOI> -key=<API key>
Additional command line arguments are available to make the DVUploader list what it would do without uploading, limit the number of files it uploads, recurse through sub-directories, verify fixity, exclude files with specific extensions or name patterns, and/or wait longer than 60 seconds for any Dataverse installation ingest lock to clear (e.g. while the previously uploaded file is processed, as discussed in the :ref:`File Handling <file-handling>` section below).
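For example, a run that recurses through sub-directories and verifies fixity might look like the following sketch (the server URL, DOI, API key, and directory name are placeholders, and the option names are illustrative — check the DVUploader documentation for the exact flags supported by your release): ::

   java -jar DVUploader-*.jar -server=https://demo.dataverse.org \
        -did=doi:10.5072/FK2/EXAMPLE -key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
        -recurse -verify mydatadir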
-DVUploader is a community-developed tool, and its creation was primarily supported by the Texas Digital Library. Further information and support for DVUploader can be sought at `the project's GitHub repository `_ .
+DVUploader is a community-developed tool, and its creation was primarily supported by the Texas Digital Library. Further information and support for DVUploader can be sought at `the project's GitHub repository <https://github.com/GlobalDataverseCommunityConsortium/dataverse-uploader>`_.
.. _duplicate-files:
@@ -153,6 +157,19 @@ Beginning with Dataverse Software 5.0, the way a Dataverse installation handles
- If a user attempts to replace a file with another file that has the same checksum, an error message will be displayed and the file will not be able to be replaced.
- If a user attempts to replace a file with a file that has the same checksum as a different file in the dataset, a warning will be displayed.
+BagIt Support
+-------------
+
+BagIt is a set of hierarchical file system conventions designed to support disk-based storage and network transfer of arbitrary digital content. It offers several benefits such as integration with digital libraries, easy implementation, and transfer validation. See `the Wikipedia article <https://en.wikipedia.org/wiki/BagIt>`__ for more information.
+
+If the Dataverse installation you are using has enabled BagIt file handling, then when you upload BagIt files the repository will validate the checksum values listed in each BagIt’s manifest file against the uploaded files and generate errors about any mismatches. The repository collects a limited number of errors per BagIt file (for example, the first five mismatches it finds) before reporting them.
+
+|bagit-image1|
+
+You can fix the errors and reupload the BagIt files.
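+
+For orientation, a bag pairs a ``data/`` directory (the payload) with manifest and tag files at the top level; the checksums the repository validates come from the manifest. A minimal, illustrative layout (the payload file names below are placeholders): ::
+
+   example-bag/
+       bagit.txt                # BagIt version declaration
+       bag-info.txt             # optional bag-level metadata
+       manifest-md5.txt         # one "<checksum>  data/<path>" line per payload file
+       data/
+           survey-results.csv
+           codebook.pdf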
+
+More information on how your admin can enable and configure the BagIt file handler can be found in the :ref:`Installation Guide `.
+
.. _file-handling:
File Handling
@@ -211,6 +228,72 @@ Finally, automating your code can be immensely helpful to the code and research
**Note:** Capturing code dependencies and automating your code will create new files in your directory. Make sure to include them when depositing your dataset.
+Computational Workflow
+----------------------
+
+Computational Workflow Definition
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Computational workflows precisely describe a multi-step process to coordinate multiple computational tasks and their data dependencies that lead to data products in a scientific application. The computational tasks take different forms, such as running code (e.g. Python, C++, MATLAB, R, Julia), invoking a service, calling a command-line tool, accessing a database (e.g. SQL, NoSQL), submitting a job to a compute cloud (e.g. on-premises cloud, AWS, GCP, Azure), and executing data processing scripts or workflows. The following diagram shows an example of a computational workflow with multiple computational tasks.
+
+|cw-image1|
+
+
+FAIR Computational Workflow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FAIR Principles (Findable, Accessible, Interoperable, Reusable) apply to computational workflows (https://doi.org/10.1162/dint_a_00033) in two areas: as FAIR data and as FAIR criteria for workflows as digital objects. In the FAIR data area, "*properly designed workflows contribute to FAIR data principles since they provide the metadata and provenance necessary to describe their data products, and they describe the involved data in a formalized, completely traceable way*" (https://doi.org/10.1162/dint_a_00033). Regarding the FAIR criteria for workflows as digital objects, "*workflows are research products in their own right, encapsulating methodological know-how that is to be found and published, accessed and cited, exchanged and combined with others, and reused as well as adapted*" (https://doi.org/10.1162/dint_a_00033).
+
+How to Create a Computational Workflow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are multiple approaches to creating computational workflows. You may consider standard frameworks and tools such as Common Workflow Language (CWL), Snakemake, Galaxy, Nextflow, Ruffus or *ad hoc* methods using different programming languages (e.g. Python, C++, MATLAB, Julia, R), notebooks (e.g. Jupyter Notebook, R Notebook, and MATLAB Live Script) and command-line interpreters (e.g. Bash). Each computational task is defined differently, but all meet the definition of a computational workflow and all result in data products. You can find a few examples of computational workflows in the following GitHub repositories, where each follows several aspects of FAIR principles:
+
+- Common Workflow Language (`GitHub Repository URL `__)
+- R Notebook (`GitHub Repository URL `__)
+- Jupyter Notebook (`GitHub Repository URL `__)
+- MATLAB Script (`GitHub Repository URL `__)
+
+You are encouraged to review these examples when creating a computational workflow and publishing in a Dataverse repository.
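+
+As a minimal illustration, an *ad hoc* workflow written in Bash might chain a few computational steps and their data dependencies into a single reproducible script (the script and file names below are hypothetical): ::
+
+   #!/bin/bash
+   set -euo pipefail                     # stop if any step fails
+
+   # Step 1: clean the raw input data
+   python clean_data.py raw/measurements.csv > derived/cleaned.csv
+
+   # Step 2: run the analysis on the cleaned data
+   python analyze.py derived/cleaned.csv > derived/results.json
+
+   # Step 3: render figures from the analysis results
+   python make_figures.py derived/results.json --out figures/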
+
+At https://workflows.community, the Workflows Community Initiative offers resources for computational workflows, such as a list of workflow systems (https://workflows.community/systems) and other workflow registries (https://workflows.community/registries). The initiative also helps organize working groups related to workflows research, development and application.
+
+How to Upload Your Computational Workflow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After you :ref:`upload your files <dataset-file-upload>`, you can apply a "Workflow" tag to your workflow files, such as your Snakemake or R Notebook files, so that you and others can find them more easily among your deposit’s other files.
+
+|cw-image3|
+
+|cw-image4|
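+
+If you are scripting your deposit, the same "Workflow" tag can also be applied with the Dataverse installation's native API. The sketch below is illustrative and assumes the file metadata endpoint described in the API Guide; the server URL, API token, and file database ID (``24``) are placeholders: ::
+
+   export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+   export SERVER_URL=https://demo.dataverse.org
+
+   # Add the "Workflow" file tag (category) to the file with database ID 24
+   curl -H "X-Dataverse-key:$API_TOKEN" -X POST \
+        -F 'jsonData={"categories":["Workflow"]}' \
+        "$SERVER_URL/api/files/24/metadata"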
+
+How to Describe Your Computational Workflow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Dataverse installation you are using may have enabled Computational Workflow metadata fields for your use. If so, when :ref:`editing your dataset metadata `, you will see the fields described below.
+
+|cw-image2|
+
+As described in the :ref:`metadata-references` section of the :doc:`/user/appendix`, the three fields are adapted from `Bioschemas Computational Workflow Profile, version 1.0 `__ and `Codemeta `__:
+
+- **Workflow Type**: The kind of Computational Workflow, which is designed to compose and execute a series of computational or data manipulation steps in a scientific application
+- **External Code Repository URL**: A link to another public repository where the un-compiled, human-readable code and related code is also located (e.g., GitHub, GitLab, SVN)
+- **Documentation**: A link (URL) to the documentation or text describing the Computational Workflow and its use
+
+
+How to Search for Computational Workflows
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the search page of the Dataverse repository you are using includes a "Dataset Feature" facet with a Computational Workflows link, you can follow that link to find only datasets that contain computational workflows.
+
+You can also search on the "Workflow Type" facet, if the Dataverse installation has the field enabled, to find datasets that contain certain types of computational workflows, such as workflows written in Common Workflow Language files or Jupyter Notebooks.
+
+|cw-image5|
+
+You can also search for files within datasets that have been tagged as "Workflow" files by clicking the Files checkbox to show only files and using the File Tag facet to show only files tagged as "Workflow".
+
+|cw-image6|
+
Astronomy (FITS)
----------------
@@ -622,6 +705,20 @@ If you deaccession the most recently published version of the dataset but not al
:class: img-responsive
.. |image-file-tree-view| image:: ./img/file-tree-view.png
:class: img-responsive
+.. |cw-image1| image:: ./img/computational-workflow-diagram.png
+ :class: img-responsive
+.. |cw-image2| image:: ./img/computational-workflow-metadata.png
+ :class: img-responsive
+.. |cw-image3| image:: ./img/file-tags-link.png
+ :class: img-responsive
+.. |cw-image4| image:: ./img/file-tags-options.png
+ :class: img-responsive
+.. |cw-image5| image:: ./img/computational-workflow-facets.png
+ :class: img-responsive
+.. |cw-image6| image:: ./img/file-tags-facets.png
+ :class: img-responsive
+.. |bagit-image1| image:: ./img/bagit-handler-errors.png
+ :class: img-responsive
.. _Make Data Count: https://makedatacount.org
.. _Crossref: https://crossref.org
diff --git a/doc/sphinx-guides/source/user/dataverse-management.rst b/doc/sphinx-guides/source/user/dataverse-management.rst
index efe98e8327c..ed90497da8c 100755
--- a/doc/sphinx-guides/source/user/dataverse-management.rst
+++ b/doc/sphinx-guides/source/user/dataverse-management.rst
@@ -44,7 +44,7 @@ To edit your Dataverse collection, navigate to your Dataverse collection's landi
- :ref:`Theme `: upload a logo for your Dataverse collection, add a link to your department or personal website, add a custom footer image, and select colors for your Dataverse collection in order to brand it
- :ref:`Widgets `: get code to add to your website to have your Dataverse collection display on it
- :ref:`Permissions <dataverse-permissions>`: give other users permissions to your Dataverse collection, i.e. can edit datasets, and see which users already have which permissions for your Dataverse collection
-- :ref:`Dataset Templates `: these are useful when you have several datasets that have the same information in multiple metadata fields that you would prefer not to have to keep manually typing in
+- :ref:`Dataset Templates <dataset-templates>`: these are useful when you want to provide custom instructions on how to fill out fields or have several datasets that have the same information in multiple metadata fields that you would prefer not to have to keep manually typing in
- :ref:`Dataset Guestbooks `: allows you to collect data about who is downloading the files from your datasets
- :ref:`Featured Dataverse collections `: if you have one or more Dataverse collections, you can use this option to show them at the top of your Dataverse collection page to help others easily find interesting or important Dataverse collections
- **Delete Dataverse**: you are able to delete your Dataverse collection as long as it is not published and does not have any draft datasets
@@ -52,7 +52,7 @@ To edit your Dataverse collection, navigate to your Dataverse collection's landi
.. _general-information:
General Information
----------------------
+-------------------
The General Information page is how you edit the information you filled in while creating your Dataverse collection. If you need to change or add a contact email address, this is the place to do it. Additionally, you can update the metadata elements used for datasets within the Dataverse collection, change which metadata fields are hidden, required, or optional, and update the facets you would like displayed for browsing the Dataverse collection. If you plan on using templates, you need to select the metadata fields on the General Information page.
@@ -60,8 +60,8 @@ Tip: The metadata fields you select as required will appear on the Create Datase
.. _theme:
-Theme
----------
+Theme
+-----
The Theme feature provides you with a way to customize the look of your Dataverse collection. You can:
@@ -77,7 +77,7 @@ Supported image types for logo images and footer images are JPEG, TIFF, or PNG a
.. _dataverse-widgets:
Widgets
---------------
+-------
The Widgets feature provides you with code for you to put on your personal website to have your Dataverse collection displayed there. There are two types of Widgets for a Dataverse collection, a Dataverse collection Search Box widget and a Dataverse collection Listing widget. Once a Dataverse collection has been published, from the Widgets tab on the Dataverse collection's Theme + Widgets page, it is possible to copy the code snippets for the widget(s) you would like to add to your website. If you need to adjust the height of the widget on your website, you may do so by editing the `heightPx=500` parameter in the code snippet.
@@ -94,7 +94,7 @@ The Dataverse Collection Listing Widget provides a listing of all your Dataverse
.. _openscholar-dataverse-level:
Adding Widgets to an OpenScholar Website
-******************************************
+****************************************
#. Log in to your OpenScholar website
#. Either build a new page or navigate to the page you would like to use to show the Dataverse collection widgets.
#. Click on the Settings Cog and select Layout
@@ -102,8 +102,8 @@ Adding Widgets to an OpenScholar Website
.. _dataverse-permissions:
-Roles & Permissions
----------------------
+Roles & Permissions
+-------------------
Dataverse installation user accounts can be granted roles that define which actions they are allowed to take on specific Dataverse collections, datasets, and/or files. Each role comes with a set of permissions, which define the specific actions that users may take.
Roles and permissions may also be granted to groups. Groups can be defined as a collection of Dataverse installation user accounts, a collection of IP addresses (e.g. all users of a library's computers), or a collection of all users who log in using a particular institutional login (e.g. everyone who logs in with a particular university's account credentials).
@@ -127,7 +127,7 @@ When you access a Dataverse collection's permissions page, you will see three se
Please note that even on a newly created Dataverse collection, you may see user and groups have already been granted role(s) if your installation has ``:InheritParentRoleAssignments`` set. For more on this setting, see the :doc:`/installation/config` section of the Installation Guide.
Setting Access Configurations
-*******************************
+*****************************
Under the Permissions tab, you can click the "Edit Access" button to open a box where you can choose who is able to add to your Dataverse collection and what permissions are granted to those who add to your Dataverse collection.
@@ -140,7 +140,7 @@ The second question on this page allows you to choose the role (and thus the per
Both of these settings can be changed at any time.
Assigning Roles to Users and Groups
-*************************************
+***********************************
Under the Users/Groups tab, you can add, edit, or remove the roles granted to users and groups on your Dataverse collection. A role is a set of permissions granted to a user or group when they're using your Dataverse collection. For example, giving your research assistant the "Contributor" role would give them the following self-explanatory permissions on your Dataverse collection and all datasets within your Dataverse collection: "ViewUnpublishedDataset", "DownloadFile", "EditDataset", and "DeleteDatasetDraft". They would, however, lack the "PublishDataset" permission, and thus would be unable to publish datasets on your Dataverse collection. If you wanted to give them that permission, you would give them a role with that permission, like the Curator role. Users and groups can hold multiple roles at the same time if needed. Roles can be removed at any time. All roles and their associated permissions are listed under the "Roles" tab of the same page.
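
If you manage many collections or users, the same assignment can be scripted. The sketch below is illustrative and assumes the role-assignment endpoint described in the API Guide; the server URL, API token, collection alias, and username are placeholders, and role aliases (e.g. ``curator``) can differ from the display names shown in the UI: ::

   export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
   export SERVER_URL=https://demo.dataverse.org

   # Grant the Curator role on the collection "mycollection" to the user @rsmith
   curl -H "X-Dataverse-key:$API_TOKEN" -X POST -H "Content-Type: application/json" \
        -d '{"assignee": "@rsmith", "role": "curator"}' \
        "$SERVER_URL/api/dataverses/mycollection/assignments"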
@@ -155,15 +155,16 @@ Note: If you need to assign a role to ALL user accounts in a Dataverse installat
.. _dataset-templates:
Dataset Templates
--------------------
+-----------------
-Templates are useful when you have several datasets that have the same information in multiple metadata fields that you would prefer not to have to keep manually typing in, or if you want to use a custom set of Terms of Use and Access for multiple datasets in a Dataverse collection. In Dataverse Software 4.0+, templates are created at the Dataverse collection level, can be deleted (so it does not show for future datasets), set to default (not required), or can be copied so you do not have to start over when creating a new template with similar metadata from another template. When a template is deleted, it does not impact the datasets that have used the template already.
+Templates are useful when you want to provide custom instructions on how to fill out a field, have several datasets that have the same information in multiple metadata fields that you would prefer not to have to keep manually typing in, or if you want to use a custom set of Terms of Use and Access for multiple datasets in a Dataverse collection. In Dataverse Software 4.0+, templates are created at the Dataverse collection level, can be deleted (so it does not show for future datasets), set to default (not required), or can be copied so you do not have to start over when creating a new template with similar metadata from another template. When a template is deleted, it does not impact the datasets that have used the template already.
How do you create a template?
#. Navigate to your Dataverse collection, click on the Edit Dataverse button and select Dataset Templates.
#. Once you have clicked on Dataset Templates, you will be brought to the Dataset Templates page. On this page, you can 1) decide to use the dataset templates from your parent Dataverse collection 2) create a new dataset template or 3) do both.
#. Click on the Create Dataset Template to get started. You will see that the template is the same as the create dataset page with an additional field at the top of the page to add a name for the template.
+#. To add custom instructions, click on "(None - click to add)" and enter the instructions you wish users to see. If you wish to edit existing instructions, click on them to make the text editable.
#. After adding information into the metadata fields you have information for and clicking Save and Add Terms, you will be brought to the page where you can add custom Terms of Use and Access. If you do not need custom Terms of Use and Access, click the Save Dataset Template, and only the metadata fields will be saved.
#. After clicking Save Dataset Template, you will be brought back to the Manage Dataset Templates page and should see your template listed there now with the make default, edit, view, or delete options.
#. A Dataverse collection does not have to have a default template and users can select which template they would like to use while on the Create Dataset page.
@@ -174,7 +175,7 @@ How do you create a template?
.. _dataset-guestbooks:
Dataset Guestbooks
------------------------------
+------------------
Guestbooks allow you to collect data about who is downloading the files from your datasets. You can decide to collect account information (username, given name & last name, affiliation, etc.) as well as create custom questions (e.g., What do you plan to use this data for?). You are also able to download the data collected from the enabled guestbooks as CSV files to store and use outside of the Dataverse installation.
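
If you prefer to automate this, guestbook responses can also be retrieved as CSV via the native API. The sketch below is illustrative and assumes the guestbook-responses endpoint described in the API Guide; the server URL, API token, collection alias, and guestbook ID are placeholders: ::

   export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
   export SERVER_URL=https://demo.dataverse.org

   # Download all responses for guestbook 1 in the collection "mycollection" as CSV
   curl -H "X-Dataverse-key:$API_TOKEN" \
        "$SERVER_URL/api/dataverses/mycollection/guestbookResponses?guestbookId=1" \
        -o guestbook-responses.csv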
@@ -227,7 +228,7 @@ Similarly to dataset linking, Dataverse collection linking allows a Dataverse co
If you need to have a Dataverse collection linked to your Dataverse collection, please contact the support team for the Dataverse installation you are using.
Publish Your Dataverse Collection
-=================================================================
+=================================
Once your Dataverse collection is ready to go public, go to your Dataverse collection page, click on the "Publish" button on the right
hand side of the page. A pop-up will appear to confirm that you are ready to actually Publish, since once a Dataverse collection
diff --git a/doc/sphinx-guides/source/user/img/DatasetDiagram.png b/doc/sphinx-guides/source/user/img/DatasetDiagram.png
old mode 100755
new mode 100644
index 45a21456a08..471a54c2d83
Binary files a/doc/sphinx-guides/source/user/img/DatasetDiagram.png and b/doc/sphinx-guides/source/user/img/DatasetDiagram.png differ
diff --git a/doc/sphinx-guides/source/user/img/bagit-handler-errors.png b/doc/sphinx-guides/source/user/img/bagit-handler-errors.png
new file mode 100644
index 00000000000..d4059ca53c9
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/bagit-handler-errors.png differ
diff --git a/doc/sphinx-guides/source/user/img/computational-workflow-diagram.png b/doc/sphinx-guides/source/user/img/computational-workflow-diagram.png
new file mode 100644
index 00000000000..efb073737dd
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/computational-workflow-diagram.png differ
diff --git a/doc/sphinx-guides/source/user/img/computational-workflow-facets.png b/doc/sphinx-guides/source/user/img/computational-workflow-facets.png
new file mode 100644
index 00000000000..c790e1d5ffb
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/computational-workflow-facets.png differ
diff --git a/doc/sphinx-guides/source/user/img/computational-workflow-metadata.png b/doc/sphinx-guides/source/user/img/computational-workflow-metadata.png
new file mode 100644
index 00000000000..2c477e75b1e
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/computational-workflow-metadata.png differ
diff --git a/doc/sphinx-guides/source/user/img/file-tags-facets.png b/doc/sphinx-guides/source/user/img/file-tags-facets.png
new file mode 100644
index 00000000000..ce2a9bd72a8
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/file-tags-facets.png differ
diff --git a/doc/sphinx-guides/source/user/img/file-tags-link.png b/doc/sphinx-guides/source/user/img/file-tags-link.png
new file mode 100644
index 00000000000..c0496a4e1ba
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/file-tags-link.png differ
diff --git a/doc/sphinx-guides/source/user/img/file-tags-options.png b/doc/sphinx-guides/source/user/img/file-tags-options.png
new file mode 100644
index 00000000000..4af196c690e
Binary files /dev/null and b/doc/sphinx-guides/source/user/img/file-tags-options.png differ
diff --git a/doc/sphinx-guides/source/versions.rst b/doc/sphinx-guides/source/versions.rst
index f46b9477d92..1cbd785b5dd 100755
--- a/doc/sphinx-guides/source/versions.rst
+++ b/doc/sphinx-guides/source/versions.rst
@@ -6,7 +6,8 @@ Dataverse Software Documentation Versions
This list provides a way to refer to the documentation for previous versions of the Dataverse Software. In order to learn more about the updates delivered from one version to another, visit the `Releases `__ page in our GitHub repo.
-- 5.11.1
+- 5.12
+- `5.11.1 `__
- `5.11 `__
- `5.10.1 `__
- `5.10 `__
diff --git a/downloads/download.sh b/downloads/download.sh
index 3d37d9f0940..7b9de0397cb 100755
--- a/downloads/download.sh
+++ b/downloads/download.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-curl -L -O https://s3-eu-west-1.amazonaws.com/payara.fish/Payara+Downloads/5.2021.6/payara-5.2021.6.zip
+curl -L -O https://s3-eu-west-1.amazonaws.com/payara.fish/Payara+Downloads/5.2022.3/payara-5.2022.3.zip
curl -L -O https://archive.apache.org/dist/lucene/solr/8.11.1/solr-8.11.1.tgz
curl -L -O https://search.maven.org/remotecontent?filepath=org/jboss/weld/weld-osgi-bundle/2.2.10.Final/weld-osgi-bundle-2.2.10.Final-glassfish4.jar
curl -s -L http://sourceforge.net/projects/schemaspy/files/schemaspy/SchemaSpy%205.0.0/schemaSpy_5.0.0.jar/download > schemaSpy_5.0.0.jar
diff --git a/local_lib/com/apicatalog/titanium-json-ld/1.3.0-SNAPSHOT/titanium-json-ld-1.3.0-SNAPSHOT.jar b/local_lib/com/apicatalog/titanium-json-ld/1.3.0-SNAPSHOT/titanium-json-ld-1.3.0-SNAPSHOT.jar
new file mode 100644
index 00000000000..ee499ae4b76
Binary files /dev/null and b/local_lib/com/apicatalog/titanium-json-ld/1.3.0-SNAPSHOT/titanium-json-ld-1.3.0-SNAPSHOT.jar differ
diff --git a/modules/dataverse-parent/pom.xml b/modules/dataverse-parent/pom.xml
index 22ea30795ba..8fe611d7716 100644
--- a/modules/dataverse-parent/pom.xml
+++ b/modules/dataverse-parent/pom.xml
@@ -129,7 +129,7 @@
- 5.11.1
+ 5.12
11
UTF-8
@@ -146,11 +146,11 @@
-Duser.timezone=${project.timezone} -Dfile.encoding=${project.build.sourceEncoding} -Duser.language=${project.language} -Duser.region=${project.region}
- 5.2021.6
- 42.3.5
+ 5.2022.3
+ 42.5.0
8.11.1
- 1.11.762
- 0.157.0
+ 1.12.290
+ 0.177.0
8.0.0
@@ -164,7 +164,7 @@
1.15.0
- 0.4.1
+ 2.10.1
4.13.1
5.7.0
diff --git a/pom.xml b/pom.xml
index ce9f1c4b63d..6faba5086be 100644
--- a/pom.xml
+++ b/pom.xml
@@ -24,7 +24,7 @@
1.20.1
0.8.7
5.2.1
- 2.3.0
+ 2.4.1
-
+
@@ -112,12 +112,12 @@
com.apicatalog
titanium-json-ld
- 0.8.6
+ 1.3.0-SNAPSHOT
com.google.code.gson
gson
- 2.2.4
+ 2.8.9
compile
@@ -142,7 +142,7 @@
org.mindrot
jbcrypt
- 0.3m
+ 0.4
org.postgresql
@@ -347,7 +347,7 @@
org.jsoup
jsoup
- 1.14.2
+ 1.15.3
io.searchbox
@@ -357,7 +357,7 @@
commons-codec
commons-codec
- 1.9
+ 1.15
@@ -380,7 +380,7 @@
com.nimbusds
oauth2-oidc-sdk
- 9.9.1
+ 9.41.1
@@ -463,7 +463,7 @@
org.duracloud
common
- 7.1.0
+ 7.1.1
org.slf4j
@@ -478,7 +478,7 @@
org.duracloud
storeclient
- 7.1.0
+ 7.1.1
org.slf4j
@@ -516,7 +516,19 @@
google-cloud-storage
-
+
+
+
+ com.auth0
+ java-jwt
+ 3.19.1
+
+
+
+ io.github.erdtman
+ java-json-canonicalization
+ 1.1
+
@@ -601,9 +613,9 @@
test
- org.microbean
- microbean-microprofile-config
- ${microbean-mpconfig.version}
+ io.smallrye.config
+ smallrye-config
+ ${smallrye-mpconfig.version}
test
@@ -641,10 +653,17 @@
**/*.xml
**/firstNames/*.*
**/*.xsl
- **/*.properties
**/services/*
+
+ src/main/resources
+
+ true
+
+ **/*.properties
+
+
diff --git a/scripts/api/data/metadatablocks/citation.tsv b/scripts/api/data/metadatablocks/citation.tsv
index 1b14f9d0c14..29d121aae16 100644
--- a/scripts/api/data/metadatablocks/citation.tsv
+++ b/scripts/api/data/metadatablocks/citation.tsv
@@ -1,84 +1,84 @@
-#metadataBlock name dataverseAlias displayName blockURI
- citation Citation Metadata https://dataverse.org/schema/citation/
+#metadataBlock name dataverseAlias displayName blockURI
+ citation Citation Metadata https://dataverse.org/schema/citation/
#datasetField name title description watermark fieldType displayOrder displayFormat advancedSearchField allowControlledVocabulary allowmultiples facetable displayoncreate required parent metadatablock_id termURI
- title Title Full title by which the Dataset is known. Enter title... text 0 TRUE FALSE FALSE FALSE TRUE TRUE citation http://purl.org/dc/terms/title
- subtitle Subtitle A secondary title used to amplify or state certain limitations on the main title. text 1 FALSE FALSE FALSE FALSE FALSE FALSE citation
- alternativeTitle Alternative Title A title by which the work is commonly referred, or an abbreviation of the title. text 2 FALSE FALSE FALSE FALSE FALSE FALSE citation http://purl.org/dc/terms/alternative
- alternativeURL Alternative URL A URL where the dataset can be viewed, such as a personal or project website. Enter full URL, starting with http:// url 3 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE citation https://schema.org/distribution
- otherId Other ID Another unique identifier that identifies this Dataset (e.g., producer's or another repository's number). none 4 : FALSE FALSE TRUE FALSE FALSE FALSE citation
- otherIdAgency Agency Name of agency which generated this identifier. text 5 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE otherId citation
- otherIdValue Identifier Other identifier that corresponds to this Dataset. text 6 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE otherId citation
- author Author The person(s), corporate body(ies), or agency(ies) responsible for creating the work. none 7 FALSE FALSE TRUE FALSE TRUE TRUE citation http://purl.org/dc/terms/creator
- authorName Name The author's Family Name, Given Name or the name of the organization responsible for this Dataset. FamilyName, GivenName or Organization text 8 #VALUE TRUE FALSE FALSE TRUE TRUE TRUE author citation
- authorAffiliation Affiliation The organization with which the author is affiliated. text 9 (#VALUE) TRUE FALSE FALSE TRUE TRUE FALSE author citation
- authorIdentifierScheme Identifier Scheme Name of the identifier scheme (ORCID, ISNI). text 10 - #VALUE: FALSE TRUE FALSE FALSE TRUE FALSE author citation http://purl.org/spar/datacite/AgentIdentifierScheme
- authorIdentifier Identifier Uniquely identifies an individual author or organization, according to various schemes. text 11 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE author citation http://purl.org/spar/datacite/AgentIdentifier
- datasetContact Contact The contact(s) for this Dataset. none 12 FALSE FALSE TRUE FALSE TRUE TRUE citation
- datasetContactName Name The contact's Family Name, Given Name or the name of the organization. FamilyName, GivenName or Organization text 13 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE datasetContact citation
- datasetContactAffiliation Affiliation The organization with which the contact is affiliated. text 14 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE datasetContact citation
- datasetContactEmail E-mail The e-mail address(es) of the contact(s) for the Dataset. This will not be displayed. email 15 #EMAIL FALSE FALSE FALSE FALSE TRUE TRUE datasetContact citation
- dsDescription Description A summary describing the purpose, nature, and scope of the Dataset. none 16 FALSE FALSE TRUE FALSE TRUE TRUE citation
- dsDescriptionValue Text A summary describing the purpose, nature, and scope of the Dataset. textbox 17 #VALUE TRUE FALSE FALSE FALSE TRUE TRUE dsDescription citation
- dsDescriptionDate Date In cases where a Dataset contains more than one description (for example, one might be supplied by the data producer and another prepared by the data repository where the data are deposited), the date attribute is used to distinguish between the two descriptions. The date attribute follows the ISO convention of YYYY-MM-DD. YYYY-MM-DD date 18 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE dsDescription citation
- subject Subject Domain-specific Subject Categories that are topically relevant to the Dataset. text 19 TRUE TRUE TRUE TRUE TRUE TRUE citation http://purl.org/dc/terms/subject
- keyword Keyword Key terms that describe important aspects of the Dataset. none 20 FALSE FALSE TRUE FALSE TRUE FALSE citation
- keywordValue Term Key terms that describe important aspects of the Dataset. Can be used for building keyword indexes and for classification and retrieval purposes. A controlled vocabulary can be employed. The vocab attribute is provided for specification of the controlled vocabulary in use, such as LCSH, MeSH, or others. The vocabURI attribute specifies the location for the full controlled vocabulary. text 21 #VALUE TRUE FALSE FALSE TRUE TRUE FALSE keyword citation
- keywordVocabulary Vocabulary For the specification of the keyword controlled vocabulary in use, such as LCSH, MeSH, or others. text 22 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE keyword citation
- keywordVocabularyURI Vocabulary URL Keyword vocabulary URL points to the web presence that describes the keyword vocabulary, if appropriate. Enter an absolute URL where the keyword vocabulary web site is found, such as http://www.my.org. Enter full URL, starting with http:// url 23 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE keyword citation
- topicClassification Topic Classification The classification field indicates the broad important topic(s) and subjects that the data cover. Library of Congress subject terms may be used here. none 24 FALSE FALSE TRUE FALSE FALSE FALSE citation
- topicClassValue Term Topic or Subject term that is relevant to this Dataset. text 25 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE topicClassification citation
- topicClassVocab Vocabulary Provided for specification of the controlled vocabulary in use, e.g., LCSH, MeSH, etc. text 26 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE topicClassification citation
- topicClassVocabURI Vocabulary URL Specifies the URL location for the full controlled vocabulary. Enter full URL, starting with http:// url 27 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE topicClassification citation
- publication Related Publication Publications that use the data from this Dataset. The full list of Related Publications will be displayed on the metadata tab. none 28 FALSE FALSE TRUE FALSE TRUE FALSE citation http://purl.org/dc/terms/isReferencedBy
- publicationCitation Citation The full bibliographic citation for this related publication. textbox 29 #VALUE TRUE FALSE FALSE FALSE TRUE FALSE publication citation http://purl.org/dc/terms/bibliographicCitation
- publicationIDType ID Type The type of digital identifier used for this publication (e.g., Digital Object Identifier (DOI)). text 30 #VALUE: TRUE TRUE FALSE FALSE TRUE FALSE publication citation http://purl.org/spar/datacite/ResourceIdentifierScheme
- publicationIDNumber ID Number The identifier for the selected ID type. text 31 #VALUE TRUE FALSE FALSE FALSE TRUE FALSE publication citation http://purl.org/spar/datacite/ResourceIdentifier
- publicationURL URL Link to the publication web page (e.g., journal article page, archive record page, or other). Enter full URL, starting with http:// url 32 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE publication citation https://schema.org/distribution
- notesText Notes Additional important information about the Dataset. textbox 33 FALSE FALSE FALSE FALSE TRUE FALSE citation
- language Language Language of the Dataset text 34 TRUE TRUE TRUE TRUE FALSE FALSE citation http://purl.org/dc/terms/language
- producer Producer Person or organization with the financial or administrative responsibility over this Dataset none 35 FALSE FALSE TRUE FALSE FALSE FALSE citation
- producerName Name Producer name FamilyName, GivenName or Organization text 36 #VALUE TRUE FALSE FALSE TRUE FALSE TRUE producer citation
- producerAffiliation Affiliation The organization with which the producer is affiliated. text 37 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE producer citation
- producerAbbreviation Abbreviation The abbreviation by which the producer is commonly known. (ex. IQSS, ICPSR) text 38 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE producer citation
- producerURL URL Producer URL points to the producer's web presence, if appropriate. Enter an absolute URL where the producer's web site is found, such as http://www.my.org. Enter full URL, starting with http:// url 39 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE producer citation
- producerLogoURL Logo URL URL for the producer's logo, which points to this producer's web-accessible logo image. Enter an absolute URL where the producer's logo image is found, such as http://www.my.org/images/logo.gif. Enter full URL for image, starting with http:// url 40 FALSE FALSE FALSE FALSE FALSE FALSE producer citation
- productionDate Production Date Date when the data collection or other materials were produced (not distributed, published or archived). YYYY-MM-DD date 41 TRUE FALSE FALSE TRUE FALSE FALSE citation
- productionPlace Production Place The location where the data collection and any other related materials were produced. text 42 FALSE FALSE FALSE FALSE FALSE FALSE citation
- contributor Contributor The organization or person responsible for either collecting, managing, or otherwise contributing in some form to the development of the resource. none 43 : FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/contributor
- contributorType Type The type of contributor of the resource. text 44 #VALUE TRUE TRUE FALSE TRUE FALSE FALSE contributor citation
- contributorName Name The Family Name, Given Name or organization name of the contributor. FamilyName, GivenName or Organization text 45 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE contributor citation
- grantNumber Grant Information Grant Information none 46 : FALSE FALSE TRUE FALSE FALSE FALSE citation https://schema.org/sponsor
- grantNumberAgency Grant Agency Grant Number Agency text 47 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE grantNumber citation
- grantNumberValue Grant Number The grant or contract number of the project that sponsored the effort. text 48 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE grantNumber citation
- distributor Distributor The organization designated by the author or producer to generate copies of the particular work including any necessary editions or revisions. none 49 FALSE FALSE TRUE FALSE FALSE FALSE citation
- distributorName Name Distributor name FamilyName, GivenName or Organization text 50 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE distributor citation
- distributorAffiliation Affiliation The organization with which the distributor contact is affiliated. text 51 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
- distributorAbbreviation Abbreviation The abbreviation by which this distributor is commonly known (e.g., IQSS, ICPSR). text 52 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
- distributorURL URL Distributor URL points to the distributor's web presence, if appropriate. Enter an absolute URL where the distributor's web site is found, such as http://www.my.org. Enter full URL, starting with http:// url 53 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
- distributorLogoURL Logo URL URL of the distributor's logo, which points to this distributor's web-accessible logo image. Enter an absolute URL where the distributor's logo image is found, such as http://www.my.org/images/logo.gif. Enter full URL for image, starting with http:// url 54 FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
- distributionDate Distribution Date Date that the work was made available for distribution/presentation. YYYY-MM-DD date 55 TRUE FALSE FALSE TRUE FALSE FALSE citation
- depositor Depositor The person (Family Name, Given Name) or the name of the organization that deposited this Dataset to the repository. text 56 FALSE FALSE FALSE FALSE FALSE FALSE citation
- dateOfDeposit Deposit Date Date that the Dataset was deposited into the repository. YYYY-MM-DD date 57 FALSE FALSE FALSE TRUE FALSE FALSE citation http://purl.org/dc/terms/dateSubmitted
- timePeriodCovered Time Period Covered Time period to which the data refer. This item reflects the time period covered by the data, not the dates of coding or making documents machine-readable or the dates the data were collected. Also known as span. none 58 ; FALSE FALSE TRUE FALSE FALSE FALSE citation https://schema.org/temporalCoverage
- timePeriodCoveredStart Start Start date which reflects the time period covered by the data, not the dates of coding or making documents machine-readable or the dates the data were collected. YYYY-MM-DD date 59 #NAME: #VALUE TRUE FALSE FALSE TRUE FALSE FALSE timePeriodCovered citation
- timePeriodCoveredEnd End End date which reflects the time period covered by the data, not the dates of coding or making documents machine-readable or the dates the data were collected. YYYY-MM-DD date 60 #NAME: #VALUE TRUE FALSE FALSE TRUE FALSE FALSE timePeriodCovered citation
- dateOfCollection Date of Collection Contains the date(s) when the data were collected. none 61 ; FALSE FALSE TRUE FALSE FALSE FALSE citation
- dateOfCollectionStart Start Date when the data collection started. YYYY-MM-DD date 62 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE dateOfCollection citation
- dateOfCollectionEnd End Date when the data collection ended. YYYY-MM-DD date 63 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE dateOfCollection citation
- kindOfData Kind of Data Type of data included in the file: survey data, census/enumeration data, aggregate data, clinical data, event/transaction data, program source code, machine-readable text, administrative records data, experimental data, psychological test, textual data, coded textual, coded documents, time budget diaries, observation data/ratings, process-produced data, or other. text 64 TRUE FALSE TRUE TRUE FALSE FALSE citation http://rdf-vocabulary.ddialliance.org/discovery#kindOfData
- series Series Information about the Dataset series. none 65 : FALSE FALSE FALSE FALSE FALSE FALSE citation
- seriesName Name Name of the dataset series to which the Dataset belongs. text 66 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE series citation
- seriesInformation Information History of the series and summary of those features that apply to the series as a whole. textbox 67 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE series citation
- software Software Information about the software used to generate the Dataset. none 68 , FALSE FALSE TRUE FALSE FALSE FALSE citation https://www.w3.org/TR/prov-o/#wasGeneratedBy
- softwareName Name Name of software used to generate the Dataset. text 69 #VALUE FALSE TRUE FALSE FALSE FALSE FALSE software citation
- softwareVersion Version Version of the software used to generate the Dataset. text 70 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE software citation
- relatedMaterial Related Material Any material related to this Dataset. textbox 71 FALSE FALSE TRUE FALSE FALSE FALSE citation
- relatedDatasets Related Datasets Any Datasets that are related to this Dataset, such as previous research on this subject. textbox 72 FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/relation
- otherReferences Other References Any references that would serve as background or supporting material to this Dataset. text 73 FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/references
- dataSources Data Sources List of books, articles, serials, or machine-readable data files that served as the sources of the data collection. textbox 74 FALSE FALSE TRUE FALSE FALSE FALSE citation https://www.w3.org/TR/prov-o/#wasDerivedFrom
- originOfSources Origin of Sources For historical materials, information about the origin of the sources and the rules followed in establishing the sources should be specified. textbox 75 FALSE FALSE FALSE FALSE FALSE FALSE citation
- characteristicOfSources Characteristic of Sources Noted Assessment of characteristics and source material. textbox 76 FALSE FALSE FALSE FALSE FALSE FALSE citation
- accessToSources Documentation and Access to Sources Level of documentation of the original sources. textbox 77 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ title Title The main title of the Dataset text 0 TRUE FALSE FALSE FALSE TRUE TRUE citation http://purl.org/dc/terms/title
+ subtitle Subtitle A secondary title that amplifies or states certain limitations on the main title text 1 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ alternativeTitle Alternative Title Either 1) a title commonly used to refer to the Dataset or 2) an abbreviation of the main title text 2 FALSE FALSE FALSE FALSE FALSE FALSE citation http://purl.org/dc/terms/alternative
+ alternativeURL Alternative URL Another URL where one can view or access the data in the Dataset, e.g. a project or personal webpage https:// url 3 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE citation https://schema.org/distribution
+ otherId Other Identifier Another unique identifier for the Dataset (e.g. producer's or another repository's identifier) none 4 : FALSE FALSE TRUE FALSE FALSE FALSE citation
+ otherIdAgency Agency The name of the agency that generated the other identifier text 5 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE otherId citation
+ otherIdValue Identifier Another identifier that uniquely identifies the Dataset text 6 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE otherId citation
+ author Author The entity, e.g. a person or organization, that created the Dataset none 7 FALSE FALSE TRUE FALSE TRUE TRUE citation http://purl.org/dc/terms/creator
+ authorName Name The name of the author, such as the person's name or the name of an organization 1) Family Name, Given Name or 2) Organization XYZ text 8 #VALUE TRUE FALSE FALSE TRUE TRUE TRUE author citation
+ authorAffiliation Affiliation The name of the entity affiliated with the author, e.g. an organization's name Organization XYZ text 9 (#VALUE) TRUE FALSE FALSE TRUE TRUE FALSE author citation
+ authorIdentifierScheme Identifier Type The type of identifier that uniquely identifies the author (e.g. ORCID, ISNI) text 10 - #VALUE: FALSE TRUE FALSE FALSE TRUE FALSE author citation http://purl.org/spar/datacite/AgentIdentifierScheme
+ authorIdentifier Identifier Uniquely identifies the author when paired with an identifier type text 11 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE author citation http://purl.org/spar/datacite/AgentIdentifier
+ datasetContact Point of Contact The entity, e.g. a person or organization, that users of the Dataset can contact with questions none 12 FALSE FALSE TRUE FALSE TRUE TRUE citation
+ datasetContactName Name The name of the point of contact, e.g. the person's name or the name of an organization 1) FamilyName, GivenName or 2) Organization text 13 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE datasetContact citation
+ datasetContactAffiliation Affiliation The name of the entity affiliated with the point of contact, e.g. an organization's name Organization XYZ text 14 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE datasetContact citation
+ datasetContactEmail E-mail The point of contact's email address name@email.xyz email 15 #EMAIL FALSE FALSE FALSE FALSE TRUE TRUE datasetContact citation
+ dsDescription Description A summary describing the purpose, nature, and scope of the Dataset none 16 FALSE FALSE TRUE FALSE TRUE TRUE citation
+ dsDescriptionValue Text A summary describing the purpose, nature, and scope of the Dataset textbox 17 #VALUE TRUE FALSE FALSE FALSE TRUE TRUE dsDescription citation
+ dsDescriptionDate Date The date when the description was added to the Dataset. If the Dataset contains more than one description, e.g. the data producer supplied one description and the data repository supplied another, this date is used to distinguish between the descriptions YYYY-MM-DD date 18 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE dsDescription citation
+ subject Subject The area of study relevant to the Dataset text 19 TRUE TRUE TRUE TRUE TRUE TRUE citation http://purl.org/dc/terms/subject
+ keyword Keyword A key term that describes an important aspect of the Dataset and information about any controlled vocabulary used none 20 FALSE FALSE TRUE FALSE TRUE FALSE citation
+ keywordValue Term A key term that describes important aspects of the Dataset text 21 #VALUE TRUE FALSE FALSE TRUE TRUE FALSE keyword citation
+ keywordVocabulary Controlled Vocabulary Name The controlled vocabulary used for the keyword term (e.g. LCSH, MeSH) text 22 (#VALUE) FALSE FALSE FALSE FALSE TRUE FALSE keyword citation
+ keywordVocabularyURI Controlled Vocabulary URL The URL where one can access information about the term's controlled vocabulary https:// url 23 #VALUE FALSE FALSE FALSE FALSE TRUE FALSE keyword citation
+ topicClassification Topic Classification Indicates a broad, important topic or subject that the Dataset covers and information about any controlled vocabulary used none 24 FALSE FALSE TRUE FALSE FALSE FALSE citation
+ topicClassValue Term A topic or subject term text 25 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE topicClassification citation
+ topicClassVocab Controlled Vocabulary Name The controlled vocabulary used for the keyword term (e.g. LCSH, MeSH) text 26 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE topicClassification citation
+ topicClassVocabURI Controlled Vocabulary URL The URL where one can access information about the term's controlled vocabulary https:// url 27 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE topicClassification citation
+ publication Related Publication The article or report that uses the data in the Dataset. The full list of related publications will be displayed on the metadata tab none 28 FALSE FALSE TRUE FALSE TRUE FALSE citation http://purl.org/dc/terms/isReferencedBy
+ publicationCitation Citation The full bibliographic citation for the related publication textbox 29 #VALUE TRUE FALSE FALSE FALSE TRUE FALSE publication citation http://purl.org/dc/terms/bibliographicCitation
+ publicationIDType Identifier Type The type of identifier that uniquely identifies a related publication text 30 #VALUE: TRUE TRUE FALSE FALSE TRUE FALSE publication citation http://purl.org/spar/datacite/ResourceIdentifierScheme
+ publicationIDNumber Identifier The identifier for a related publication text 31 #VALUE TRUE FALSE FALSE FALSE TRUE FALSE publication citation http://purl.org/spar/datacite/ResourceIdentifier
+ publicationURL URL The URL form of the identifier entered in the Identifier field, e.g. the DOI URL if a DOI was entered in the Identifier field. Used to display what was entered in the ID Type and ID Number fields as a link. If what was entered in the Identifier field has no URL form, the URL of the publication webpage is used, e.g. a journal article webpage https:// url 32 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE publication citation https://schema.org/distribution
+ notesText Notes Additional information about the Dataset textbox 33 FALSE FALSE FALSE FALSE TRUE FALSE citation
+ language Language A language that the Dataset's files are written in text 34 TRUE TRUE TRUE TRUE FALSE FALSE citation http://purl.org/dc/terms/language
+ producer Producer The entity, such as a person or organization, managing the finances or other administrative processes involved in the creation of the Dataset none 35 FALSE FALSE TRUE FALSE FALSE FALSE citation
+ producerName Name The name of the entity, e.g. the person's name or the name of an organization 1) FamilyName, GivenName or 2) Organization text 36 #VALUE TRUE FALSE FALSE TRUE FALSE TRUE producer citation
+ producerAffiliation Affiliation The name of the entity affiliated with the producer, e.g. an organization's name Organization XYZ text 37 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE producer citation
+ producerAbbreviation Abbreviated Name The producer's abbreviated name (e.g. IQSS, ICPSR) text 38 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE producer citation
+ producerURL URL The URL of the producer's website https:// url 39 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE producer citation
+ producerLogoURL Logo URL The URL of the producer's logo https:// url 40 FALSE FALSE FALSE FALSE FALSE FALSE producer citation
+ productionDate Production Date The date when the data were produced (not distributed, published, or archived) YYYY-MM-DD date 41 TRUE FALSE FALSE TRUE FALSE FALSE citation
+ productionPlace Production Location The location where the data and any related materials were produced or collected text 42 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ contributor Contributor The entity, such as a person or organization, responsible for collecting, managing, or otherwise contributing to the development of the Dataset none 43 : FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/contributor
+ contributorType Type Indicates the type of contribution made to the dataset text 44 #VALUE TRUE TRUE FALSE TRUE FALSE FALSE contributor citation
+ contributorName Name The name of the contributor, e.g. the person's name or the name of an organization 1) FamilyName, GivenName or 2) Organization text 45 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE contributor citation
+ grantNumber Funding Information Information about the Dataset's financial support none 46 : FALSE FALSE TRUE FALSE FALSE FALSE citation https://schema.org/sponsor
+ grantNumberAgency Agency The agency that provided financial support for the Dataset Organization XYZ text 47 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE grantNumber citation
+ grantNumberValue Identifier The grant identifier or contract identifier of the agency that provided financial support for the Dataset text 48 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE grantNumber citation
+ distributor Distributor The entity, such as a person or organization, designated to generate copies of the Dataset, including any editions or revisions none 49 FALSE FALSE TRUE FALSE FALSE FALSE citation
+ distributorName Name The name of the entity, e.g. the person's name or the name of an organization 1) FamilyName, GivenName or 2) Organization text 50 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE distributor citation
+ distributorAffiliation Affiliation The name of the entity affiliated with the distributor, e.g. an organization's name Organization XYZ text 51 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
+ distributorAbbreviation Abbreviated Name The distributor's abbreviated name (e.g. IQSS, ICPSR) text 52 (#VALUE) FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
+ distributorURL URL The URL of the distributor's webpage https:// url 53 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
+ distributorLogoURL Logo URL The URL of the distributor's logo image, used to show the image on the Dataset's page https:// url 54 FALSE FALSE FALSE FALSE FALSE FALSE distributor citation
+ distributionDate Distribution Date The date when the Dataset was made available for distribution/presentation YYYY-MM-DD date 55 TRUE FALSE FALSE TRUE FALSE FALSE citation
+ depositor Depositor The entity, such as a person or organization, that deposited the Dataset in the repository 1) FamilyName, GivenName or 2) Organization text 56 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ dateOfDeposit Deposit Date The date when the Dataset was deposited into the repository YYYY-MM-DD date 57 FALSE FALSE FALSE TRUE FALSE FALSE citation http://purl.org/dc/terms/dateSubmitted
+ timePeriodCovered Time Period The time period that the data refer to. Also known as span. This is the time period covered by the data, not the dates of coding, collecting data, or making documents machine-readable none 58 ; FALSE FALSE TRUE FALSE FALSE FALSE citation https://schema.org/temporalCoverage
+ timePeriodCoveredStart Start Date The start date of the time period that the data refer to YYYY-MM-DD date 59 #NAME: #VALUE TRUE FALSE FALSE TRUE FALSE FALSE timePeriodCovered citation
+ timePeriodCoveredEnd End Date The end date of the time period that the data refer to YYYY-MM-DD date 60 #NAME: #VALUE TRUE FALSE FALSE TRUE FALSE FALSE timePeriodCovered citation
+ dateOfCollection Date of Collection The dates when the data were collected or generated none 61 ; FALSE FALSE TRUE FALSE FALSE FALSE citation
+ dateOfCollectionStart Start Date The date when the data collection started YYYY-MM-DD date 62 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE dateOfCollection citation
+ dateOfCollectionEnd End Date The date when the data collection ended YYYY-MM-DD date 63 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE dateOfCollection citation
+ kindOfData Data Type The type of data included in the files (e.g. survey data, clinical data, or machine-readable text) text 64 TRUE FALSE TRUE TRUE FALSE FALSE citation http://rdf-vocabulary.ddialliance.org/discovery#kindOfData
+ series Series Information about the dataset series to which the Dataset belongs none 65 : FALSE FALSE FALSE FALSE FALSE FALSE citation
+ seriesName Name The name of the dataset series text 66 #VALUE TRUE FALSE FALSE TRUE FALSE FALSE series citation
+ seriesInformation Information Can include 1) a history of the series and 2) a summary of features that apply to the series textbox 67 #VALUE FALSE FALSE FALSE FALSE FALSE FALSE series citation
+ software Software Information about the software used to generate the Dataset none 68 , FALSE FALSE TRUE FALSE FALSE FALSE citation https://www.w3.org/TR/prov-o/#wasGeneratedBy
+ softwareName Name The name of software used to generate the Dataset text 69 #VALUE FALSE TRUE FALSE FALSE FALSE FALSE software citation
+ softwareVersion Version The version of the software used to generate the Dataset, e.g. 4.11 text 70 #NAME: #VALUE FALSE FALSE FALSE FALSE FALSE FALSE software citation
+ relatedMaterial Related Material Information, such as a persistent ID or citation, about the material related to the Dataset, such as appendices or sampling information available outside of the Dataset textbox 71 FALSE FALSE TRUE FALSE FALSE FALSE citation
+ relatedDatasets Related Dataset Information, such as a persistent ID or citation, about a related dataset, such as previous research on the Dataset's subject textbox 72 FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/relation
+ otherReferences Other Reference Information, such as a persistent ID or citation, about another type of resource that provides background or supporting material to the Dataset text 73 FALSE FALSE TRUE FALSE FALSE FALSE citation http://purl.org/dc/terms/references
+ dataSources Data Source Information, such as a persistent ID or citation, about sources of the Dataset (e.g. a book, article, serial, or machine-readable data file) textbox 74 FALSE FALSE TRUE FALSE FALSE FALSE citation https://www.w3.org/TR/prov-o/#wasDerivedFrom
+ originOfSources Origin of Historical Sources For historical sources, the origin and any rules followed in establishing them as sources textbox 75 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ characteristicOfSources Characteristic of Sources Characteristics not already noted elsewhere textbox 76 FALSE FALSE FALSE FALSE FALSE FALSE citation
+ accessToSources Documentation and Access to Sources 1) Methods or procedures for accessing data sources and 2) any special permissions needed for access textbox 77 FALSE FALSE FALSE FALSE FALSE FALSE citation
#controlledVocabulary DatasetField Value identifier displayOrder
subject Agricultural Sciences D01 0
subject Arts and Humanities D0 1
@@ -111,6 +111,7 @@
publicationIDType upc 14
publicationIDType url 15
publicationIDType urn 16
+ publicationIDType DASH-NRS 17
contributorType Data Collector 0
contributorType Data Curator 1
contributorType Data Manager 2
diff --git a/scripts/api/data/metadatablocks/computational_workflow.tsv b/scripts/api/data/metadatablocks/computational_workflow.tsv
new file mode 100644
index 00000000000..51b69cfdb80
--- /dev/null
+++ b/scripts/api/data/metadatablocks/computational_workflow.tsv
@@ -0,0 +1,21 @@
+#metadataBlock name dataverseAlias displayName
+ computationalworkflow Computational Workflow Metadata
+#datasetField name title description watermark fieldType displayOrder displayFormat advancedSearchField allowControlledVocabulary allowmultiples facetable displayoncreate required parent metadatablock_id termURI
+ workflowType Computational Workflow Type The kind of Computational Workflow, which is designed to compose and execute a series of computational or data manipulation steps in a scientific application text 0 TRUE TRUE TRUE TRUE TRUE FALSE computationalworkflow
+ workflowCodeRepository External Code Repository URL A link to the repository where the un-compiled, human readable code and related code is located (e.g. GitHub, GitLab, SVN) https://... url 1 FALSE FALSE TRUE FALSE TRUE FALSE computationalworkflow
+ workflowDocumentation Documentation A link (URL) to the documentation or text describing the Computational Workflow and its use textbox 2 FALSE FALSE TRUE FALSE TRUE FALSE computationalworkflow
+#controlledVocabulary DatasetField Value identifier displayOrder
+ workflowType Common Workflow Language (CWL) workflowtype_cwl 1
+ workflowType Workflow Description Language (WDL) workflowtype_wdl 2
+ workflowType Nextflow workflowtype_nextflow 3
+ workflowType Snakemake workflowtype_snakemake 4
+ workflowType Ruffus workflowtype_ruffus 5
+ workflowType DAGMan workflowtype_dagman 6
+ workflowType Jupyter Notebook workflowtype_jupyter 7
+ workflowType R Notebook workflowtype_rstudio 8
+ workflowType MATLAB Script workflowtype_matlab 9
+ workflowType Bash Script workflowtype_bash 10
+ workflowType Makefile workflowtype_makefile 11
+ workflowType Other Python-based workflow workflowtype_otherpython 12
+ workflowType Other R-based workflow workflowtype_otherrbased 13
+ workflowType Other workflowtype_other 100
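
The new Computational Workflow block is loaded the same way as the existing blocks invoked in scripts/api/setup-datasetfields.sh below. A minimal sketch, assuming a local installation listening on port 8080: ::

    curl http://localhost:8080/api/admin/datasetfield/load -X POST \
         --data-binary @data/metadatablocks/computational_workflow.tsv \
         -H "Content-type: text/tab-separated-values"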
diff --git a/scripts/api/data/workflows/internal-ldnannounce-workflow.json b/scripts/api/data/workflows/internal-ldnannounce-workflow.json
new file mode 100644
index 00000000000..9cf058b68a1
--- /dev/null
+++ b/scripts/api/data/workflows/internal-ldnannounce-workflow.json
@@ -0,0 +1,16 @@
+{
+ "name": "LDN Announce workflow",
+ "steps": [
+ {
+ "provider":":internal",
+ "stepType":"ldnannounce",
+ "parameters": {
+ "stepName":"LDN Announce"
+ },
+ "requiredSettings": {
+ ":LDNAnnounceRequiredFields": "string",
+ ":LDNTarget": "string"
+ }
+ }
+ ]
+}
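
As a rough sketch of how an internal workflow definition like this could be registered and made a default via the admin workflows API (the PostPublishDataset trigger and the returned workflow id of 1 are assumptions for illustration, and the :LDNTarget and :LDNAnnounceRequiredFields settings listed under requiredSettings must be configured before the step can run): ::

    # register the workflow definition
    curl -X POST -H "Content-type: application/json" \
         --upload-file internal-ldnannounce-workflow.json \
         http://localhost:8080/api/admin/workflows
    # make it the default for the PostPublishDataset trigger (assumes the workflow was assigned id 1)
    curl -X PUT -d 1 http://localhost:8080/api/admin/workflows/default/PostPublishDataset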
diff --git a/scripts/api/setup-datasetfields.sh b/scripts/api/setup-datasetfields.sh
index 0d2d60b9538..0d79176c099 100755
--- a/scripts/api/setup-datasetfields.sh
+++ b/scripts/api/setup-datasetfields.sh
@@ -7,3 +7,4 @@ curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @da
curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @data/metadatablocks/astrophysics.tsv -H "Content-type: text/tab-separated-values"
curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @data/metadatablocks/biomedical.tsv -H "Content-type: text/tab-separated-values"
curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @data/metadatablocks/journals.tsv -H "Content-type: text/tab-separated-values"
+
diff --git a/scripts/vagrant/setup.sh b/scripts/vagrant/setup.sh
index e4915ae9ffa..0af4afb22af 100644
--- a/scripts/vagrant/setup.sh
+++ b/scripts/vagrant/setup.sh
@@ -51,7 +51,7 @@ SOLR_USER=solr
echo "Ensuring Unix user '$SOLR_USER' exists"
useradd $SOLR_USER || :
DOWNLOAD_DIR='/dataverse/downloads'
-PAYARA_ZIP="$DOWNLOAD_DIR/payara-5.2021.6.zip"
+PAYARA_ZIP="$DOWNLOAD_DIR/payara-5.2022.3.zip"
SOLR_TGZ="$DOWNLOAD_DIR/solr-8.11.1.tgz"
if [ ! -f $PAYARA_ZIP ] || [ ! -f $SOLR_TGZ ]; then
echo "Couldn't find $PAYARA_ZIP or $SOLR_TGZ! Running download script...."
diff --git a/src/main/java/edu/harvard/iq/dataverse/ControlledVocabularyValue.java b/src/main/java/edu/harvard/iq/dataverse/ControlledVocabularyValue.java
index 213d648da71..181d939f4a1 100644
--- a/src/main/java/edu/harvard/iq/dataverse/ControlledVocabularyValue.java
+++ b/src/main/java/edu/harvard/iq/dataverse/ControlledVocabularyValue.java
@@ -148,7 +148,7 @@ public static String getLocaleStrValue(String strValue, String fieldTypeName, St
return sendDefault ? strValue : null;
}
} catch (MissingResourceException | NullPointerException e) {
- logger.warning("Error finding" + "controlledvocabulary." + fieldTypeName + "." + key + " in " + ((locale==null)? "defaultLang" : locale.getLanguage()) + " : " + e.getLocalizedMessage());
+ logger.warning("Error finding " + "controlledvocabulary." + fieldTypeName + "." + key + " in " + ((locale==null)? "defaultLang" : locale.getLanguage()) + " : " + e.getLocalizedMessage());
return sendDefault ? strValue : null;
}
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/DataFile.java b/src/main/java/edu/harvard/iq/dataverse/DataFile.java
index b21ab5fb7ba..cb43dff0e20 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DataFile.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DataFile.java
@@ -605,7 +605,11 @@ public void setFilesize(long filesize) {
* @return
*/
public String getFriendlySize() {
- return FileSizeChecker.bytesToHumanReadable(filesize);
+ if (filesize != null) {
+ return FileSizeChecker.bytesToHumanReadable(filesize);
+ } else {
+ return BundleUtil.getStringFromBundle("file.sizeNotAvailable");
+ }
}
public boolean isRestricted() {
diff --git a/src/main/java/edu/harvard/iq/dataverse/Dataset.java b/src/main/java/edu/harvard/iq/dataverse/Dataset.java
index c60ea7020bd..a4f82d41bac 100644
--- a/src/main/java/edu/harvard/iq/dataverse/Dataset.java
+++ b/src/main/java/edu/harvard/iq/dataverse/Dataset.java
@@ -33,8 +33,8 @@
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
-import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.StringUtil;
+import edu.harvard.iq.dataverse.util.SystemConfig;
/**
*
@@ -152,6 +152,19 @@ public void setCitationDateDatasetFieldType(DatasetFieldType citationDateDataset
this.citationDateDatasetFieldType = citationDateDatasetFieldType;
}
+
+ @ManyToOne
+ @JoinColumn(name="template_id",nullable = true)
+ private Template template;
+
+ public Template getTemplate() {
+ return template;
+ }
+
+ public void setTemplate(Template template) {
+ this.template = template;
+ }
+
public Dataset() {
DatasetVersion datasetVersion = new DatasetVersion();
datasetVersion.setDataset(this);
@@ -743,6 +756,11 @@ public void setHarvestIdentifier(String harvestIdentifier) {
this.harvestIdentifier = harvestIdentifier;
}
+ public String getLocalURL() {
+ //Assumes GlobalId != null
+ return SystemConfig.getDataverseSiteUrlStatic() + "/dataset.xhtml?persistentId=" + this.getGlobalId().asString();
+ }
+
public String getRemoteArchiveURL() {
if (isHarvested()) {
if (HarvestingClient.HARVEST_STYLE_DATAVERSE.equals(this.getHarvestedFrom().getHarvestStyle())) {
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetFieldServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/DatasetFieldServiceBean.java
index 192052c68c5..9bc5a5c09a7 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetFieldServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetFieldServiceBean.java
@@ -672,6 +672,10 @@ public List<String> getVocabScripts( Map<Long, JsonObject> cvocConf) {
for(JsonObject jo: cvocConf.values()) {
scripts.add(jo.getString("js-url"));
}
+ String customScript = settingsService.getValueForKey(SettingsServiceBean.Key.ControlledVocabularyCustomJavaScript);
+ if (customScript != null && !customScript.isEmpty()) {
+ scripts.add(customScript);
+ }
return Arrays.asList(scripts.toArray(new String[0]));
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetLock.java b/src/main/java/edu/harvard/iq/dataverse/DatasetLock.java
index d0ba86ab68e..7b857545c20 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetLock.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetLock.java
@@ -77,7 +77,10 @@ public enum Reason {
/** DCM (rsync) upload in progress */
DcmUpload,
-
+
+ /** Globus upload in progress */
+ GlobusUpload,
+
/** Tasks handled by FinalizeDatasetPublicationCommand:
Registering PIDs for DS and DFs and/or file validation */
finalizePublication,
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetPage.java b/src/main/java/edu/harvard/iq/dataverse/DatasetPage.java
index 1a2bcee4b12..0a8db69bf5b 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetPage.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetPage.java
@@ -63,6 +63,8 @@
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
@@ -111,6 +113,7 @@
import edu.harvard.iq.dataverse.engine.command.impl.SubmitDatasetForReviewCommand;
import edu.harvard.iq.dataverse.externaltools.ExternalTool;
import edu.harvard.iq.dataverse.externaltools.ExternalToolServiceBean;
+import edu.harvard.iq.dataverse.globus.GlobusServiceBean;
import edu.harvard.iq.dataverse.export.SchemaDotOrgExporter;
import edu.harvard.iq.dataverse.externaltools.ExternalToolHandler;
import edu.harvard.iq.dataverse.makedatacount.MakeDataCountLoggingServiceBean;
@@ -249,6 +252,8 @@ public enum DisplayMode {
LicenseServiceBean licenseServiceBean;
@Inject
DataFileCategoryServiceBean dataFileCategoryService;
+ @Inject
+ GlobusServiceBean globusService;
private Dataset dataset = new Dataset();
@@ -332,7 +337,7 @@ public void setSelectedHostDataverse(Dataverse selectedHostDataverse) {
private Boolean hasRsyncScript = false;
private Boolean hasTabular = false;
-
+
/**
* If the dataset version has at least one tabular file. The "hasTabular"
@@ -343,6 +348,10 @@ public void setSelectedHostDataverse(Dataverse selectedHostDataverse) {
private boolean versionHasTabular = false;
private boolean showIngestSuccess;
+
+ private Boolean archivable = null;
+ private Boolean versionArchivable = null;
+ private Boolean someVersionArchived = null;
public boolean isShowIngestSuccess() {
return showIngestSuccess;
@@ -412,7 +421,7 @@ public Boolean isHasValidTermsOfAccess() {
private Boolean hasRestrictedFiles = null;
- public Boolean isHasRestrictedFiles(){
+ public boolean isHasRestrictedFiles(){
//cache in page to limit processing
if (hasRestrictedFiles != null){
return hasRestrictedFiles;
@@ -1185,7 +1194,7 @@ public String getComputeUrl(FileMetadata metadata) {
} catch (IOException e) {
logger.info("DatasetPage: Failed to get storageIO");
}
- if (settingsWrapper.isTrueForKey(SettingsServiceBean.Key.PublicInstall, false)) {
+ if (isHasPublicStore()) {
return settingsWrapper.getValueForKey(SettingsServiceBean.Key.ComputeBaseUrl) + "?" + this.getPersistentId() + "=" + swiftObject.getSwiftFileName();
}
@@ -1762,6 +1771,7 @@ public void handleChangeButton() {
workingVersion.initDefaultValues(licenseServiceBean.getDefault());
updateDatasetFieldInputLevels();
}
+ dataset.setTemplate(selectedTemplate);
/*
Issue 8646: necessary for the access popup which is shared by the dataset page and the file page
*/
@@ -1821,15 +1831,21 @@ public void updateOwnerDataverse() {
// initiate from scratch: (isolate the creation of a new dataset in its own method?)
init(true);
- // rebuild the bred crumbs display:
+ // rebuild the bread crumbs display:
dataverseHeaderFragment.initBreadcrumbs(dataset);
}
}
public boolean rsyncUploadSupported() {
- return settingsWrapper.isRsyncUpload() && DatasetUtil.isAppropriateStorageDriver(dataset);
+ return settingsWrapper.isRsyncUpload() && DatasetUtil.isRsyncAppropriateStorageDriver(dataset);
}
+
+ public boolean globusUploadSupported() {
+ return settingsWrapper.isGlobusUpload() && settingsWrapper.isGlobusEnabledStorageDriver(dataset.getEffectiveStorageDriverId());
+ }
+
+
private String init(boolean initFull) {
@@ -1999,10 +2015,10 @@ private String init(boolean initFull) {
}
} catch (RuntimeException ex) {
logger.warning("Problem getting rsync script(RuntimeException): " + ex.getLocalizedMessage());
- FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Problem getting rsync script:", ex.getLocalizedMessage()));
+ FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Problem getting rsync script:", ex.getLocalizedMessage()));
} catch (CommandException cex) {
logger.warning("Problem getting rsync script (Command Exception): " + cex.getLocalizedMessage());
- FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Problem getting rsync script:", cex.getLocalizedMessage()));
+ FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Problem getting rsync script:", cex.getLocalizedMessage()));
}
}
@@ -2049,6 +2065,8 @@ private String init(boolean initFull) {
selectedTemplate = testT;
}
}
+ //Initialize with the default if there is one
+ dataset.setTemplate(selectedTemplate);
workingVersion = dataset.getEditVersion(selectedTemplate, null);
updateDatasetFieldInputLevels();
} else {
@@ -2056,7 +2074,7 @@ private String init(boolean initFull) {
updateDatasetFieldInputLevels();
}
- if (settingsWrapper.isTrueForKey(SettingsServiceBean.Key.PublicInstall, false)){
+ if (isHasPublicStore()){
JH.addMessage(FacesMessage.SEVERITY_WARN, BundleUtil.getStringFromBundle("dataset.message.label.fileAccess"),
BundleUtil.getStringFromBundle("dataset.message.publicInstall"));
}
@@ -2169,6 +2187,10 @@ private void displayLockInfo(Dataset dataset) {
BundleUtil.getStringFromBundle("file.rsyncUpload.inProgressMessage.details"));
lockedDueToDcmUpload = true;
}
+ if (dataset.isLockedFor(DatasetLock.Reason.GlobusUpload)) {
+ JH.addMessage(FacesMessage.SEVERITY_WARN, BundleUtil.getStringFromBundle("file.globusUpload.inProgressMessage.summary"),
+ BundleUtil.getStringFromBundle("file.globusUpload.inProgressMessage.details"));
+ }
//This is a hack to remove dataset locks for File PID registration if
//the dataset is released
//in testing we had cases where datasets with 1000 files were remaining locked after being published successfully
@@ -2748,7 +2770,7 @@ public String updateCurrentVersion() {
*/
try {
updateVersion = commandEngine.submit(archiveCommand);
- if (updateVersion.getArchivalCopyLocation() != null) {
+ if (!updateVersion.getArchivalCopyLocationStatus().equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)) {
successMsg = BundleUtil.getStringFromBundle("datasetversion.update.archive.success");
} else {
errorMsg = BundleUtil.getStringFromBundle("datasetversion.update.archive.failure");
@@ -2890,7 +2912,7 @@ public String editFileMetadata(){
public String deleteDatasetVersion() {
DeleteDatasetVersionCommand cmd;
-
+
Map<Long, String> deleteStorageLocations = datafileService.getPhysicalFilesToDelete(dataset.getLatestVersion());
boolean deleteCommandSuccess = false;
try {
@@ -2902,7 +2924,7 @@ public String deleteDatasetVersion() {
JH.addMessage(FacesMessage.SEVERITY_FATAL, BundleUtil.getStringFromBundle("dataset.message.deleteFailure"));
logger.severe(ex.getMessage());
}
-
+
if (deleteCommandSuccess && !deleteStorageLocations.isEmpty()) {
datafileService.finalizeFileDeletes(deleteStorageLocations);
}
@@ -3566,6 +3588,7 @@ public String save() {
if (editMode == EditMode.CREATE) {
//Lock the metadataLanguage once created
dataset.setMetadataLanguage(getEffectiveMetadataLanguage());
+ //ToDo - could drop use of selectedTemplate and just use the persistent dataset.getTemplate()
if ( selectedTemplate != null ) {
if ( isSessionUserAuthenticated() ) {
cmd = new CreateNewDatasetCommand(dataset, dvRequestService.getDataverseRequest(), false, selectedTemplate);
@@ -5016,7 +5039,7 @@ public boolean isFileAccessRequestMultiButtonRequired(){
}
for (FileMetadata fmd : workingVersion.getFileMetadatas()){
//Change here so that if all restricted files have pending requests there's no Request Button
- if ((!this.fileDownloadHelper.canDownloadFile(fmd) && (fmd.getDataFile().getFileAccessRequesters() == null
+ if ((!this.fileDownloadHelper.canDownloadFile(fmd) && (fmd.getDataFile().getFileAccessRequesters() == null
|| ( fmd.getDataFile().getFileAccessRequesters() != null
&& !fmd.getDataFile().getFileAccessRequesters().contains((AuthenticatedUser)session.getUser()))))){
return true;
@@ -5550,17 +5573,20 @@ public void refreshPaginator() {
*/
public void archiveVersion(Long id) {
if (session.getUser() instanceof AuthenticatedUser) {
- AuthenticatedUser au = ((AuthenticatedUser) session.getUser());
-
DatasetVersion dv = datasetVersionService.retrieveDatasetVersionByVersionId(id).getDatasetVersion();
- String className = settingsService.getValueForKey(SettingsServiceBean.Key.ArchiverClassName);
+ String className = settingsWrapper.getValueForKey(SettingsServiceBean.Key.ArchiverClassName, null);
AbstractSubmitToArchiveCommand cmd = ArchiverUtil.createSubmitToArchiveCommand(className, dvRequestService.getDataverseRequest(), dv);
if (cmd != null) {
try {
DatasetVersion version = commandEngine.submit(cmd);
- logger.info("Archived to " + version.getArchivalCopyLocation());
+ if (!version.getArchivalCopyLocationStatus().equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)) {
+ logger.info(
+ "DatasetVersion id=" + version.getId() + " submitted to Archive, status: " + dv.getArchivalCopyLocationStatus());
+ } else {
+ logger.severe("Error submitting version " + version.getId() + " due to conflict/error at Archive");
+ }
if (version.getArchivalCopyLocation() != null) {
- resetVersionTabList();
+ setVersionTabList(resetVersionTabList());
this.setVersionTabListForPostLoad(getVersionTabList());
JsfHelper.addSuccessMessage(BundleUtil.getStringFromBundle("datasetversion.archive.success"));
} else {
@@ -5577,6 +5603,70 @@ public void archiveVersion(Long id) {
}
}
}
+
+ public boolean isArchivable() {
+ if (archivable == null) {
+ archivable = false;
+ String className = settingsWrapper.getValueForKey(SettingsServiceBean.Key.ArchiverClassName, null);
+ if (className != null) {
+ try {
+ Class<?> clazz = Class.forName(className);
+ Method m = clazz.getMethod("isArchivable", Dataset.class, SettingsWrapper.class);
+ Object[] params = { dataset, settingsWrapper };
+ archivable = ((Boolean) m.invoke(null, params) == true);
+ } catch (ClassNotFoundException | IllegalAccessException | IllegalArgumentException
+ | InvocationTargetException | NoSuchMethodException | SecurityException e) {
+ logger.warning("Failed to call isArchivable on configured archiver class: " + className);
+ e.printStackTrace();
+ }
+ }
+ }
+ return archivable;
+ }
+
+ public boolean isVersionArchivable() {
+ if (versionArchivable == null) {
+ // If this dataset isn't in an archivable collection return false
+ versionArchivable = false;
+ if (isArchivable()) {
+ boolean checkForArchivalCopy = false;
+ // Otherwise, we need to know if the archiver is single-version-only
+ // If it is, we have to check for an existing archived version to answer the
+ // question
+ String className = settingsWrapper.getValueForKey(SettingsServiceBean.Key.ArchiverClassName, null);
+ if (className != null) {
+ try {
+ Class<?> clazz = Class.forName(className);
+ Method m = clazz.getMethod("isSingleVersion", SettingsWrapper.class);
+ Object[] params = { settingsWrapper };
+ checkForArchivalCopy = (Boolean) m.invoke(null, params);
+
+ if (checkForArchivalCopy) {
+ // If we have to check (single version archiving), we can't allow archiving if
+ // one version is already archived (or attempted - any non-null status)
+ versionArchivable = !isSomeVersionArchived();
+ } else {
+ // If we allow multiple versions or didn't find one that has had archiving run
+ // on it, we can archive, so return true
+ versionArchivable = true;
+ }
+ } catch (ClassNotFoundException | IllegalAccessException | IllegalArgumentException
+ | InvocationTargetException | NoSuchMethodException | SecurityException e) {
+ logger.warning("Failed to call isSingleVersion on configured archiver class: " + className);
+ e.printStackTrace();
+ }
+ }
+ }
+ }
+ return versionArchivable;
+ }
+
+ public boolean isSomeVersionArchived() {
+ if (someVersionArchived == null) {
+ someVersionArchived = ArchiverUtil.isSomeVersionArchived(dataset);
+ }
+ return someVersionArchived;
+ }
private static Date getFileDateToCompare(FileMetadata fileMetadata) {
DataFile datafile = fileMetadata.getDataFile();
@@ -5637,9 +5727,7 @@ public void explore(ExternalTool externalTool) {
apiToken.setTokenString(privUrl.getToken());
}
ExternalToolHandler externalToolHandler = new ExternalToolHandler(externalTool, dataset, apiToken, session.getLocaleCode());
- String toolUrl = externalToolHandler.getToolUrlWithQueryParams();
- logger.fine("Exploring with " + toolUrl);
- PrimeFaces.current().executeScript("window.open('"+toolUrl + "', target='_blank');");
+ PrimeFaces.current().executeScript(externalToolHandler.getExploreScript());
}
private FileMetadata fileMetadataForAction;
@@ -5679,7 +5767,7 @@ public boolean isFileDeleted (DataFile dataFile) {
return dataFile.getDeleted();
}
-
+
public String getEffectiveMetadataLanguage() {
return getEffectiveMetadataLanguage(false);
}
@@ -5690,16 +5778,16 @@ public String getEffectiveMetadataLanguage(boolean ofParent) {
}
return mdLang;
}
-
+
public String getLocaleDisplayName(String code) {
String displayName = settingsWrapper.getBaseMetadataLanguageMap(false).get(code);
if(displayName==null && !code.equals(DvObjectContainer.UNDEFINED_METADATA_LANGUAGE_CODE)) {
//Default (for cases such as :when a Dataset has a metadatalanguage code but :MetadataLanguages is no longer defined).
- displayName = new Locale(code).getDisplayName();
+ displayName = new Locale(code).getDisplayName();
}
- return displayName;
+ return displayName;
}
-
+
public Set<Map.Entry<String, String>> getMetadataLanguages() {
return settingsWrapper.getBaseMetadataLanguageMap(false).entrySet();
}
@@ -5711,7 +5799,7 @@ public List<String> getVocabScripts() {
public String getFieldLanguage(String languages) {
return fieldService.getFieldLanguage(languages,session.getLocaleCode());
}
-
+
public void setExternalStatus(String status) {
try {
dataset = commandEngine.submit(new SetCurationStatusCommand(dvRequestService.getDataverseRequest(), dataset, status));
@@ -5942,7 +6030,7 @@ public void validateTerms(FacesContext context, UIComponent component, Object va
}
}
}
-
+
public boolean downloadingRestrictedFiles() {
if (fileMetadataForAction != null) {
return fileMetadataForAction.isRestricted();
@@ -5954,4 +6042,24 @@ public boolean downloadingRestrictedFiles() {
}
return false;
}
+
+
+ //Determines whether this Dataset uses a public store and therefore doesn't support embargoed or restricted files
+ public boolean isHasPublicStore() {
+ return settingsWrapper.isTrueForKey(SettingsServiceBean.Key.PublicInstall, StorageIO.isPublicStore(dataset.getEffectiveStorageDriverId()));
+ }
+
+ public void startGlobusTransfer() {
+ ApiToken apiToken = null;
+ User user = session.getUser();
+ if (user instanceof AuthenticatedUser) {
+ apiToken = authService.findApiTokenByUser((AuthenticatedUser) user);
+ } else if (user instanceof PrivateUrlUser) {
+ PrivateUrlUser privateUrlUser = (PrivateUrlUser) user;
+ PrivateUrl privUrl = privateUrlService.getPrivateUrlFromDatasetId(privateUrlUser.getDatasetId());
+ apiToken = new ApiToken();
+ apiToken.setTokenString(privUrl.getToken());
+ }
+ PrimeFaces.current().executeScript(globusService.getGlobusDownloadScript(dataset, apiToken));
+ }
}
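
The isArchivable()/isVersionArchivable() checks added above reflect on whatever class is named in the :ArchiverClassName setting. A hedged example of pointing that setting at one of the bundled archiver commands through the admin settings API (the DuraCloud class name is only an illustration; substitute whichever archiver the installation actually uses): ::

    curl -X PUT -d "edu.harvard.iq.dataverse.engine.command.impl.DuraCloudSubmitToArchiveCommand" \
         http://localhost:8080/api/admin/settings/:ArchiverClassName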
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java
index b9b54fb6216..91ec050fe5c 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java
@@ -16,23 +16,17 @@
import edu.harvard.iq.dataverse.engine.command.impl.FinalizeDatasetPublicationCommand;
import edu.harvard.iq.dataverse.engine.command.impl.GetDatasetStorageSizeCommand;
import edu.harvard.iq.dataverse.export.ExportService;
+import edu.harvard.iq.dataverse.globus.GlobusServiceBean;
import edu.harvard.iq.dataverse.harvest.server.OAIRecordServiceBean;
import edu.harvard.iq.dataverse.search.IndexServiceBean;
import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
import edu.harvard.iq.dataverse.workflows.WorkflowComment;
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
+
+import java.io.*;
import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
+import java.util.*;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
@@ -96,6 +90,12 @@ public class DatasetServiceBean implements java.io.Serializable {
@EJB
SystemConfig systemConfig;
+ @EJB
+ GlobusServiceBean globusServiceBean;
+
+ @EJB
+ UserNotificationServiceBean userNotificationService;
+
private static final SimpleDateFormat logFormatter = new SimpleDateFormat("yyyy-MM-dd'T'HH-mm-ss");
@PersistenceContext(unitName = "VDCNet-ejbPU")
@@ -802,6 +802,35 @@ public void exportAllDatasets(boolean forceReExport) {
}
+
+ @Asynchronous
+ public void reExportDatasetAsync(Dataset dataset) {
+ exportDataset(dataset, true);
+ }
+
+ public void exportDataset(Dataset dataset, boolean forceReExport) {
+ if (dataset != null) {
+ // Note that the logic for handling a dataset is similar to what is implemented in exportAllDatasets,
+ // but when only one dataset is exported we do not log in a separate export logging file
+ if (dataset.isReleased() && dataset.getReleasedVersion() != null && !dataset.isDeaccessioned()) {
+
+ // can't trust dataset.getPublicationDate(), no.
+ Date publicationDate = dataset.getReleasedVersion().getReleaseTime(); // we know this dataset has a non-null released version! Maybe not - SEK 8/19 (We do now! :)
+ if (forceReExport || (publicationDate != null
+ && (dataset.getLastExportTime() == null
+ || dataset.getLastExportTime().before(publicationDate)))) {
+ try {
+ recordService.exportAllFormatsInNewTransaction(dataset);
+ logger.info("Success exporting dataset: " + dataset.getDisplayName() + " " + dataset.getGlobalIdString());
+ } catch (Exception ex) {
+ logger.info("Error exporting dataset: " + dataset.getDisplayName() + " " + dataset.getGlobalIdString() + "; " + ex.getMessage());
+ }
+ }
+ }
+ }
+
+ }
+
public String getReminderString(Dataset dataset, boolean canPublishDataset) {
return getReminderString( dataset, canPublishDataset, false);
}
@@ -842,9 +871,11 @@ public String getReminderString(Dataset dataset, boolean canPublishDataset, bool
}
}
- public void updateLastExportTimeStamp(Long datasetId) {
- Date now = new Date();
- em.createNativeQuery("UPDATE Dataset SET lastExportTime='"+now.toString()+"' WHERE id="+datasetId).executeUpdate();
+ @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
+ public int clearAllExportTimes() {
+ Query clearExportTimes = em.createQuery("UPDATE Dataset SET lastExportTime = NULL");
+ int numRowsUpdated = clearExportTimes.executeUpdate();
+ return numRowsUpdated;
}
public Dataset setNonDatasetFileAsThumbnail(Dataset dataset, InputStream inputStream) {
@@ -1135,4 +1166,5 @@ public void deleteHarvestedDataset(Dataset dataset, DataverseRequest request, Lo
hdLogger.warning("Failed to destroy the dataset");
}
}
+
}
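
The new reExportDatasetAsync/exportDataset and clearAllExportTimes methods back the admin metadata export endpoints. A sketch of exercising them, with the caveat that the exact paths and HTTP methods here are assumptions and the dataset id 24 is a placeholder: ::

    # re-export a single dataset in all formats (assumed endpoint)
    curl http://localhost:8080/api/admin/metadata/24/reExportDataset
    # clear lastExportTime on all datasets so the next export run regenerates everything (assumed endpoint/method)
    curl -X DELETE http://localhost:8080/api/admin/metadata/clearExportTimestamps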
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java b/src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java
index faa91b87e12..30815c43381 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java
@@ -6,11 +6,11 @@
import edu.harvard.iq.dataverse.branding.BrandingUtil;
import edu.harvard.iq.dataverse.dataset.DatasetUtil;
import edu.harvard.iq.dataverse.license.License;
-import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.FileUtil;
import edu.harvard.iq.dataverse.util.StringUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
import edu.harvard.iq.dataverse.util.DateUtil;
+import edu.harvard.iq.dataverse.util.json.JsonUtil;
import edu.harvard.iq.dataverse.util.json.NullSafeJsonBuilder;
import edu.harvard.iq.dataverse.workflows.WorkflowComment;
import java.io.Serializable;
@@ -27,6 +27,7 @@
import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonArrayBuilder;
+import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.persistence.CascadeType;
import javax.persistence.Column;
@@ -39,6 +40,8 @@
import javax.persistence.Index;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
+import javax.persistence.NamedQueries;
+import javax.persistence.NamedQuery;
import javax.persistence.OneToMany;
import javax.persistence.OneToOne;
import javax.persistence.OrderBy;
@@ -59,6 +62,13 @@
*
* @author skraffmiller
*/
+
+@NamedQueries({
+ @NamedQuery(name = "DatasetVersion.findUnarchivedReleasedVersion",
+ query = "SELECT OBJECT(o) FROM DatasetVersion AS o WHERE o.dataset.harvestedFrom IS NULL and o.releaseTime IS NOT NULL and o.archivalCopyLocation IS NULL"
+ )})
+
+
@Entity
@Table(indexes = {@Index(columnList="dataset_id")},
uniqueConstraints = @UniqueConstraint(columnNames = {"dataset_id,versionnumber,minorversionnumber"}))
@@ -94,6 +104,14 @@ public enum VersionState {
public static final int ARCHIVE_NOTE_MAX_LENGTH = 1000;
public static final int VERSION_NOTE_MAX_LENGTH = 1000;
+ //Archival copies: Status message required components
+ public static final String ARCHIVAL_STATUS = "status";
+ public static final String ARCHIVAL_STATUS_MESSAGE = "message";
+ //Archival Copies: Allowed Statuses
+ public static final String ARCHIVAL_STATUS_PENDING = "pending";
+ public static final String ARCHIVAL_STATUS_SUCCESS = "success";
+ public static final String ARCHIVAL_STATUS_FAILURE = "failure";
+
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@@ -152,6 +170,11 @@ public enum VersionState {
// removed pending further investigation (v4.13)
private String archiveNote;
+ // Originally a simple string indicating the location of the archival copy. As
+ // of v5.12, repurposed to provide a more general json archival status (failure,
+ // pending, success) and message (serialized as a string). The archival copy
+ // location is now expected as the contents of the message for the status
+ // 'success'. See the /api/datasets/{id}/{version}/archivalStatus API calls for more details
@Column(nullable=true, columnDefinition = "TEXT")
private String archivalCopyLocation;
@@ -180,6 +203,8 @@ public enum VersionState {
@Transient
private DatasetVersionDifference dvd;
+ @Transient
+ private JsonObject archivalStatus;
public Long getId() {
return this.id;
@@ -319,9 +344,39 @@ public void setArchiveNote(String note) {
public String getArchivalCopyLocation() {
return archivalCopyLocation;
}
+
+ public String getArchivalCopyLocationStatus() {
+ populateArchivalStatus(false);
+
+ if(archivalStatus!=null) {
+ return archivalStatus.getString(ARCHIVAL_STATUS);
+ }
+ return null;
+ }
+ public String getArchivalCopyLocationMessage() {
+ populateArchivalStatus(false);
+ if(archivalStatus!=null) {
+ return archivalStatus.getString(ARCHIVAL_STATUS_MESSAGE);
+ }
+ return null;
+ }
+
+ private void populateArchivalStatus(boolean force) {
+ if(archivalStatus ==null || force) {
+ if(archivalCopyLocation!=null) {
+ try {
+ archivalStatus = JsonUtil.getJsonObject(archivalCopyLocation);
+ } catch(Exception e) {
+ logger.warning("DatasetVersion id: " + id + "has a non-JsonObject value, parsing error: " + e.getMessage());
+ logger.fine(archivalCopyLocation);
+ }
+ }
+ }
+ }
public void setArchivalCopyLocation(String location) {
this.archivalCopyLocation = location;
+ populateArchivalStatus(true);
}
public String getDeaccessionLink() {
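
Per the comment above, archivalCopyLocation now holds a small JSON status object rather than a bare location string, with "status" (pending, success, or failure) and "message" keys. A hedged sketch of what the /api/datasets/{id}/{version}/archivalStatus calls mentioned there might exchange (a superuser API token is assumed, and the id, version, and archive URL are placeholders): ::

    # set the archival status of version 1.0 of dataset 24 (superuser only)
    curl -H "X-Dataverse-key: $API_TOKEN" -X PUT -H "Content-type: application/json" \
         -d '{"status":"success","message":"https://archive.example.edu/bag/doi-10.5072-FK2-ABCDEF"}' \
         http://localhost:8080/api/datasets/24/1.0/archivalStatus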
diff --git a/src/main/java/edu/harvard/iq/dataverse/DatasetVersionServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/DatasetVersionServiceBean.java
index 580d95b4b1d..23fc1961b7d 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DatasetVersionServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DatasetVersionServiceBean.java
@@ -1187,4 +1187,32 @@ private DatasetVersion getPreviousVersionWithUnf(DatasetVersion datasetVersion)
return null;
}
+ /**
+ * Merges the passed datasetversion to the persistence context.
+ * @param ver the DatasetVersion whose new state we want to persist.
+ * @return The managed entity representing {@code ver}.
+ */
+ public DatasetVersion merge( DatasetVersion ver ) {
+ return em.merge(ver);
+ }
+
+ /**
+ * Execute a query to return DatasetVersion
+ *
+ * @param queryString
+ * @return
+ */
+ public List<DatasetVersion> getUnarchivedDatasetVersions(){
+
+ try {
+ List<DatasetVersion> dsl = em.createNamedQuery("DatasetVersion.findUnarchivedReleasedVersion", DatasetVersion.class).getResultList();
+ return dsl;
+ } catch (javax.persistence.NoResultException e) {
+ logger.log(Level.FINE, "No unarchived DatasetVersions found: {0}");
+ return null;
+ } catch (EJBException e) {
+ logger.log(Level.WARNING, "EJBException exception: {0}", e.getMessage());
+ return null;
+ }
+ } // end getUnarchivedDatasetVersions
} // end class
diff --git a/src/main/java/edu/harvard/iq/dataverse/Dataverse.java b/src/main/java/edu/harvard/iq/dataverse/Dataverse.java
index 342aaec187a..bc8716b6129 100644
--- a/src/main/java/edu/harvard/iq/dataverse/Dataverse.java
+++ b/src/main/java/edu/harvard/iq/dataverse/Dataverse.java
@@ -5,6 +5,8 @@
import edu.harvard.iq.dataverse.dataaccess.DataAccess;
import edu.harvard.iq.dataverse.search.savedsearch.SavedSearch;
import edu.harvard.iq.dataverse.util.BundleUtil;
+import edu.harvard.iq.dataverse.util.SystemConfig;
+
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
@@ -322,8 +324,31 @@ public boolean isHarvested() {
return harvestingClient != null;
}
*/
-
-
+ private boolean metadataBlockFacetRoot;
+
+ public boolean isMetadataBlockFacetRoot() {
+ return metadataBlockFacetRoot;
+ }
+
+ public void setMetadataBlockFacetRoot(boolean metadataBlockFacetRoot) {
+ this.metadataBlockFacetRoot = metadataBlockFacetRoot;
+ }
+
+ @OneToMany(mappedBy = "dataverse",cascade={ CascadeType.REMOVE, CascadeType.MERGE,CascadeType.PERSIST }, orphanRemoval=true)
+ private List<DataverseMetadataBlockFacet> metadataBlockFacets = new ArrayList<>();
+
+ public List<DataverseMetadataBlockFacet> getMetadataBlockFacets() {
+ if (isMetadataBlockFacetRoot() || getOwner() == null) {
+ return metadataBlockFacets;
+ } else {
+ return getOwner().getMetadataBlockFacets();
+ }
+ }
+
+ public void setMetadataBlockFacets(List<DataverseMetadataBlockFacet> metadataBlockFacets) {
+ this.metadataBlockFacets = metadataBlockFacets;
+ }
+
public List<Guestbook> getParentGuestbooks() {
List<Guestbook> retList = new ArrayList<>();
Dataverse testDV = this;
@@ -765,4 +790,8 @@ public boolean isAncestorOf( DvObject other ) {
}
return false;
}
+
+ public String getLocalURL() {
+ return SystemConfig.getDataverseSiteUrlStatic() + "/dataverse/" + this.getAlias();
+ }
}
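
The new metadataBlockFacetRoot flag and metadataBlockFacets list mirror the existing facet-root behavior: a child collection inherits its owner's block facets unless it is marked as a facet root of its own. A rough sketch of how these might be driven over the native API (the endpoint paths below are assumptions inferred from this change, not confirmed here): ::

    # make the collection a metadata block facet root (assumed endpoint)
    curl -H "X-Dataverse-key: $API_TOKEN" -X POST -H "Content-type: application/json" \
         -d 'true' http://localhost:8080/api/dataverses/mycollection/metadatablockfacets/isRoot
    # set the metadata blocks to facet on (assumed endpoint)
    curl -H "X-Dataverse-key: $API_TOKEN" -X POST -H "Content-type: application/json" \
         -d '["socialscience", "geospatial"]' http://localhost:8080/api/dataverses/mycollection/metadatablockfacets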
diff --git a/src/main/java/edu/harvard/iq/dataverse/DataverseMetadataBlockFacet.java b/src/main/java/edu/harvard/iq/dataverse/DataverseMetadataBlockFacet.java
new file mode 100644
index 00000000000..a2659b81974
--- /dev/null
+++ b/src/main/java/edu/harvard/iq/dataverse/DataverseMetadataBlockFacet.java
@@ -0,0 +1,82 @@
+package edu.harvard.iq.dataverse;
+
+import javax.persistence.Entity;
+import javax.persistence.GeneratedValue;
+import javax.persistence.GenerationType;
+import javax.persistence.Id;
+import javax.persistence.Index;
+import javax.persistence.JoinColumn;
+import javax.persistence.ManyToOne;
+import javax.persistence.Table;
+import java.io.Serializable;
+import java.util.Objects;
+
+/**
+ *
+ * @author adaybujeda
+ */
+@Entity
+@Table(indexes = {@Index(columnList="dataverse_id")
+ , @Index(columnList="metadatablock_id")})
+public class DataverseMetadataBlockFacet implements Serializable {
+ private static final long serialVersionUID = 1L;
+
+ @Id
+ @GeneratedValue(strategy = GenerationType.IDENTITY)
+ private Long id;
+
+ @ManyToOne
+ @JoinColumn(name = "dataverse_id")
+ private Dataverse dataverse;
+
+ @ManyToOne
+ @JoinColumn(name = "metadatablock_id")
+ private MetadataBlock metadataBlock;
+
+ public Long getId() {
+ return id;
+ }
+
+ public void setId(Long id) {
+ this.id = id;
+ }
+
+ public Dataverse getDataverse() {
+ return dataverse;
+ }
+
+ public void setDataverse(Dataverse dataverse) {
+ this.dataverse = dataverse;
+ }
+
+ public MetadataBlock getMetadataBlock() {
+ return metadataBlock;
+ }
+
+ public void setMetadataBlock(MetadataBlock metadataBlock) {
+ this.metadataBlock = metadataBlock;
+ }
+
+ @Override
+ public int hashCode() {
+ int hash = 0;
+ hash += (this.id != null ? this.id.hashCode() : 0);
+ return hash;
+ }
+
+ @Override
+ public boolean equals(Object object) {
+ if (!(object instanceof DataverseMetadataBlockFacet)) {
+ return false;
+ }
+ DataverseMetadataBlockFacet other = (DataverseMetadataBlockFacet) object;
+ return !(!Objects.equals(this.id, other.id) && (this.id == null || !this.id.equals(other.id)));
+ }
+
+ @Override
+ public String toString() {
+ return String.format("edu.harvard.iq.dataverse.DataverseMetadataBlockFacet[ id=%s ]", id);
+ }
+
+}
+
diff --git a/src/main/java/edu/harvard/iq/dataverse/DvObjectContainer.java b/src/main/java/edu/harvard/iq/dataverse/DvObjectContainer.java
index 746efded48b..6ff01ef3ea8 100644
--- a/src/main/java/edu/harvard/iq/dataverse/DvObjectContainer.java
+++ b/src/main/java/edu/harvard/iq/dataverse/DvObjectContainer.java
@@ -15,7 +15,6 @@
public abstract class DvObjectContainer extends DvObject {
- //Default to "file" is for tests only
public static final String UNDEFINED_METADATA_LANGUAGE_CODE = "undefined"; //Used in dataverse.xhtml as a non-null selection option value (indicating inheriting the default)
@@ -93,6 +92,9 @@ public void setMetadataLanguage(String ml) {
}
}
+ public static boolean isMetadataLanguageSet(String mdLang) {
+ return mdLang!=null && !mdLang.equals(UNDEFINED_METADATA_LANGUAGE_CODE);
+ }
/* Dataverse collections can be configured to allow use of Curation labels and have this inheritable value to decide which set of labels to use.
diff --git a/src/main/java/edu/harvard/iq/dataverse/EditDataFilesPageHelper.java b/src/main/java/edu/harvard/iq/dataverse/EditDataFilesPageHelper.java
index c708c2e28e2..1bf6bee82eb 100644
--- a/src/main/java/edu/harvard/iq/dataverse/EditDataFilesPageHelper.java
+++ b/src/main/java/edu/harvard/iq/dataverse/EditDataFilesPageHelper.java
@@ -2,9 +2,11 @@
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.file.CreateDataFileResult;
+import org.apache.commons.text.StringEscapeUtils;
import javax.ejb.Stateless;
import javax.inject.Inject;
+import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
@@ -22,6 +24,14 @@ public class EditDataFilesPageHelper {
@Inject
private SettingsWrapper settingsWrapper;
+ public String consolidateHtmlErrorMessages(List<String> errorMessages) {
+ if(errorMessages == null || errorMessages.isEmpty()) {
+ return null;
+ }
+
+ return String.join("", errorMessages);
+ }
+
public String getHtmlErrorMessage(CreateDataFileResult createDataFileResult) {
List<String> errors = createDataFileResult.getErrors();
if(errors == null || errors.isEmpty()) {
@@ -33,8 +43,8 @@ public String getHtmlErrorMessage(CreateDataFileResult createDataFileResult) {
return null;
}
- String typeMessage = Optional.ofNullable(BundleUtil.getStringFromBundle(createDataFileResult.getBundleKey())).orElse("Error processing file");
- String errorsMessage = errors.stream().limit(maxErrorsToShow).map(text -> String.format("<li>%s</li>", text)).collect(Collectors.joining());
- return String.format("%s:<ul>%s</ul>", typeMessage, errorsMessage);
+ String typeMessage = Optional.ofNullable(BundleUtil.getStringFromBundle(createDataFileResult.getBundleKey(), Arrays.asList(createDataFileResult.getFilename()))).orElse("Error processing file");
+ String errorsMessage = errors.stream().limit(maxErrorsToShow).map(text -> String.format("<li>%s</li>", StringEscapeUtils.escapeHtml4(text))).collect(Collectors.joining());
+ return String.format("%s<ul>%s</ul>", typeMessage, errorsMessage);
}
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/EditDatafilesPage.java b/src/main/java/edu/harvard/iq/dataverse/EditDatafilesPage.java
index b1d178f51d9..6cf294ffd6d 100644
--- a/src/main/java/edu/harvard/iq/dataverse/EditDatafilesPage.java
+++ b/src/main/java/edu/harvard/iq/dataverse/EditDatafilesPage.java
@@ -51,6 +51,7 @@
import java.util.Iterator;
import java.util.List;
import java.util.Map;
+import java.util.Optional;
import java.util.logging.Logger;
import javax.ejb.EJB;
import javax.ejb.EJBException;
@@ -649,8 +650,8 @@ public String init() {
setUpRsync();
}
- if (settingsService.isTrueForKey(SettingsServiceBean.Key.PublicInstall, false)){
- JH.addMessage(FacesMessage.SEVERITY_WARN, getBundleString("dataset.message.publicInstall"));
+ if (isHasPublicStore()){
+ JH.addMessage(FacesMessage.SEVERITY_WARN, getBundleString("dataset.message.label.fileAccess"), getBundleString("dataset.message.publicInstall"));
}
return null;
@@ -1491,7 +1492,7 @@ public void handleDropBoxUpload(ActionEvent event) {
//datafiles = ingestService.createDataFiles(workingVersion, dropBoxStream, fileName, "application/octet-stream");
CreateDataFileResult createDataFilesResult = FileUtil.createDataFiles(workingVersion, dropBoxStream, fileName, "application/octet-stream", null, null, systemConfig);
datafiles = createDataFilesResult.getDataFiles();
- errorMessage = editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult);
+ Optional.ofNullable(editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult)).ifPresent(errorMessage -> errorMessages.add(errorMessage));
} catch (IOException ex) {
this.logger.log(Level.SEVERE, "Error during ingest of DropBox file {0} from link {1}", new Object[]{fileName, fileLink});
@@ -1745,12 +1746,13 @@ public void uploadFinished() {
uploadedFiles.clear();
uploadInProgress.setValue(false);
}
- if(errorMessage != null) {
- FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, BundleUtil.getStringFromBundle("dataset.file.uploadFailure"), errorMessage));
- PrimeFaces.current().ajax().update(":messagePanel");
- }
+
// refresh the warning message below the upload component, if exists:
if (uploadComponentId != null) {
+ if(!errorMessages.isEmpty()) {
+ FacesContext.getCurrentInstance().addMessage(uploadComponentId, new FacesMessage(FacesMessage.SEVERITY_ERROR, BundleUtil.getStringFromBundle("dataset.file.uploadFailure"), editDataFilesPageHelper.consolidateHtmlErrorMessages(errorMessages)));
+ }
+
if (uploadWarningMessage != null) {
if (existingFilesWithDupeContent != null || newlyUploadedFilesWithDupeContent != null) {
setWarningMessageForAlreadyExistsPopUp(uploadWarningMessage);
@@ -1797,7 +1799,7 @@ public void uploadFinished() {
multipleDupesNew = false;
uploadWarningMessage = null;
uploadSuccessMessage = null;
- errorMessage = null;
+ errorMessages = new ArrayList<>();
}
private String warningMessageForFileTypeDifferentPopUp;
@@ -1931,7 +1933,7 @@ private void handleReplaceFileUpload(String fullStorageLocation,
fileReplacePageHelper.resetReplaceFileHelper();
saveEnabled = false;
- String storageIdentifier = DataAccess.getStorarageIdFromLocation(fullStorageLocation);
+ String storageIdentifier = DataAccess.getStorageIdFromLocation(fullStorageLocation);
if (fileReplacePageHelper.handleNativeFileUpload(null, storageIdentifier, fileName, contentType, checkSumValue, checkSumType)) {
saveEnabled = true;
@@ -1948,7 +1950,7 @@ private void handleReplaceFileUpload(String fullStorageLocation,
}
private String uploadWarningMessage = null;
- private String errorMessage = null;
+ private List<String> errorMessages = new ArrayList<>();
private String uploadSuccessMessage = null;
private String uploadComponentId = null;
@@ -2020,7 +2022,11 @@ public void handleFileUpload(FileUploadEvent event) throws IOException {
// zip file.
CreateDataFileResult createDataFilesResult = FileUtil.createDataFiles(workingVersion, uFile.getInputStream(), uFile.getFileName(), uFile.getContentType(), null, null, systemConfig);
dFileList = createDataFilesResult.getDataFiles();
- errorMessage = editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult);
+ String createDataFilesError = editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult);
+ if(createDataFilesError != null) {
+ errorMessages.add(createDataFilesError);
+ uploadComponentId = event.getComponent().getClientId();
+ }
} catch (IOException ioex) {
logger.warning("Failed to process and/or save the file " + uFile.getFileName() + "; " + ioex.getMessage());
@@ -2072,8 +2078,12 @@ public void handleExternalUpload() {
if (!checksumTypeString.isBlank()) {
checksumType = ChecksumType.fromString(checksumTypeString);
}
+
+ //Should only be one colon with current design
int lastColon = fullStorageIdentifier.lastIndexOf(':');
- String storageLocation = fullStorageIdentifier.substring(0, lastColon) + "/" + dataset.getAuthorityForFileStorage() + "/" + dataset.getIdentifierForFileStorage() + "/" + fullStorageIdentifier.substring(lastColon + 1);
+ String storageLocation = fullStorageIdentifier.substring(0,lastColon) + "/" + dataset.getAuthorityForFileStorage() + "/" + dataset.getIdentifierForFileStorage() + "/" + fullStorageIdentifier.substring(lastColon+1);
+ storageLocation = DataAccess.expandStorageIdentifierIfNeeded(storageLocation);
+
if (uploadInProgress.isFalse()) {
uploadInProgress.setValue(true);
}
@@ -2127,7 +2137,7 @@ public void handleExternalUpload() {
//datafiles = ingestService.createDataFiles(workingVersion, dropBoxStream, fileName, "application/octet-stream");
CreateDataFileResult createDataFilesResult = FileUtil.createDataFiles(workingVersion, null, fileName, contentType, fullStorageIdentifier, checksumValue, checksumType, systemConfig);
datafiles = createDataFilesResult.getDataFiles();
- errorMessage = editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult);
+ Optional.ofNullable(editDataFilesPageHelper.getHtmlErrorMessage(createDataFilesResult)).ifPresent(errorMessage -> errorMessages.add(errorMessage));
} catch (IOException ex) {
logger.log(Level.SEVERE, "Error during ingest of file {0}", new Object[]{fileName});
}
@@ -3038,16 +3048,24 @@ public void saveAdvancedOptions() {
}
public boolean rsyncUploadSupported() {
- // ToDo - rsync was written before multiple store support and currently is hardcoded to use the "s3" store.
+ // ToDo - rsync was written before multiple store support and currently is hardcoded to use the DataAccess.S3 store.
// When those restrictions are lifted/rsync can be configured per store, the test in the
// Dataset Util method should be updated
- if (settingsWrapper.isRsyncUpload() && !DatasetUtil.isAppropriateStorageDriver(dataset)) {
+ if (settingsWrapper.isRsyncUpload() && !DatasetUtil.isRsyncAppropriateStorageDriver(dataset)) {
//dataset.file.upload.setUp.rsync.failed.detail
FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR, BundleUtil.getStringFromBundle("dataset.file.upload.setUp.rsync.failed"), BundleUtil.getStringFromBundle("dataset.file.upload.setUp.rsync.failed.detail"));
FacesContext.getCurrentInstance().addMessage(null, message);
}
- return settingsWrapper.isRsyncUpload() && DatasetUtil.isAppropriateStorageDriver(dataset);
+ return settingsWrapper.isRsyncUpload() && DatasetUtil.isRsyncAppropriateStorageDriver(dataset);
+ }
+
+ // Globus must be one of the upload methods listed in the :UploadMethods setting
+ // and the dataset's store must be in the list allowed by the GlobusStores
+ // setting
+ public boolean globusUploadSupported() {
+ return settingsWrapper.isGlobusUpload()
+ && settingsWrapper.isGlobusEnabledStorageDriver(dataset.getEffectiveStorageDriverId());
}
private void populateFileMetadatas() {
@@ -3083,4 +3101,9 @@ public boolean isFileAccessRequest() {
public void setFileAccessRequest(boolean fileAccessRequest) {
this.fileAccessRequest = fileAccessRequest;
}
+
+ //Determines whether this Dataset uses a public store and therefore doesn't support embargoed or restricted files
+ public boolean isHasPublicStore() {
+ return settingsWrapper.isTrueForKey(SettingsServiceBean.Key.PublicInstall, StorageIO.isPublicStore(dataset.getEffectiveStorageDriverId()));
+ }
}
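
As the comment above states, globusUploadSupported() requires both that Globus is among the methods listed in :UploadMethods and that the dataset's store is Globus-enabled. A hedged sketch of the corresponding settings calls (the literal value "globus" and the :GlobusStores spelling are assumptions taken from the comment, not verified here): ::

    # add Globus to the allowed upload methods (assumed value)
    curl -X PUT -d 'native/http,globus' http://localhost:8080/api/admin/settings/:UploadMethods
    # limit Globus transfers to a specific store (assumed setting name)
    curl -X PUT -d 'globus_store' http://localhost:8080/api/admin/settings/:GlobusStores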
diff --git a/src/main/java/edu/harvard/iq/dataverse/FileDownloadServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/FileDownloadServiceBean.java
index 6d3929a55e2..65e6b259bf4 100644
--- a/src/main/java/edu/harvard/iq/dataverse/FileDownloadServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/FileDownloadServiceBean.java
@@ -18,6 +18,7 @@
import edu.harvard.iq.dataverse.privateurl.PrivateUrl;
import edu.harvard.iq.dataverse.privateurl.PrivateUrlServiceBean;
import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
+import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.FileUtil;
import edu.harvard.iq.dataverse.util.StringUtil;
import java.io.IOException;
@@ -313,9 +314,7 @@ public void explore(GuestbookResponse guestbookResponse, FileMetadata fmd, Exter
ExternalToolHandler externalToolHandler = new ExternalToolHandler(externalTool, dataFile, apiToken, fmd, localeCode);
// Persist the name of the tool (i.e. "Data Explorer", etc.)
guestbookResponse.setDownloadtype(externalTool.getDisplayName());
- String toolUrl = externalToolHandler.getToolUrlWithQueryParams();
- logger.fine("Exploring with " + toolUrl);
- PrimeFaces.current().executeScript("window.open('"+toolUrl + "', target='_blank');");
+ PrimeFaces.current().executeScript(externalToolHandler.getExploreScript());
// This is the old logic from TwoRavens, null checks and all.
if (guestbookResponse != null && guestbookResponse.isWriteResponse()
&& ((fmd != null && fmd.getDataFile() != null) || guestbookResponse.getDataFile() != null)) {
@@ -561,12 +560,12 @@ public void addFileToCustomZipJob(String key, DataFile dataFile, Timestamp times
public String getDirectStorageLocatrion(String storageLocation) {
String storageDriverId;
- int separatorIndex = storageLocation.indexOf("://");
+ int separatorIndex = storageLocation.indexOf(DataAccess.SEPARATOR);
if ( separatorIndex > 0 ) {
storageDriverId = storageLocation.substring(0,separatorIndex);
String storageType = DataAccess.getDriverType(storageDriverId);
- if ("file".equals(storageType) || "s3".equals(storageType)) {
+ if (DataAccess.FILE.equals(storageType) || DataAccess.S3.equals(storageType)) {
return storageType.concat(storageLocation.substring(separatorIndex));
}
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/FilePage.java b/src/main/java/edu/harvard/iq/dataverse/FilePage.java
index 3fa6d4fdfff..7f2c6dfca5c 100644
--- a/src/main/java/edu/harvard/iq/dataverse/FilePage.java
+++ b/src/main/java/edu/harvard/iq/dataverse/FilePage.java
@@ -13,6 +13,7 @@
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.authorization.users.PrivateUrlUser;
import edu.harvard.iq.dataverse.authorization.users.User;
+import edu.harvard.iq.dataverse.dataaccess.DataAccess;
import edu.harvard.iq.dataverse.dataaccess.StorageIO;
import edu.harvard.iq.dataverse.engine.command.Command;
import edu.harvard.iq.dataverse.engine.command.exception.CommandException;
@@ -843,7 +844,7 @@ public String getComputeUrl() throws IOException {
if (swiftObject != null) {
swiftObject.open();
//generate a temp url for a file
- if (settingsService.isTrueForKey(SettingsServiceBean.Key.PublicInstall, false)) {
+ if (isHasPublicStore()) {
return settingsService.getValueForKey(SettingsServiceBean.Key.ComputeBaseUrl) + "?" + this.getFile().getOwner().getGlobalIdString() + "=" + swiftObject.getSwiftFileName();
}
return settingsService.getValueForKey(SettingsServiceBean.Key.ComputeBaseUrl) + "?" + this.getFile().getOwner().getGlobalIdString() + "=" + swiftObject.getSwiftFileName() + "&temp_url_sig=" + swiftObject.getTempUrlSignature() + "&temp_url_expires=" + swiftObject.getTempUrlExpiry();
@@ -935,8 +936,8 @@ public String getPublicDownloadUrl() {
try {
SwiftAccessIO swiftIO = (SwiftAccessIO) storageIO;
swiftIO.open();
- //if its a public install, lets just give users the permanent URL!
- if (systemConfig.isPublicInstall()){
+ //if its a public store, lets just give users the permanent URL!
+ if (isHasPublicStore()){
fileDownloadUrl = swiftIO.getRemoteUrl();
} else {
//TODO: if a user has access to this file, they should be given the swift url
@@ -1165,5 +1166,10 @@ public String getEmbargoPhrase() {
public String getIngestMessage() {
return BundleUtil.getStringFromBundle("file.ingestFailed.message", Arrays.asList(settingsWrapper.getGuidesBaseUrl(), settingsWrapper.getGuidesVersion()));
}
+
+ //Determines whether this File uses a public store and therefore doesn't support embargoed or restricted files
+ public boolean isHasPublicStore() {
+ return settingsWrapper.isTrueForKey(SettingsServiceBean.Key.PublicInstall, StorageIO.isPublicStore(DataAccess.getStorageDriverFromIdentifier(file.getStorageIdentifier())));
+ }
}
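
isHasPublicStore() treats a file as publicly stored when either the installation-wide :PublicInstall setting is true or the file's own store is flagged public. A minimal sketch of the two ways an admin might flag this, assuming a store id of "file" and the usual JVM-option mechanism: ::

    # whole installation is public (legacy behavior)
    curl -X PUT -d 'true' http://localhost:8080/api/admin/settings/:PublicInstall
    # or mark a single store as public (assumed JVM option naming)
    ./asadmin create-jvm-options "-Ddataverse.files.file.public=true"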
diff --git a/src/main/java/edu/harvard/iq/dataverse/GlobalId.java b/src/main/java/edu/harvard/iq/dataverse/GlobalId.java
index 98112170d25..20b280771fc 100644
--- a/src/main/java/edu/harvard/iq/dataverse/GlobalId.java
+++ b/src/main/java/edu/harvard/iq/dataverse/GlobalId.java
@@ -26,9 +26,13 @@ public class GlobalId implements java.io.Serializable {
public static final String DOI_PROTOCOL = "doi";
public static final String HDL_PROTOCOL = "hdl";
- public static final String HDL_RESOLVER_URL = "https://hdl.handle.net/";
public static final String DOI_RESOLVER_URL = "https://doi.org/";
-
+ public static final String DXDOI_RESOLVER_URL = "https://dx.doi.org/";
+ public static final String HDL_RESOLVER_URL = "https://hdl.handle.net/";
+ public static final String HTTP_DOI_RESOLVER_URL = "http://doi.org/";
+ public static final String HTTP_DXDOI_RESOLVER_URL = "http://dx.doi.org/";
+ public static final String HTTP_HDL_RESOLVER_URL = "http://hdl.handle.net/";
+
public static Optional parse(String identifierString) {
try {
return Optional.of(new GlobalId(identifierString));
@@ -252,4 +256,27 @@ public static boolean verifyImportCharacters(String pidParam) {
return m.matches();
}
+
+ /**
+ * Convenience method to get the internal form of a PID string when it may be in
+ * the https:// or http:// form.
+ * ToDo - refactor this class to allow creating a GlobalId from any form (which assures
+ * it has valid syntax) and then have methods to get the form you want.
+ *
+ * @param pidUrlString - a string assumed to be a valid PID in some form
+ * @return the internal form as a String
+ */
+ public static String getInternalFormOfPID(String pidUrlString) {
+ String pidString = pidUrlString;
+ if(pidUrlString.startsWith(GlobalId.DOI_RESOLVER_URL)) {
+ pidString = pidUrlString.replace(GlobalId.DOI_RESOLVER_URL, (GlobalId.DOI_PROTOCOL + ":"));
+ } else if(pidUrlString.startsWith(GlobalId.HDL_RESOLVER_URL)) {
+ pidString = pidUrlString.replace(GlobalId.HDL_RESOLVER_URL, (GlobalId.HDL_PROTOCOL + ":"));
+ } else if(pidUrlString.startsWith(GlobalId.HTTP_DOI_RESOLVER_URL)) {
+ pidString = pidUrlString.replace(GlobalId.HTTP_DOI_RESOLVER_URL, (GlobalId.DOI_PROTOCOL + ":"));
+ } else if(pidUrlString.startsWith(GlobalId.HTTP_HDL_RESOLVER_URL)) {
+ pidString = pidUrlString.replace(GlobalId.HTTP_HDL_RESOLVER_URL, (GlobalId.HDL_PROTOCOL + ":"));
+ }
+ return pidString;
+ }
}
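
As a usage sketch of the new getInternalFormOfPID() helper (the identifier values below are hypothetical examples, not taken from this change), resolver-URL forms are mapped back to the protocol-prefixed internal form, and a string already in internal form is returned unchanged: ::

    // Hypothetical identifiers, shown only to illustrate the mapping implemented above
    GlobalId.getInternalFormOfPID("https://doi.org/10.5072/FK2/ABCDE");   // -> "doi:10.5072/FK2/ABCDE"
    GlobalId.getInternalFormOfPID("http://hdl.handle.net/1902.1/00123");  // -> "hdl:1902.1/00123"
    GlobalId.getInternalFormOfPID("doi:10.5072/FK2/ABCDE");               // -> unchanged, already internal form
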
diff --git a/src/main/java/edu/harvard/iq/dataverse/MailServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/MailServiceBean.java
index f39fb8b0a32..2bfd342d899 100644
--- a/src/main/java/edu/harvard/iq/dataverse/MailServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/MailServiceBean.java
@@ -16,6 +16,8 @@
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.MailUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
+import edu.harvard.iq.dataverse.util.json.JsonUtil;
+
import java.io.UnsupportedEncodingException;
import java.text.MessageFormat;
import java.util.ArrayList;
@@ -169,7 +171,7 @@ public boolean sendSystemEmail(String to, String subject, String messageText, bo
return sent;
}
- private InternetAddress getSystemAddress() {
+ public InternetAddress getSystemAddress() {
String systemEmail = settingsService.getValueForKey(Key.SystemEmail);
return MailUtil.parseSystemAddress(systemEmail);
}
@@ -568,6 +570,49 @@ public String getMessageTextBasedOnNotification(UserNotification userNotificatio
logger.fine("fileImportMsg: " + fileImportMsg);
return messageText += fileImportMsg;
+ case GLOBUSUPLOADCOMPLETED:
+ dataset = (Dataset) targetObject;
+ messageText = BundleUtil.getStringFromBundle("notification.email.greeting.html");
+ String uploadCompletedMessage = messageText + BundleUtil.getStringFromBundle("notification.mail.globus.upload.completed", Arrays.asList(
+ systemConfig.getDataverseSiteUrl(),
+ dataset.getGlobalIdString(),
+ dataset.getDisplayName(),
+ comment
+ )) ;
+ return uploadCompletedMessage;
+
+ case GLOBUSDOWNLOADCOMPLETED:
+ dataset = (Dataset) targetObject;
+ messageText = BundleUtil.getStringFromBundle("notification.email.greeting.html");
+ String downloadCompletedMessage = messageText + BundleUtil.getStringFromBundle("notification.mail.globus.download.completed", Arrays.asList(
+ systemConfig.getDataverseSiteUrl(),
+ dataset.getGlobalIdString(),
+ dataset.getDisplayName(),
+ comment
+ )) ;
+ return downloadCompletedMessage;
+ case GLOBUSUPLOADCOMPLETEDWITHERRORS:
+ dataset = (Dataset) targetObject;
+ messageText = BundleUtil.getStringFromBundle("notification.email.greeting.html");
+ String uploadCompletedWithErrorsMessage = messageText + BundleUtil.getStringFromBundle("notification.mail.globus.upload.completedWithErrors", Arrays.asList(
+ systemConfig.getDataverseSiteUrl(),
+ dataset.getGlobalIdString(),
+ dataset.getDisplayName(),
+ comment
+ )) ;
+ return uploadCompletedWithErrorsMessage;
+
+ case GLOBUSDOWNLOADCOMPLETEDWITHERRORS:
+ dataset = (Dataset) targetObject;
+ messageText = BundleUtil.getStringFromBundle("notification.email.greeting.html");
+ String downloadCompletedWithErrorsMessage = messageText + BundleUtil.getStringFromBundle("notification.mail.globus.download.completedWithErrors", Arrays.asList(
+ systemConfig.getDataverseSiteUrl(),
+ dataset.getGlobalIdString(),
+ dataset.getDisplayName(),
+ comment
+ )) ;
+ return downloadCompletedWithErrorsMessage;
+
case CHECKSUMIMPORT:
version = (DatasetVersion) targetObject;
String checksumImportMsg = BundleUtil.getStringFromBundle("notification.import.checksum", Arrays.asList(
@@ -608,6 +653,26 @@ public String getMessageTextBasedOnNotification(UserNotification userNotificatio
));
return ingestedCompletedWithErrorsMessage;
+ case DATASETMENTIONED:
+ String additionalInfo = userNotification.getAdditionalInfo();
+ dataset = (Dataset) targetObject;
+ javax.json.JsonObject citingResource = null;
+ citingResource = JsonUtil.getJsonObject(additionalInfo);
+
+
+ pattern = BundleUtil.getStringFromBundle("notification.email.datasetWasMentioned");
+ Object[] paramArrayDatasetMentioned = {
+ userNotification.getUser().getName(),
+ BrandingUtil.getInstallationBrandName(),
+ citingResource.getString("@type"),
+ citingResource.getString("@id"),
+ citingResource.getString("name"),
+ citingResource.getString("relationship"),
+ systemConfig.getDataverseSiteUrl(),
+ dataset.getGlobalId().toString(),
+ dataset.getDisplayName()};
+ messageText = MessageFormat.format(pattern, paramArrayDatasetMentioned);
+ return messageText;
}
return "";
@@ -632,6 +697,7 @@ public Object getObjectOfNotification (UserNotification userNotification){
case GRANTFILEACCESS:
case REJECTFILEACCESS:
case DATASETCREATED:
+ case DATASETMENTIONED:
return datasetService.find(userNotification.getObjectId());
case CREATEDS:
case SUBMITTEDDS:
@@ -648,6 +714,11 @@ public Object getObjectOfNotification (UserNotification userNotification){
return datasetService.find(userNotification.getObjectId());
case FILESYSTEMIMPORT:
return versionService.find(userNotification.getObjectId());
+ case GLOBUSUPLOADCOMPLETED:
+ case GLOBUSUPLOADCOMPLETEDWITHERRORS:
+ case GLOBUSDOWNLOADCOMPLETED:
+ case GLOBUSDOWNLOADCOMPLETEDWITHERRORS:
+ return datasetService.find(userNotification.getObjectId());
case CHECKSUMIMPORT:
return versionService.find(userNotification.getObjectId());
case APIGENERATED:
diff --git a/src/main/java/edu/harvard/iq/dataverse/ManageGroupsPage.java b/src/main/java/edu/harvard/iq/dataverse/ManageGroupsPage.java
index d08337ec832..8513ca33b47 100644
--- a/src/main/java/edu/harvard/iq/dataverse/ManageGroupsPage.java
+++ b/src/main/java/edu/harvard/iq/dataverse/ManageGroupsPage.java
@@ -35,6 +35,7 @@
import javax.persistence.PersistenceContext;
import org.apache.commons.lang3.StringUtils;
+
/**
* @author michaelsuo
*/
@@ -95,10 +96,24 @@ public String init() {
return permissionsWrapper.notAuthorized();
}
explicitGroups = new LinkedList<>(explicitGroupService.findByOwner(getDataverseId()));
-
+ renderDeletePopup = false;
return null;
}
+
+ private boolean renderDeletePopup = false;
+
+ public boolean isRenderDeletePopup() {
+ return renderDeletePopup;
+ }
+ public void setRenderDeletePopup(boolean renderDeletePopup) {
+ this.renderDeletePopup = renderDeletePopup;
+ }
+
+ public void clickDeleteGroup(ExplicitGroup selectedGroup) {
+ setRenderDeletePopup(true);
+ this.selectedGroup = selectedGroup;
+ }
public void setSelectedGroup(ExplicitGroup selectedGroup) {
this.selectedGroup = selectedGroup;
diff --git a/src/main/java/edu/harvard/iq/dataverse/MetadataBlock.java b/src/main/java/edu/harvard/iq/dataverse/MetadataBlock.java
index 844c0ec5be7..33e75efffb5 100644
--- a/src/main/java/edu/harvard/iq/dataverse/MetadataBlock.java
+++ b/src/main/java/edu/harvard/iq/dataverse/MetadataBlock.java
@@ -202,10 +202,18 @@ public String toString() {
return "edu.harvard.iq.dataverse.MetadataBlock[ id=" + id + " ]";
}
- public String getLocaleDisplayName()
- {
+ public String getLocaleDisplayName() {
+ return getLocaleValue("metadatablock.displayName");
+ }
+
+ public String getLocaleDisplayFacet() {
+ return getLocaleValue("metadatablock.displayFacet");
+ }
+
+ // Visible for testing
+ String getLocaleValue(String metadataBlockKey) {
try {
- return BundleUtil.getStringFromPropertyFile("metadatablock.displayName", getName());
+ return BundleUtil.getStringFromPropertyFile(metadataBlockKey, getName());
} catch (MissingResourceException e) {
return displayName;
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/PermissionServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/PermissionServiceBean.java
index aaf38af1b36..8f7f53de1a2 100644
--- a/src/main/java/edu/harvard/iq/dataverse/PermissionServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/PermissionServiceBean.java
@@ -733,6 +733,9 @@ else if (dataset.isLockedFor(DatasetLock.Reason.Workflow)) {
else if (dataset.isLockedFor(DatasetLock.Reason.DcmUpload)) {
throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.editNotAllowed"), command);
}
+ else if (dataset.isLockedFor(DatasetLock.Reason.GlobusUpload)) {
+ throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.editNotAllowed"), command);
+ }
else if (dataset.isLockedFor(DatasetLock.Reason.EditInProgress)) {
throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.editNotAllowed"), command);
}
@@ -768,6 +771,9 @@ else if (dataset.isLockedFor(DatasetLock.Reason.Workflow)) {
else if (dataset.isLockedFor(DatasetLock.Reason.DcmUpload)) {
throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.publishNotAllowed"), command);
}
+ else if (dataset.isLockedFor(DatasetLock.Reason.GlobusUpload)) {
+ throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.publishNotAllowed"), command);
+ }
else if (dataset.isLockedFor(DatasetLock.Reason.EditInProgress)) {
throw new IllegalCommandException(BundleUtil.getStringFromBundle("dataset.message.locked.publishNotAllowed"), command);
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/SettingsWrapper.java b/src/main/java/edu/harvard/iq/dataverse/SettingsWrapper.java
index 9bf155740af..aa40423000d 100644
--- a/src/main/java/edu/harvard/iq/dataverse/SettingsWrapper.java
+++ b/src/main/java/edu/harvard/iq/dataverse/SettingsWrapper.java
@@ -13,6 +13,7 @@
import edu.harvard.iq.dataverse.util.MailUtil;
import edu.harvard.iq.dataverse.util.StringUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
+import edu.harvard.iq.dataverse.util.json.JsonUtil;
import edu.harvard.iq.dataverse.UserNotification.Type;
import java.time.LocalDate;
@@ -92,7 +93,15 @@ public class SettingsWrapper implements java.io.Serializable {
private Boolean rsyncUpload = null;
- private Boolean rsyncDownload = null;
+ private Boolean rsyncDownload = null;
+
+ private Boolean globusUpload = null;
+ private Boolean globusDownload = null;
+ private Boolean globusFileDownload = null;
+
+ private String globusAppUrl = null;
+
+ private List<String> globusStoreList = null;
private Boolean httpUpload = null;
@@ -292,6 +301,42 @@ public boolean isRsyncDownload() {
}
return rsyncDownload;
}
+
+ public boolean isGlobusUpload() {
+ if (globusUpload == null) {
+ globusUpload = systemConfig.isGlobusUpload();
+ }
+ return globusUpload;
+ }
+
+ public boolean isGlobusDownload() {
+ if (globusDownload == null) {
+ globusDownload = systemConfig.isGlobusDownload();
+ }
+ return globusDownload;
+ }
+
+ public boolean isGlobusFileDownload() {
+ if (globusFileDownload == null) {
+ globusFileDownload = systemConfig.isGlobusFileDownload();
+ }
+ return globusFileDownload;
+ }
+
+ public boolean isGlobusEnabledStorageDriver(String driverId) {
+ if (globusStoreList == null) {
+ globusStoreList = systemConfig.getGlobusStoresList();
+ }
+ return globusStoreList.contains(driverId);
+ }
+
+ public String getGlobusAppUrl() {
+ if (globusAppUrl == null) {
+ globusAppUrl = settingsService.getValueForKey(SettingsServiceBean.Key.GlobusAppUrl, "http://localhost");
+ }
+ return globusAppUrl;
+
+ }
public boolean isRsyncOnly() {
if (rsyncOnly == null) {
@@ -646,5 +691,4 @@ public boolean isCustomLicenseAllowed() {
}
return customLicenseAllowed;
}
-}
-
+}
\ No newline at end of file
diff --git a/src/main/java/edu/harvard/iq/dataverse/Shib.java b/src/main/java/edu/harvard/iq/dataverse/Shib.java
index 324f6e185a6..0f0e20aba94 100644
--- a/src/main/java/edu/harvard/iq/dataverse/Shib.java
+++ b/src/main/java/edu/harvard/iq/dataverse/Shib.java
@@ -218,7 +218,26 @@ public void init() {
? getValueFromAssertion(shibAffiliationAttribute)
: shibService.getAffiliation(shibIdp, shibService.getDevShibAccountType());
+
if (affiliation != null) {
+ String ShibAffiliationSeparator = settingsService.getValueForKey(SettingsServiceBean.Key.ShibAffiliationSeparator);
+ if (ShibAffiliationSeparator == null) {
+ ShibAffiliationSeparator = ";";
+ }
+ String ShibAffiliationOrder = settingsService.getValueForKey(SettingsServiceBean.Key.ShibAffiliationOrder);
+ if (ShibAffiliationOrder != null) {
+ if (ShibAffiliationOrder.equals("lastAffiliation")) {
+ affiliation = affiliation.substring(affiliation.lastIndexOf(ShibAffiliationSeparator) + 1); //patch for affiliation array returning last part
+ }
+ else if (ShibAffiliationOrder.equals("firstAffiliation")) {
+ try{
+ affiliation = affiliation.substring(0,affiliation.indexOf(ShibAffiliationSeparator)); //patch for affiliation array returning first part
+ }
+ catch (Exception e){
+ logger.info("Affiliation does not contain \"" + ShibAffiliationSeparator + "\"");
+ }
+ }
+ }
affiliationToDisplayAtConfirmation = affiliation;
friendlyNameForInstitution = affiliation;
}
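
A small sketch of what the new :ShibAffiliationSeparator / :ShibAffiliationOrder handling above does to a multi-valued affiliation attribute (the affiliation value and setting values below are hypothetical): ::

    String affiliation = "Department of Physics;Example University"; // hypothetical IdP value
    String separator = ";";              // :ShibAffiliationSeparator (defaults to ";")
    String order = "lastAffiliation";    // :ShibAffiliationOrder

    if ("lastAffiliation".equals(order)) {
        affiliation = affiliation.substring(affiliation.lastIndexOf(separator) + 1); // "Example University"
    } else if ("firstAffiliation".equals(order)) {
        affiliation = affiliation.substring(0, affiliation.indexOf(separator));      // "Department of Physics"
    }
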
diff --git a/src/main/java/edu/harvard/iq/dataverse/Template.java b/src/main/java/edu/harvard/iq/dataverse/Template.java
index b9a1762714a..61f0a78656f 100644
--- a/src/main/java/edu/harvard/iq/dataverse/Template.java
+++ b/src/main/java/edu/harvard/iq/dataverse/Template.java
@@ -1,7 +1,6 @@
package edu.harvard.iq.dataverse;
import java.io.Serializable;
-import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
@@ -10,6 +9,11 @@
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.stream.Collectors;
+
+import javax.json.Json;
+import javax.json.JsonObjectBuilder;
+import javax.json.JsonString;
import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
@@ -28,6 +32,8 @@
import javax.validation.constraints.Size;
import edu.harvard.iq.dataverse.util.DateUtil;
+import edu.harvard.iq.dataverse.util.json.JsonUtil;
+
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import org.hibernate.validator.constraints.NotBlank;
@@ -125,7 +131,13 @@ public void setTermsOfUseAndAccess(TermsOfUseAndAccess termsOfUseAndAccess) {
public List getDatasetFields() {
return datasetFields;
}
+
+ @Column(columnDefinition="TEXT", nullable = true )
+ private String instructions;
+ @Transient
+ private Map<String, String> instructionsMap = null;
+
@Transient
private Map> metadataBlocksForView = new HashMap<>();
@Transient
@@ -235,10 +247,8 @@ public int compare(DatasetField d1, DatasetField d2) {
}
private void initMetadataBlocksForCreate() {
- metadataBlocksForView.clear();
metadataBlocksForEdit.clear();
for (MetadataBlock mdb : this.getDataverse().getMetadataBlocks()) {
- List datasetFieldsForView = new ArrayList<>();
List datasetFieldsForEdit = new ArrayList<>();
for (DatasetField dsf : this.getDatasetFields()) {
@@ -247,9 +257,6 @@ private void initMetadataBlocksForCreate() {
}
}
- if (!datasetFieldsForView.isEmpty()) {
- metadataBlocksForView.put(mdb, sortDatasetFields(datasetFieldsForView));
- }
if (!datasetFieldsForEdit.isEmpty()) {
metadataBlocksForEdit.put(mdb, sortDatasetFields(datasetFieldsForEdit));
}
@@ -261,27 +268,31 @@ public void setMetadataValueBlocks() {
metadataBlocksForView.clear();
metadataBlocksForEdit.clear();
List filledInFields = this.getDatasetFields();
+
+ Map<String, String> instructionsMap = getInstructionsMap();
-
- List actualMDB = new ArrayList<>();
+ List<MetadataBlock> viewMDB = new ArrayList<>();
+ List<MetadataBlock> editMDB = this.getDataverse().getMetadataBlocks(false);
- actualMDB.addAll(this.getDataverse().getMetadataBlocks());
- for (DatasetField dsfv : filledInFields) {
- if (!dsfv.isEmptyForDisplay()) {
- MetadataBlock mdbTest = dsfv.getDatasetFieldType().getMetadataBlock();
- if (!actualMDB.contains(mdbTest)) {
- actualMDB.add(mdbTest);
+ //The metadatablocks in this template include any from the Dataverse it is associated with
+ //plus any others where the template has a displayable field (i.e. from before a block was dropped in the dataverse/collection)
+ viewMDB.addAll(this.getDataverse().getMetadataBlocks(true));
+ for (DatasetField dsf : filledInFields) {
+ if (!dsf.isEmptyForDisplay()) {
+ MetadataBlock mdbTest = dsf.getDatasetFieldType().getMetadataBlock();
+ if (!viewMDB.contains(mdbTest)) {
+ viewMDB.add(mdbTest);
}
}
- }
-
- for (MetadataBlock mdb : actualMDB) {
+ }
+
+ for (MetadataBlock mdb : viewMDB) {
+
List datasetFieldsForView = new ArrayList<>();
- List datasetFieldsForEdit = new ArrayList<>();
for (DatasetField dsf : this.getDatasetFields()) {
if (dsf.getDatasetFieldType().getMetadataBlock().equals(mdb)) {
- datasetFieldsForEdit.add(dsf);
- if (!dsf.isEmpty()) {
+ //For viewing, show the field if it has a value or custom instructions
+ if (!dsf.isEmpty() || instructionsMap.containsKey(dsf.getDatasetFieldType().getName())) {
datasetFieldsForView.add(dsf);
}
}
@@ -290,10 +301,20 @@ public void setMetadataValueBlocks() {
if (!datasetFieldsForView.isEmpty()) {
metadataBlocksForView.put(mdb, sortDatasetFields(datasetFieldsForView));
}
- if (!datasetFieldsForEdit.isEmpty()) {
- metadataBlocksForEdit.put(mdb, sortDatasetFields(datasetFieldsForEdit));
+
+ }
+
+ for (MetadataBlock mdb : editMDB) {
+ List<DatasetField> datasetFieldsForEdit = new ArrayList<>();
+ this.setDatasetFields(initDatasetFields());
+ for (DatasetField dsf : this.getDatasetFields() ) {
+ if (dsf.getDatasetFieldType().getMetadataBlock().equals(mdb)) {
+ datasetFieldsForEdit.add(dsf);
+ }
}
+ metadataBlocksForEdit.put(mdb, sortDatasetFields(datasetFieldsForEdit));
}
+
}
// TODO: clean up init methods and get them to work, cascading all the way down.
@@ -340,6 +361,9 @@ public Template cloneNewTemplate(Template source) {
}
terms.setTemplate(newTemplate);
newTemplate.setTermsOfUseAndAccess(terms);
+
+ newTemplate.getInstructionsMap().putAll(source.getInstructionsMap());
+ newTemplate.updateInstructions();
return newTemplate;
}
@@ -379,6 +403,45 @@ private List getFlatDatasetFields(List dsfList) {
return retList;
}
+ //Cache values in map for reading
+ public Map<String, String> getInstructionsMap() {
+ if(instructionsMap==null)
+ if(instructions != null) {
+ instructionsMap = JsonUtil.getJsonObject(instructions).entrySet().stream().collect(Collectors.toMap(entry -> entry.getKey(),entry -> ((JsonString)entry.getValue()).getString()));
+ } else {
+ instructionsMap = new HashMap<>();
+ }
+ return instructionsMap;
+ }
+
+ //Get the custom instructions defined for a given fieldType
+ public String getInstructionsFor(String fieldType) {
+ return getInstructionsMap().get(fieldType);
+ }
+
+ /*
+ //Add/change or remove (null instructionString) instructions for a given fieldType
+ public void setInstructionsFor(String fieldType, String instructionString) {
+ if(instructionString==null) {
+ getInstructionsMap().remove(fieldType);
+ } else {
+ getInstructionsMap().put(fieldType, instructionString);
+ }
+ updateInstructions();
+ }
+ */
+
+ //Keep instructions up-to-date on any change
+ public void updateInstructions() {
+ JsonObjectBuilder builder = Json.createObjectBuilder();
+ getInstructionsMap().forEach((key, value) -> {
+ if (value != null)
+ builder.add(key, value);
+ });
+ instructions = JsonUtil.prettyPrint(builder.build());
+ }
+
+
@Override
public int hashCode() {
int hash = 0;
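
To illustrate how the new per-field custom instructions are stored: the map is keyed by DatasetFieldType name and serialized as a JSON object into the new TEXT "instructions" column whenever updateInstructions() is called. A minimal sketch, with field-type names assumed purely for illustration: ::

    Template t = new Template();
    t.getInstructionsMap().put("title", "Enter the project title, not the file name");
    t.getInstructionsMap().put("dsDescriptionValue", "Two or three sentences describing the data");
    t.updateInstructions();        // "instructions" column now holds {"title": "...", "dsDescriptionValue": "..."}
    t.getInstructionsFor("title"); // returns the custom instruction, or null if none was set
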
diff --git a/src/main/java/edu/harvard/iq/dataverse/TemplatePage.java b/src/main/java/edu/harvard/iq/dataverse/TemplatePage.java
index 19beaf75349..6da0d99da20 100644
--- a/src/main/java/edu/harvard/iq/dataverse/TemplatePage.java
+++ b/src/main/java/edu/harvard/iq/dataverse/TemplatePage.java
@@ -14,6 +14,7 @@
import java.sql.Timestamp;
import java.util.Date;
import java.util.List;
+import java.util.logging.Logger;
import javax.ejb.EJB;
import javax.ejb.EJBException;
import javax.faces.application.FacesMessage;
@@ -52,6 +53,8 @@ public class TemplatePage implements java.io.Serializable {
@Inject
LicenseServiceBean licenseServiceBean;
+
+ private static final Logger logger = Logger.getLogger(TemplatePage.class.getCanonicalName());
public enum EditMode {
@@ -160,7 +163,7 @@ private void updateDatasetFieldInputLevels(){
for (DatasetField dsf: template.getFlatDatasetFields()){
DataverseFieldTypeInputLevel dsfIl = dataverseFieldTypeInputLevelService.findByDataverseIdDatasetFieldTypeId(dvIdForInputLevel, dsf.getDatasetFieldType().getId());
- if (dsfIl != null){
+ if (dsfIl != null){
dsf.setInclude(dsfIl.isInclude());
} else {
dsf.setInclude(true);
@@ -173,8 +176,6 @@ public void edit(TemplatePage.EditMode editMode) {
}
public String save(String redirectPage) {
-
- //SEK - removed dead code 1/6/2015
boolean create = false;
Command cmd;
@@ -184,6 +185,8 @@ public String save(String redirectPage) {
DatasetFieldUtil.tidyUpFields( template.getDatasetFields(), false );
+ template.updateInstructions();
+
if (editMode == EditMode.CREATE) {
template.setCreateTime(new Timestamp(new Date().getTime()));
template.setUsageCount(new Long(0));
@@ -208,20 +211,13 @@ public String save(String redirectPage) {
error.append(cause).append(" ");
error.append(cause.getMessage()).append(" ");
}
- //
- //FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Template Save Failed", " - " + error.toString()));
- System.out.print("dataverse " + dataverse.getName());
- System.out.print("Ejb exception");
- System.out.print(error.toString());
+ logger.warning("Template Save failed - Ejb exception " + error.toString());
JH.addMessage(FacesMessage.SEVERITY_FATAL, BundleUtil.getStringFromBundle("template.save.fail"));
return null;
} catch (CommandException ex) {
- System.out.print("command exception");
- System.out.print(ex.toString());
- //FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, "Template Save Failed", " - " + ex.toString()));
+ logger.severe("Template Save failed - Ejb exception " + ex.toString());
JH.addMessage(FacesMessage.SEVERITY_FATAL, BundleUtil.getStringFromBundle("template.save.fail"));
return null;
- //logger.severe(ex.getMessage());
}
editMode = null;
String msg = (create)? BundleUtil.getStringFromBundle("template.create"): BundleUtil.getStringFromBundle("template.save");
@@ -253,5 +249,11 @@ public String deleteTemplate(Long templateId) {
}
return "/manage-templates.xhtml?dataverseId=" + dataverse.getId() + "&faces-redirect=true";
}
+
+ //Get the custom instructions defined for a given fieldType, or a default label if none is set
+ public String getInstructionsLabelFor(String fieldType) {
+ String fieldInstructions = template.getInstructionsMap().get(fieldType);
+ return (fieldInstructions!=null && !fieldInstructions.isBlank()) ? fieldInstructions : BundleUtil.getStringFromBundle("template.instructions.empty.label");
+ }
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/UserNotification.java b/src/main/java/edu/harvard/iq/dataverse/UserNotification.java
index 5714a879527..b68a1b9d13e 100644
--- a/src/main/java/edu/harvard/iq/dataverse/UserNotification.java
+++ b/src/main/java/edu/harvard/iq/dataverse/UserNotification.java
@@ -37,7 +37,9 @@ public enum Type {
ASSIGNROLE, REVOKEROLE, CREATEDV, CREATEDS, CREATEACC, SUBMITTEDDS, RETURNEDDS,
PUBLISHEDDS, REQUESTFILEACCESS, GRANTFILEACCESS, REJECTFILEACCESS, FILESYSTEMIMPORT,
CHECKSUMIMPORT, CHECKSUMFAIL, CONFIRMEMAIL, APIGENERATED, INGESTCOMPLETED, INGESTCOMPLETEDWITHERRORS,
- PUBLISHFAILED_PIDREG, WORKFLOW_SUCCESS, WORKFLOW_FAILURE, STATUSUPDATED, DATASETCREATED;
+ PUBLISHFAILED_PIDREG, WORKFLOW_SUCCESS, WORKFLOW_FAILURE, STATUSUPDATED, DATASETCREATED, DATASETMENTIONED,
+ GLOBUSUPLOADCOMPLETED, GLOBUSUPLOADCOMPLETEDWITHERRORS,
+ GLOBUSDOWNLOADCOMPLETED, GLOBUSDOWNLOADCOMPLETEDWITHERRORS;
public String getDescription() {
return BundleUtil.getStringFromBundle("notification.typeDescription." + this.name());
@@ -88,6 +90,8 @@ public static String toStringValue(Set typesSet) {
@Column( nullable = false )
private Type type;
private Long objectId;
+
+ private String additionalInfo;
@Transient
private boolean displayAsRead;
@@ -196,4 +200,12 @@ public void setRoleString(String roleString) {
public String getLocaleSendDate() {
return DateUtil.formatDate(sendDate);
}
+
+ public String getAdditionalInfo() {
+ return additionalInfo;
+ }
+
+ public void setAdditionalInfo(String additionalInfo) {
+ this.additionalInfo = additionalInfo;
+ }
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/UserNotificationServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/UserNotificationServiceBean.java
index 6792a7bedc7..947ee3ce989 100644
--- a/src/main/java/edu/harvard/iq/dataverse/UserNotificationServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/UserNotificationServiceBean.java
@@ -110,12 +110,16 @@ public void sendNotification(AuthenticatedUser dataverseUser, Timestamp sendDate
}
public void sendNotification(AuthenticatedUser dataverseUser, Timestamp sendDate, Type type, Long objectId, String comment, AuthenticatedUser requestor, boolean isHtmlContent) {
+ sendNotification(dataverseUser, sendDate, type, objectId, comment, requestor, isHtmlContent, null);
+ }
+ public void sendNotification(AuthenticatedUser dataverseUser, Timestamp sendDate, Type type, Long objectId, String comment, AuthenticatedUser requestor, boolean isHtmlContent, String additionalInfo) {
UserNotification userNotification = new UserNotification();
userNotification.setUser(dataverseUser);
userNotification.setSendDate(sendDate);
userNotification.setType(type);
userNotification.setObjectId(objectId);
userNotification.setRequestor(requestor);
+ userNotification.setAdditionalInfo(additionalInfo);
if (!isEmailMuted(userNotification) && mailService.sendNotificationEmail(userNotification, comment, requestor, isHtmlContent)) {
logger.fine("email was sent");
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/AbstractApiBean.java b/src/main/java/edu/harvard/iq/dataverse/api/AbstractApiBean.java
index d2c3f68dba2..ed9a544e726 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/AbstractApiBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/AbstractApiBean.java
@@ -630,6 +630,10 @@ protected T execCommand( Command cmd ) throws WrappedResponse {
return engineSvc.submit(cmd);
} catch (IllegalCommandException ex) {
+ //For #8859: API calls that try to update datasets with Terms of Use and Access out of compliance should get a Conflict response
+ if (ex.getMessage().toLowerCase().contains("terms of use")){
+ throw new WrappedResponse(ex, conflict(ex.getMessage()));
+ }
throw new WrappedResponse( ex, forbidden(ex.getMessage() ) );
} catch (PermissionException ex) {
/**
@@ -822,6 +826,10 @@ protected Response forbidden( String msg ) {
return error( Status.FORBIDDEN, msg );
}
+ protected Response conflict( String msg ) {
+ return error( Status.CONFLICT, msg );
+ }
+
protected Response badApiKey( String apiKey ) {
return error(Status.UNAUTHORIZED, (apiKey != null ) ? "Bad api key " : "Please provide a key query parameter (?key=XXX) or via the HTTP header " + DATAVERSE_KEY_HEADER_NAME);
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Access.java b/src/main/java/edu/harvard/iq/dataverse/api/Access.java
index b2a8da3af4c..abeedf23b59 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Access.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Access.java
@@ -278,7 +278,7 @@ private DataFile findDataFileOrDieWrapper(String fileId){
@Path("datafile/{fileId:.+}")
@GET
@Produces({"application/xml"})
- public DownloadInstance datafile(@PathParam("fileId") String fileId, @QueryParam("gbrecs") boolean gbrecs, @QueryParam("key") String apiToken, @Context UriInfo uriInfo, @Context HttpHeaders headers, @Context HttpServletResponse response) /*throws NotFoundException, ServiceUnavailableException, PermissionDeniedException, AuthorizationRequiredException*/ {
+ public Response datafile(@PathParam("fileId") String fileId, @QueryParam("gbrecs") boolean gbrecs, @QueryParam("key") String apiToken, @Context UriInfo uriInfo, @Context HttpHeaders headers, @Context HttpServletResponse response) /*throws NotFoundException, ServiceUnavailableException, PermissionDeniedException, AuthorizationRequiredException*/ {
// check first if there's a trailing slash, and chop it:
while (fileId.lastIndexOf('/') == fileId.length() - 1) {
@@ -332,6 +332,11 @@ public DownloadInstance datafile(@PathParam("fileId") String fileId, @QueryParam
dInfo.addServiceAvailable(new OptionalAccessService("preprocessed", "application/json", "format=prep", "Preprocessed data in JSON"));
dInfo.addServiceAvailable(new OptionalAccessService("subset", "text/tab-separated-values", "variables=<LIST>", "Column-wise Subsetting"));
}
+
+ if(systemConfig.isGlobusFileDownload() && systemConfig.getGlobusStoresList().contains(DataAccess.getStorageDriverFromIdentifier(df.getStorageIdentifier()))) {
+ dInfo.addServiceAvailable(new OptionalAccessService("GlobusTransfer", df.getContentType(), "format=GlobusTransfer", "Download via Globus"));
+ }
+
DownloadInstance downloadInstance = new DownloadInstance(dInfo);
downloadInstance.setRequestUriInfo(uriInfo);
downloadInstance.setRequestHttpHeaders(headers);
@@ -423,7 +428,10 @@ public DownloadInstance datafile(@PathParam("fileId") String fileId, @QueryParam
/*
* Provide some browser-friendly headers: (?)
*/
- return downloadInstance;
+ if (headers.getRequestHeaders().containsKey("Range")) {
+ return Response.status(Response.Status.PARTIAL_CONTENT).entity(downloadInstance).build();
+ }
+ return Response.ok(downloadInstance).build();
}
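
Because the datafile endpoint now returns a Response and reports 206 Partial Content when a Range header is present, range requests against /api/access/datafile/{id} can be exercised directly. A hedged sketch using the JDK HTTP client; the installation URL and file id are placeholders: ::

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RangeDownloadSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical installation URL and datafile id
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/access/datafile/42"))
                    .header("Range", "bytes=0-1023") // ask for the first KiB only
                    .GET()
                    .build();
            HttpResponse<byte[]> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofByteArray());
            System.out.println(response.statusCode()); // 206 when the range is honored, 200 for a full download
        }
    }
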
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Admin.java b/src/main/java/edu/harvard/iq/dataverse/api/Admin.java
index 78ec4a6edb5..ef08444af69 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Admin.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Admin.java
@@ -105,9 +105,6 @@
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.json;
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.rolesToJson;
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.toJsonArray;
-import java.math.BigDecimal;
-
-
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
@@ -1805,31 +1802,44 @@ public Response validateDataFileHashValue(@PathParam("fileId") String fileId) {
}
- @GET
- @Path("/submitDataVersionToArchive/{id}/{version}")
- public Response submitDatasetVersionToArchive(@PathParam("id") String dsid, @PathParam("version") String versionNumber) {
+ @POST
+ @Path("/submitDatasetVersionToArchive/{id}/{version}")
+ public Response submitDatasetVersionToArchive(@PathParam("id") String dsid,
+ @PathParam("version") String versionNumber) {
try {
AuthenticatedUser au = findAuthenticatedUserOrDie();
- // Note - the user is being set in the session so it becomes part of the
- // DataverseRequest and is sent to the back-end command where it is used to get
- // the API Token which is then used to retrieve files (e.g. via S3 direct
- // downloads) to create the Bag
- session.setUser(au); // TODO: Stop using session. Use createDataverseRequest instead.
+
Dataset ds = findDatasetOrDie(dsid);
DatasetVersion dv = datasetversionService.findByFriendlyVersionNumber(ds.getId(), versionNumber);
if (dv.getArchivalCopyLocation() == null) {
String className = settingsService.getValueForKey(SettingsServiceBean.Key.ArchiverClassName);
- AbstractSubmitToArchiveCommand cmd = ArchiverUtil.createSubmitToArchiveCommand(className, dvRequestService.getDataverseRequest(), dv);
+ // Note - the user is being sent via the createDataverseRequest(au) call to the
+ // back-end command where it is used to get the API Token which is
+ // then used to retrieve files (e.g. via S3 direct downloads) to create the Bag
+ AbstractSubmitToArchiveCommand cmd = ArchiverUtil.createSubmitToArchiveCommand(className,
+ createDataverseRequest(au), dv);
+ // createSubmitToArchiveCommand() tries to find and instantiate a non-abstract
+ // implementation of AbstractSubmitToArchiveCommand based on the provided
+ // className. If a class with that name isn't found (or can't be instantiated), it
+ // will return null.
if (cmd != null) {
+ if(ArchiverUtil.onlySingleVersionArchiving(cmd.getClass(), settingsService)) {
+ for (DatasetVersion version : ds.getVersions()) {
+ if ((dv != version) && version.getArchivalCopyLocation() != null) {
+ return error(Status.CONFLICT, "Dataset already archived.");
+ }
+ }
+ }
new Thread(new Runnable() {
public void run() {
try {
DatasetVersion dv = commandEngine.submit(cmd);
- if (dv.getArchivalCopyLocation() != null) {
- logger.info("DatasetVersion id=" + ds.getGlobalId().toString() + " v" + versionNumber + " submitted to Archive at: "
- + dv.getArchivalCopyLocation());
+ if (!dv.getArchivalCopyLocationStatus().equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)) {
+ logger.info(
+ "DatasetVersion id=" + ds.getGlobalId().toString() + " v" + versionNumber
+ + " submitted to Archive, status: " + dv.getArchivalCopyLocationStatus());
} else {
logger.severe("Error submitting version due to conflict/error at Archive");
}
@@ -1838,13 +1848,105 @@ public void run() {
}
}
}).start();
- return ok("Archive submission using " + cmd.getClass().getCanonicalName() + " started. Processing can take significant time for large datasets. View log and/or check archive for results.");
+ return ok("Archive submission using " + cmd.getClass().getCanonicalName()
+ + " started. Processing can take significant time for large datasets and requires that the user have permission to publish the dataset. View log and/or check archive for results.");
+ } else {
+ logger.log(Level.SEVERE, "Could not find Archiver class: " + className);
+ return error(Status.INTERNAL_SERVER_ERROR, "Could not find Archiver class: " + className);
+ }
+ } else {
+ return error(Status.BAD_REQUEST, "Version was already submitted for archiving.");
+ }
+ } catch (WrappedResponse e1) {
+ return error(Status.UNAUTHORIZED, "api key required");
+ }
+ }
+
+
+ /**
+ * Iteratively archives all unarchived dataset versions
+ * @param listonly don't archive, just list unarchived versions
+ * @param limit max number of versions to process
+ * @param latestonly only archive the latest version of each dataset
+ *
+ * @return
+ */
+ @POST
+ @Path("/archiveAllUnarchivedDatasetVersions")
+ public Response archiveAllUnarchivedDatasetVersions(@QueryParam("listonly") boolean listonly, @QueryParam("limit") Integer limit, @QueryParam("latestonly") boolean latestonly) {
+
+ try {
+ AuthenticatedUser au = findAuthenticatedUserOrDie();
+
+ List<DatasetVersion> dsl = datasetversionService.getUnarchivedDatasetVersions();
+ if (dsl != null) {
+ if (listonly) {
+ JsonArrayBuilder jab = Json.createArrayBuilder();
+ logger.fine("Unarchived versions found: ");
+ int current = 0;
+ for (DatasetVersion dv : dsl) {
+ if (limit != null && current >= limit) {
+ break;
+ }
+ if (!latestonly || dv.equals(dv.getDataset().getLatestVersionForCopy())) {
+ jab.add(dv.getDataset().getGlobalId().toString() + ", v" + dv.getFriendlyVersionNumber());
+ logger.fine(" " + dv.getDataset().getGlobalId().toString() + ", v" + dv.getFriendlyVersionNumber());
+ current++;
+ }
+ }
+ return ok(jab);
+ }
+ String className = settingsService.getValueForKey(SettingsServiceBean.Key.ArchiverClassName);
+ // Note - the user is being sent via the createDataverseRequest(au) call to the
+ // back-end command where it is used to get the API Token which is
+ // then used to retrieve files (e.g. via S3 direct downloads) to create the Bag
+ final DataverseRequest request = createDataverseRequest(au);
+ // createSubmitToArchiveCommand() tries to find and instantiate a non-abstract
+ // implementation of AbstractSubmitToArchiveCommand based on the provided
+ // className. If a class with that name isn't found (or can't be instantiated), it
+ // will return null.
+ AbstractSubmitToArchiveCommand cmd = ArchiverUtil.createSubmitToArchiveCommand(className, request, dsl.get(0));
+ if (cmd != null) {
+ //Found an archiver to use
+ new Thread(new Runnable() {
+ public void run() {
+ int total = dsl.size();
+ int successes = 0;
+ int failures = 0;
+ for (DatasetVersion dv : dsl) {
+ if (limit != null && (successes + failures) >= limit) {
+ break;
+ }
+ if (!latestonly || dv.equals(dv.getDataset().getLatestVersionForCopy())) {
+ try {
+ AbstractSubmitToArchiveCommand cmd = ArchiverUtil.createSubmitToArchiveCommand(className, request, dv);
+
+ dv = commandEngine.submit(cmd);
+ if (!dv.getArchivalCopyLocationStatus().equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)) {
+ successes++;
+ logger.info("DatasetVersion id=" + dv.getDataset().getGlobalId().toString() + " v" + dv.getFriendlyVersionNumber() + " submitted to Archive, status: "
+ + dv.getArchivalCopyLocationStatus());
+ } else {
+ failures++;
+ logger.severe("Error submitting version due to conflict/error at Archive for " + dv.getDataset().getGlobalId().toString() + " v" + dv.getFriendlyVersionNumber());
+ }
+ } catch (CommandException ex) {
+ failures++;
+ logger.log(Level.SEVERE, "Unexpected Exception calling submit archive command", ex);
+ }
+ }
+ logger.fine(successes + failures + " of " + total + " archive submissions complete");
+ }
+ logger.info("Archiving complete: " + successes + " Successes, " + failures + " Failures. See prior log messages for details.");
+ }
+ }).start();
+ return ok("Starting to archive all unarchived published dataset versions using " + cmd.getClass().getCanonicalName() + ". Processing can take significant time for large datasets/ large numbers of dataset versions and requires that the user have permission to publish the dataset(s). View log and/or check archive for results.");
} else {
logger.log(Level.SEVERE, "Could not find Archiver class: " + className);
return error(Status.INTERNAL_SERVER_ERROR, "Could not find Archiver class: " + className);
}
} else {
- return error(Status.BAD_REQUEST, "Version already archived at: " + dv.getArchivalCopyLocation());
+ return error(Status.BAD_REQUEST, "No unarchived published dataset versions found");
}
} catch (WrappedResponse e1) {
return error(Status.UNAUTHORIZED, "api key required");
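
The renamed submitDatasetVersionToArchive endpoint and the new archiveAllUnarchivedDatasetVersions endpoint are both POST now. A hedged sketch of invoking the bulk endpoint in list-only mode follows; the base URL and API token are placeholders, and the /api/admin prefix and X-Dataverse-key header are assumed from the usual API conventions: ::

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ListUnarchivedVersionsSketch {
        public static void main(String[] args) throws Exception {
            String baseUrl = "http://localhost:8080";    // hypothetical installation
            String apiToken = "xxxxxxxx-xxxx-xxxx-xxxx"; // hypothetical superuser token
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl
                            + "/api/admin/archiveAllUnarchivedDatasetVersions?listonly=true&latestonly=true"))
                    .header("X-Dataverse-key", apiToken)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // With listonly=true nothing is archived; the response lists "<globalId>, v<version>" entries
            System.out.println(response.body());
        }
    }
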
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/ApiBlockingFilter.java b/src/main/java/edu/harvard/iq/dataverse/api/ApiBlockingFilter.java
index 6f7a1d876a1..6bf852d25f7 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/ApiBlockingFilter.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/ApiBlockingFilter.java
@@ -163,7 +163,8 @@ public void doFilter(ServletRequest sr, ServletResponse sr1, FilterChain fc) thr
if (settingsSvc.isTrueForKey(SettingsServiceBean.Key.AllowCors, true )) {
((HttpServletResponse) sr1).addHeader("Access-Control-Allow-Origin", "*");
((HttpServletResponse) sr1).addHeader("Access-Control-Allow-Methods", "PUT, GET, POST, DELETE, OPTIONS");
- ((HttpServletResponse) sr1).addHeader("Access-Control-Allow-Headers", "Accept, Content-Type, X-Dataverse-Key");
+ ((HttpServletResponse) sr1).addHeader("Access-Control-Allow-Headers", "Accept, Content-Type, X-Dataverse-Key, Range");
+ ((HttpServletResponse) sr1).addHeader("Access-Control-Expose-Headers", "Accept-Ranges, Content-Range, Content-Encoding");
}
fc.doFilter(sr, sr1);
} catch ( ServletException se ) {
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Datasets.java b/src/main/java/edu/harvard/iq/dataverse/api/Datasets.java
index 153d3f266b1..aff543e643c 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Datasets.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Datasets.java
@@ -7,6 +7,7 @@
import edu.harvard.iq.dataverse.authorization.DataverseRole;
import edu.harvard.iq.dataverse.authorization.Permission;
import edu.harvard.iq.dataverse.authorization.RoleAssignee;
+import edu.harvard.iq.dataverse.authorization.users.ApiToken;
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.authorization.users.User;
import edu.harvard.iq.dataverse.batch.jobs.importer.ImportMode;
@@ -59,6 +60,7 @@
import edu.harvard.iq.dataverse.ingest.IngestServiceBean;
import edu.harvard.iq.dataverse.privateurl.PrivateUrl;
+import edu.harvard.iq.dataverse.S3PackageImporter;
import edu.harvard.iq.dataverse.api.dto.RoleAssignmentDTO;
import edu.harvard.iq.dataverse.batch.util.LoggingUtil;
import edu.harvard.iq.dataverse.dataaccess.DataAccess;
@@ -82,11 +84,13 @@
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.EjbUtil;
import edu.harvard.iq.dataverse.util.FileUtil;
+import edu.harvard.iq.dataverse.util.MarkupChecker;
import edu.harvard.iq.dataverse.util.SystemConfig;
import edu.harvard.iq.dataverse.util.bagit.OREMap;
import edu.harvard.iq.dataverse.util.json.JSONLDUtil;
import edu.harvard.iq.dataverse.util.json.JsonLDTerm;
import edu.harvard.iq.dataverse.util.json.JsonParseException;
+import edu.harvard.iq.dataverse.util.json.JsonUtil;
import edu.harvard.iq.dataverse.search.IndexServiceBean;
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.*;
import static edu.harvard.iq.dataverse.util.json.NullSafeJsonBuilder.jsonObjectBuilder;
@@ -96,6 +100,8 @@
import edu.harvard.iq.dataverse.workflow.WorkflowServiceBean;
import edu.harvard.iq.dataverse.workflow.WorkflowContext.TriggerType;
+import edu.harvard.iq.dataverse.globus.GlobusServiceBean;
+
import java.io.IOException;
import java.io.InputStream;
import java.io.StringReader;
@@ -105,9 +111,10 @@
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.LocalDateTime;
+import java.util.*;
+import java.util.concurrent.*;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
-import java.util.*;
import java.util.Map.Entry;
import java.util.logging.Level;
import java.util.logging.Logger;
@@ -115,7 +122,6 @@
import javax.ejb.EJB;
import javax.ejb.EJBException;
-import javax.faces.context.FacesContext;
import javax.inject.Inject;
import javax.json.*;
import javax.json.stream.JsonParsingException;
@@ -133,10 +139,7 @@
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.HttpHeaders;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.Response;
+import javax.ws.rs.core.*;
import javax.ws.rs.core.Response.Status;
import static javax.ws.rs.core.Response.Status.BAD_REQUEST;
import javax.ws.rs.core.UriInfo;
@@ -161,6 +164,9 @@ public class Datasets extends AbstractApiBean {
@EJB
DataverseServiceBean dataverseService;
+ @EJB
+ GlobusServiceBean globusService;
+
@EJB
UserNotificationServiceBean userNotificationService;
@@ -216,6 +222,9 @@ public class Datasets extends AbstractApiBean {
@Inject
DataverseRoleServiceBean dataverseRoleService;
+ @EJB
+ DatasetVersionServiceBean datasetversionService;
+
/**
* Used to consolidate the way we parse and handle dataset versions.
* @param
@@ -425,7 +434,7 @@ public Response setCitationDate( @PathParam("id") String id, String dsfTypeName)
execCommand(new SetDatasetCitationDateCommand(req, findDatasetOrDie(id), dsfType));
return ok("Citation Date for dataset " + id + " set to: " + (dsfType != null ? dsfType.getDisplayName() : "default"));
});
- }
+ }
@DELETE
@Path("{id}/citationdate")
@@ -434,7 +443,7 @@ public Response useDefaultCitationDate( @PathParam("id") String id) {
execCommand(new SetDatasetCitationDateCommand(req, findDatasetOrDie(id), null));
return ok("Citation Date for dataset " + id + " set to default");
});
- }
+ }
@GET
@Path("{id}/versions")
@@ -450,9 +459,9 @@ public Response listVersions( @PathParam("id") String id ) {
@Path("{id}/versions/{versionId}")
public Response getVersion( @PathParam("id") String datasetId, @PathParam("versionId") String versionId, @Context UriInfo uriInfo, @Context HttpHeaders headers) {
return response( req -> {
- DatasetVersion dsv = getDatasetVersionOrDie(req, versionId, findDatasetOrDie(datasetId), uriInfo, headers);
+ DatasetVersion dsv = getDatasetVersionOrDie(req, versionId, findDatasetOrDie(datasetId), uriInfo, headers);
return (dsv == null || dsv.getId() == null) ? notFound("Dataset version not found")
- : ok(json(dsv));
+ : ok(json(dsv));
});
}
@@ -469,9 +478,9 @@ public Response getVersionFiles( @PathParam("id") String datasetId, @PathParam("
public Response getFileAccessFolderView(@PathParam("id") String datasetId, @QueryParam("version") String versionId, @QueryParam("folder") String folderName, @QueryParam("original") Boolean originals, @Context UriInfo uriInfo, @Context HttpHeaders headers, @Context HttpServletResponse response) {
folderName = folderName == null ? "" : folderName;
- versionId = versionId == null ? ":latest-published" : versionId;
+ versionId = versionId == null ? ":latest-published" : versionId;
- DatasetVersion version;
+ DatasetVersion version;
try {
DataverseRequest req = createDataverseRequest(findUserOrDie());
version = getDatasetVersionOrDie(req, versionId, findDatasetOrDie(datasetId), uriInfo, headers);
@@ -583,7 +592,7 @@ public Response updateDatasetPIDMetadataAll() {
} catch (WrappedResponse ex) {
Logger.getLogger(Datasets.class.getName()).log(Level.SEVERE, null, ex);
}
- });
+ });
return ok(BundleUtil.getStringFromBundle("datasets.api.updatePIDMetadata.success.for.update.all"));
});
}
@@ -592,7 +601,7 @@ public Response updateDatasetPIDMetadataAll() {
@Path("{id}/versions/{versionId}")
@Consumes(MediaType.APPLICATION_JSON)
public Response updateDraftVersion( String jsonBody, @PathParam("id") String id, @PathParam("versionId") String versionId ){
-
+
if ( ! ":draft".equals(versionId) ) {
return error( Response.Status.BAD_REQUEST, "Only the :draft version can be updated");
}
@@ -620,14 +629,22 @@ public Response updateDraftVersion( String jsonBody, @PathParam("id") String id,
boolean updateDraft = ds.getLatestVersion().isDraft();
DatasetVersion managedVersion;
- if ( updateDraft ) {
+ if (updateDraft) {
final DatasetVersion editVersion = ds.getEditVersion();
editVersion.setDatasetFields(incomingVersion.getDatasetFields());
- editVersion.setTermsOfUseAndAccess( incomingVersion.getTermsOfUseAndAccess() );
+ editVersion.setTermsOfUseAndAccess(incomingVersion.getTermsOfUseAndAccess());
editVersion.getTermsOfUseAndAccess().setDatasetVersion(editVersion);
+ boolean hasValidTerms = TermsOfUseAndAccessValidator.isTOUAValid(editVersion.getTermsOfUseAndAccess(), null);
+ if (!hasValidTerms) {
+ return error(Status.CONFLICT, BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
Dataset managedDataset = execCommand(new UpdateDatasetVersionCommand(ds, req));
managedVersion = managedDataset.getEditVersion();
} else {
+ boolean hasValidTerms = TermsOfUseAndAccessValidator.isTOUAValid(incomingVersion.getTermsOfUseAndAccess(), null);
+ if (!hasValidTerms) {
+ return error(Status.CONFLICT, BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
managedVersion = execCommand(new CreateDatasetVersionCommand(req, ds, incomingVersion));
}
// DatasetVersion managedVersion = execCommand( updateDraft
@@ -685,6 +702,10 @@ public Response updateVersionMetadata(String jsonLDBody, @PathParam("id") String
boolean updateDraft = ds.getLatestVersion().isDraft();
dsv = JSONLDUtil.updateDatasetVersionMDFromJsonLD(dsv, jsonLDBody, metadataBlockService, datasetFieldSvc, !replaceTerms, false, licenseSvc);
dsv.getTermsOfUseAndAccess().setDatasetVersion(dsv);
+ boolean hasValidTerms = TermsOfUseAndAccessValidator.isTOUAValid(dsv.getTermsOfUseAndAccess(), null);
+ if (!hasValidTerms) {
+ return error(Status.CONFLICT, BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
DatasetVersion managedVersion;
if (updateDraft) {
Dataset managedDataset = execCommand(new UpdateDatasetVersionCommand(ds, req));
@@ -771,7 +792,7 @@ private Response processDatasetFieldDataDelete(String jsonBody, String id, Datav
boolean found = false;
for (DatasetField dsf : dsv.getDatasetFields()) {
if (dsf.getDatasetFieldType().equals(updateField.getDatasetFieldType())) {
- if (dsf.getDatasetFieldType().isAllowMultiples()) {
+ if (dsf.getDatasetFieldType().isAllowMultiples()) {
if (updateField.getDatasetFieldType().isControlledVocabulary()) {
if (dsf.getDatasetFieldType().isAllowMultiples()) {
for (ControlledVocabularyValue cvv : updateField.getControlledVocabularyValues()) {
@@ -836,7 +857,7 @@ private Response processDatasetFieldDataDelete(String jsonBody, String id, Datav
datasetFieldCompoundValueItemsToRemove.forEach((remove) -> {
dsf.getDatasetFieldCompoundValues().remove(remove);
});
- if (!found) {
+ if (!found) {
logger.log(Level.SEVERE, "Delete metadata failed: " + updateField.getDatasetFieldType().getDisplayName() + ": " + deleteVal + " not found.");
return error(Response.Status.BAD_REQUEST, "Delete metadata failed: " + updateField.getDatasetFieldType().getDisplayName() + ": " + deleteVal + " not found.");
}
@@ -856,12 +877,11 @@ private Response processDatasetFieldDataDelete(String jsonBody, String id, Datav
logger.log(Level.SEVERE, "Delete metadata failed: " + updateField.getDatasetFieldType().getDisplayName() + ": " + displayValue + " not found." );
return error(Response.Status.BAD_REQUEST, "Delete metadata failed: " + updateField.getDatasetFieldType().getDisplayName() + ": " + displayValue + " not found." );
}
- }
+ }
-
boolean updateDraft = ds.getLatestVersion().isDraft();
- DatasetVersion managedVersion = updateDraft
+ DatasetVersion managedVersion = updateDraft
? execCommand(new UpdateDatasetVersionCommand(ds, req)).getEditVersion()
: execCommand(new CreateDatasetVersionCommand(req, ds, dsv));
return ok(json(managedVersion));
@@ -880,13 +900,13 @@ private Response processDatasetFieldDataDelete(String jsonBody, String id, Datav
private String getCompoundDisplayValue (DatasetFieldCompoundValue dscv){
String returnString = "";
- for (DatasetField dsf : dscv.getChildDatasetFields()) {
- for (String value : dsf.getValues()) {
- if (!(value == null)) {
- returnString += (returnString.isEmpty() ? "" : "; ") + value.trim();
- }
+ for (DatasetField dsf : dscv.getChildDatasetFields()) {
+ for (String value : dsf.getValues()) {
+ if (!(value == null)) {
+ returnString += (returnString.isEmpty() ? "" : "; ") + value.trim();
}
}
+ }
return returnString;
}
@@ -915,13 +935,13 @@ private Response processDatasetUpdate(String jsonBody, String id, DataverseReque
DatasetVersion dsv = ds.getEditVersion();
dsv.getTermsOfUseAndAccess().setDatasetVersion(dsv);
List fields = new LinkedList<>();
- DatasetField singleField = null;
+ DatasetField singleField = null;
JsonArray fieldsJson = json.getJsonArray("fields");
- if( fieldsJson == null ){
- singleField = jsonParser().parseField(json, Boolean.FALSE);
+ if (fieldsJson == null) {
+ singleField = jsonParser().parseField(json, Boolean.FALSE);
fields.add(singleField);
- } else{
+ } else {
fields = jsonParser().parseMultipleFields(json);
}
@@ -1082,18 +1102,24 @@ public Response publishDataset(@PathParam("id") String id, @QueryParam("type") S
case "major":
isMinor = false;
break;
- case "updatecurrent":
- if(user.isSuperuser()) {
- updateCurrent=true;
- } else {
- return error(Response.Status.FORBIDDEN, "Only superusers can update the current version");
- }
- break;
+ case "updatecurrent":
+ if (user.isSuperuser()) {
+ updateCurrent = true;
+ } else {
+ return error(Response.Status.FORBIDDEN, "Only superusers can update the current version");
+ }
+ break;
default:
- return error(Response.Status.BAD_REQUEST, "Illegal 'type' parameter value '" + type + "'. It needs to be either 'major', 'minor', or 'updatecurrent'.");
+ return error(Response.Status.BAD_REQUEST, "Illegal 'type' parameter value '" + type + "'. It needs to be either 'major', 'minor', or 'updatecurrent'.");
}
Dataset ds = findDatasetOrDie(id);
+
+ boolean hasValidTerms = TermsOfUseAndAccessValidator.isTOUAValid(ds.getLatestVersion().getTermsOfUseAndAccess(), null);
+ if (!hasValidTerms) {
+ return error(Status.CONFLICT, BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
+
if (mustBeIndexed) {
logger.fine("IT: " + ds.getIndexTime());
logger.fine("MT: " + ds.getModificationTime());
@@ -1110,7 +1136,7 @@ public Response publishDataset(@PathParam("id") String id, @QueryParam("type") S
* set and if so, if it after the modification time. If the modification time is
* set and the index time is null or is before the mod time, the 409/conflict
* error is returned.
- *
+ *
*/
if ((ds.getModificationTime()!=null && (ds.getIndexTime() == null || (ds.getIndexTime().compareTo(ds.getModificationTime()) <= 0))) ||
(ds.getPermissionModificationTime()!=null && (ds.getPermissionIndexTime() == null || (ds.getPermissionIndexTime().compareTo(ds.getPermissionModificationTime()) <= 0)))) {
@@ -1149,7 +1175,7 @@ public Response publishDataset(@PathParam("id") String id, @QueryParam("type") S
*/
try {
updateVersion = commandEngine.submit(archiveCommand);
- if (updateVersion.getArchivalCopyLocation() != null) {
+ if (!updateVersion.getArchivalCopyLocationStatus().equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)) {
successMsg = BundleUtil.getStringFromBundle("datasetversion.update.archive.success");
} else {
successMsg = BundleUtil.getStringFromBundle("datasetversion.update.archive.failure");
@@ -1174,10 +1200,10 @@ public Response publishDataset(@PathParam("id") String id, @QueryParam("type") S
.build();
}
} else {
- PublishDatasetResult res = execCommand(new PublishDatasetCommand(ds,
+ PublishDatasetResult res = execCommand(new PublishDatasetCommand(ds,
createDataverseRequest(user),
- isMinor));
- return res.isWorkflow() ? accepted(json(res.getDataset())) : ok(json(res.getDataset()));
+ isMinor));
+ return res.isWorkflow() ? accepted(json(res.getDataset())) : ok(json(res.getDataset()));
}
} catch (WrappedResponse ex) {
return ex.getResponse();
@@ -1278,7 +1304,7 @@ public Response publishMigratedDataset(String jsonldBody, @PathParam("id") Strin
@Path("{id}/move/{targetDataverseAlias}")
public Response moveDataset(@PathParam("id") String id, @PathParam("targetDataverseAlias") String targetDataverseAlias, @QueryParam("forceMove") Boolean force) {
try {
- User u = findUserOrDie();
+ User u = findUserOrDie();
Dataset ds = findDatasetOrDie(id);
Dataverse target = dataverseService.findByAlias(targetDataverseAlias);
if (target == null) {
@@ -1316,6 +1342,12 @@ public Response createFileEmbargo(@PathParam("id") String id, String jsonBody){
} catch (WrappedResponse ex) {
return ex.getResponse();
}
+
+ boolean hasValidTerms = TermsOfUseAndAccessValidator.isTOUAValid(dataset.getLatestVersion().getTermsOfUseAndAccess(), null);
+
+ if (!hasValidTerms){
+ return error(Status.CONFLICT, BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
// client is superadmin or (client has EditDataset permission on these files and files are unreleased)
/*
@@ -1560,21 +1592,21 @@ public Response removeFileEmbargo(@PathParam("id") String id, String jsonBody){
@PUT
- @Path("{linkedDatasetId}/link/{linkingDataverseAlias}")
- public Response linkDataset(@PathParam("linkedDatasetId") String linkedDatasetId, @PathParam("linkingDataverseAlias") String linkingDataverseAlias) {
- try{
- User u = findUserOrDie();
+ @Path("{linkedDatasetId}/link/{linkingDataverseAlias}")
+ public Response linkDataset(@PathParam("linkedDatasetId") String linkedDatasetId, @PathParam("linkingDataverseAlias") String linkingDataverseAlias) {
+ try {
+ User u = findUserOrDie();
Dataset linked = findDatasetOrDie(linkedDatasetId);
Dataverse linking = findDataverseOrDie(linkingDataverseAlias);
if (linked == null){
return error(Response.Status.BAD_REQUEST, "Linked Dataset not found.");
- }
- if (linking == null){
+ }
+ if (linking == null) {
return error(Response.Status.BAD_REQUEST, "Linking Dataverse not found.");
- }
+ }
execCommand(new LinkDatasetCommand(
createDataverseRequest(u), linking, linked
- ));
+ ));
return ok("Dataset " + linked.getId() + " linked successfully to " + linking.getAlias());
} catch (WrappedResponse ex) {
return ex.getResponse();
@@ -1588,8 +1620,7 @@ public Response getCustomTermsTab(@PathParam("id") String id, @PathParam("versio
User user = session.getUser();
String persistentId;
try {
- if (getDatasetVersionOrDie(createDataverseRequest(user), versionId, findDatasetOrDie(id), uriInfo, headers)
- .getTermsOfUseAndAccess().getLicense() != null) {
+ if (DatasetUtil.getLicense(getDatasetVersionOrDie(createDataverseRequest(user), versionId, findDatasetOrDie(id), uriInfo, headers)) != null) {
return error(Status.NOT_FOUND, "This Dataset has no custom license");
}
persistentId = getRequestParameter(":persistentId".substring(1));
@@ -1630,8 +1661,8 @@ public Response getLinks(@PathParam("id") String idSupplied ) {
/**
* Add a given assignment to a given user or group
- * @param ra role assignment DTO
- * @param id dataset id
+ * @param ra role assignment DTO
+ * @param id dataset id
* @param apiKey
*/
@POST
@@ -1643,7 +1674,7 @@ public Response createAssignment(RoleAssignmentDTO ra, @PathParam("identifier")
RoleAssignee assignee = findAssignee(ra.getAssignee());
if (assignee == null) {
return error(Response.Status.BAD_REQUEST, BundleUtil.getStringFromBundle("datasets.api.grant.role.assignee.not.found.error"));
- }
+ }
DataverseRole theRole;
Dataverse dv = dataset.getOwner();
@@ -1695,10 +1726,10 @@ public Response deleteAssignment(@PathParam("id") long assignmentId, @PathParam(
@GET
@Path("{identifier}/assignments")
public Response getAssignments(@PathParam("identifier") String id) {
- return response( req ->
- ok( execCommand(
- new ListRoleAssignments(req, findDatasetOrDie(id)))
- .stream().map(ra->json(ra)).collect(toJsonArray())) );
+ return response(req ->
+ ok(execCommand(
+ new ListRoleAssignments(req, findDatasetOrDie(id)))
+ .stream().map(ra -> json(ra)).collect(toJsonArray())));
}
@GET
@@ -1706,8 +1737,8 @@ public Response getAssignments(@PathParam("identifier") String id) {
public Response getPrivateUrlData(@PathParam("id") String idSupplied) {
return response( req -> {
PrivateUrl privateUrl = execCommand(new GetPrivateUrlCommand(req, findDatasetOrDie(idSupplied)));
- return (privateUrl != null) ? ok(json(privateUrl))
- : error(Response.Status.NOT_FOUND, "Private URL not found.");
+ return (privateUrl != null) ? ok(json(privateUrl))
+ : error(Response.Status.NOT_FOUND, "Private URL not found.");
});
}
@@ -1717,7 +1748,7 @@ public Response createPrivateUrl(@PathParam("id") String idSupplied,@DefaultValu
if(anonymizedAccess && settingsSvc.getValueForKey(SettingsServiceBean.Key.AnonymizedFieldTypeNames)==null) {
throw new NotAcceptableException("Anonymized Access not enabled");
}
- return response( req ->
+ return response(req ->
ok(json(execCommand(
new CreatePrivateUrlCommand(req, findDatasetOrDie(idSupplied), anonymizedAccess)))));
}
@@ -1851,13 +1882,13 @@ public Response getRsync(@PathParam("identifier") String id) {
}
/**
- * This api endpoint triggers the creation of a "package" file in a dataset
- * after that package has been moved onto the same filesystem via the Data Capture Module.
+ * This api endpoint triggers the creation of a "package" file in a dataset
+ * after that package has been moved onto the same filesystem via the Data Capture Module.
* The package is really just a way that Dataverse interprets a folder created by DCM, seeing it as just one file.
* The "package" can be downloaded over RSAL.
- *
+ *
* This endpoint currently supports both posix file storage and AWS s3 storage in Dataverse, and depending on which one is active acts accordingly.
- *
+ *
* The initial design of the DCM/Dataverse interaction was not to use packages, but to allow import of all individual files natively into Dataverse.
* But due to the possibly immense number of files (millions) the package approach was taken.
* This is relevant because the posix ("file") code contains many remnants of that development work.
@@ -1881,7 +1912,7 @@ public Response receiveChecksumValidationResults(@PathParam("identifier") String
try {
Dataset dataset = findDatasetOrDie(id);
if ("validation passed".equals(statusMessageFromDcm)) {
- logger.log(Level.INFO, "Checksum Validation passed for DCM.");
+ logger.log(Level.INFO, "Checksum Validation passed for DCM.");
String storageDriver = dataset.getDataverseContext().getEffectiveStorageDriverId();
String uploadFolder = jsonFromDcm.getString("uploadFolder");
@@ -1904,7 +1935,7 @@ public Response receiveChecksumValidationResults(@PathParam("identifier") String
String message = wr.getMessage();
return error(Response.Status.INTERNAL_SERVER_ERROR, "Uploaded files have passed checksum validation but something went wrong while attempting to put the files into Dataverse. Message was '" + message + "'.");
}
- } else if(storageDriverType.equals("s3")) {
+ } else if(storageDriverType.equals(DataAccess.S3)) {
logger.log(Level.INFO, "S3 storage driver used for DCM (dataset id={0})", dataset.getId());
try {
@@ -1943,10 +1974,10 @@ public Response receiveChecksumValidationResults(@PathParam("identifier") String
JsonObjectBuilder job = Json.createObjectBuilder();
return ok(job);
- } catch (IOException e) {
+ } catch (IOException e) {
String message = e.getMessage();
return error(Response.Status.INTERNAL_SERVER_ERROR, "Uploaded files have passed checksum validation but something went wrong while attempting to move the files into Dataverse. Message was '" + message + "'.");
- }
+ }
} else {
return error(Response.Status.INTERNAL_SERVER_ERROR, "Invalid storage driver in Dataverse, not compatible with dcm");
}
@@ -1999,7 +2030,7 @@ public Response returnToAuthor(@PathParam("id") String idSupplied, String jsonBo
JsonObject json = Json.createReader(rdr).readObject();
try {
Dataset dataset = findDatasetOrDie(idSupplied);
- String reasonForReturn = null;
+ String reasonForReturn = null;
reasonForReturn = json.getString("reasonForReturn");
// TODO: Once we add a box for the curator to type into, pass the reason for return to the ReturnDatasetToAuthorCommand and delete this check and call to setReturnReason on the API side.
if (reasonForReturn == null || reasonForReturn.isEmpty()) {
@@ -2052,7 +2083,7 @@ public Response setCurationStatus(@PathParam("id") String idSupplied, @QueryPara
return Response.fromResponse(wr.getResponse()).status(Response.Status.BAD_REQUEST).build();
}
}
-
+
@DELETE
@Path("{id}/curationStatus")
public Response deleteCurationStatus(@PathParam("id") String idSupplied) {
@@ -2072,228 +2103,228 @@ public Response deleteCurationStatus(@PathParam("id") String idSupplied) {
return Response.fromResponse(wr.getResponse()).status(Response.Status.BAD_REQUEST).build();
}
}
-
-@GET
-@Path("{id}/uploadsid")
-@Deprecated
-public Response getUploadUrl(@PathParam("id") String idSupplied) {
- try {
- Dataset dataset = findDatasetOrDie(idSupplied);
-
- boolean canUpdateDataset = false;
- try {
- canUpdateDataset = permissionSvc.requestOn(createDataverseRequest(findUserOrDie()), dataset).canIssue(UpdateDatasetVersionCommand.class);
- } catch (WrappedResponse ex) {
- logger.info("Exception thrown while trying to figure out permissions while getting upload URL for dataset id " + dataset.getId() + ": " + ex.getLocalizedMessage());
- throw ex;
- }
- if (!canUpdateDataset) {
- return error(Response.Status.FORBIDDEN, "You are not permitted to upload files to this dataset.");
- }
- S3AccessIO<DataFile> s3io = FileUtil.getS3AccessForDirectUpload(dataset);
- if(s3io == null) {
- return error(Response.Status.NOT_FOUND,"Direct upload not supported for files in this dataset: " + dataset.getId());
- }
- String url = null;
- String storageIdentifier = null;
- try {
- url = s3io.generateTemporaryS3UploadUrl();
- storageIdentifier = FileUtil.getStorageIdentifierFromLocation(s3io.getStorageLocation());
- } catch (IOException io) {
- logger.warning(io.getMessage());
- throw new WrappedResponse(io, error( Response.Status.INTERNAL_SERVER_ERROR, "Could not create process direct upload request"));
- }
-
- JsonObjectBuilder response = Json.createObjectBuilder()
- .add("url", url)
- .add("storageIdentifier", storageIdentifier );
- return ok(response);
- } catch (WrappedResponse wr) {
- return wr.getResponse();
- }
-}
-@GET
-@Path("{id}/uploadurls")
-public Response getMPUploadUrls(@PathParam("id") String idSupplied, @QueryParam("size") long fileSize) {
- try {
- Dataset dataset = findDatasetOrDie(idSupplied);
-
- boolean canUpdateDataset = false;
- try {
- canUpdateDataset = permissionSvc.requestOn(createDataverseRequest(findUserOrDie()), dataset)
- .canIssue(UpdateDatasetVersionCommand.class);
- } catch (WrappedResponse ex) {
- logger.info(
- "Exception thrown while trying to figure out permissions while getting upload URLs for dataset id "
- + dataset.getId() + ": " + ex.getLocalizedMessage());
- throw ex;
- }
- if (!canUpdateDataset) {
- return error(Response.Status.FORBIDDEN, "You are not permitted to upload files to this dataset.");
- }
- S3AccessIO s3io = FileUtil.getS3AccessForDirectUpload(dataset);
- if (s3io == null) {
- return error(Response.Status.NOT_FOUND,
- "Direct upload not supported for files in this dataset: " + dataset.getId());
- }
- JsonObjectBuilder response = null;
- String storageIdentifier = null;
- try {
- storageIdentifier = FileUtil.getStorageIdentifierFromLocation(s3io.getStorageLocation());
- response = s3io.generateTemporaryS3UploadUrls(dataset.getGlobalId().asString(), storageIdentifier, fileSize);
-
- } catch (IOException io) {
- logger.warning(io.getMessage());
- throw new WrappedResponse(io,
- error(Response.Status.INTERNAL_SERVER_ERROR, "Could not create process direct upload request"));
- }
-
- response.add("storageIdentifier", storageIdentifier);
- return ok(response);
- } catch (WrappedResponse wr) {
- return wr.getResponse();
- }
-}
+ @GET
+ @Path("{id}/uploadsid")
+ @Deprecated
+ public Response getUploadUrl(@PathParam("id") String idSupplied) {
+ try {
+ Dataset dataset = findDatasetOrDie(idSupplied);
-@DELETE
-@Path("mpupload")
-public Response abortMPUpload(@QueryParam("globalid") String idSupplied, @QueryParam("storageidentifier") String storageidentifier, @QueryParam("uploadid") String uploadId) {
- try {
- Dataset dataset = datasetSvc.findByGlobalId(idSupplied);
- //Allow the API to be used within a session (e.g. for direct upload in the UI)
- User user =session.getUser();
- if (!user.isAuthenticated()) {
- try {
- user = findAuthenticatedUserOrDie();
- } catch (WrappedResponse ex) {
- logger.info(
- "Exception thrown while trying to figure out permissions while getting aborting upload for dataset id "
- + dataset.getId() + ": " + ex.getLocalizedMessage());
- throw ex;
- }
- }
- boolean allowed = false;
- if (dataset != null) {
- allowed = permissionSvc.requestOn(createDataverseRequest(user), dataset)
- .canIssue(UpdateDatasetVersionCommand.class);
- } else {
- /*
- * The only legitimate case where a global id won't correspond to a dataset is
- * for uploads during creation. Given that this call will still fail unless all
- * three parameters correspond to an active multipart upload, it should be safe
- * to allow the attempt for an authenticated user. If there are concerns about
- * permissions, one could check with the current design that the user is allowed
- * to create datasets in some dataverse that is configured to use the storage
- * provider specified in the storageidentifier, but testing for the ability to
- * create a dataset in a specific dataverse would require changing the design
- * somehow (e.g. adding the ownerId to this call).
- */
- allowed = true;
- }
- if (!allowed) {
- return error(Response.Status.FORBIDDEN,
- "You are not permitted to abort file uploads with the supplied parameters.");
- }
- try {
- S3AccessIO.abortMultipartUpload(idSupplied, storageidentifier, uploadId);
- } catch (IOException io) {
- logger.warning("Multipart upload abort failed for uploadId: " + uploadId + " storageidentifier="
- + storageidentifier + " dataset Id: " + dataset.getId());
- logger.warning(io.getMessage());
- throw new WrappedResponse(io,
- error(Response.Status.INTERNAL_SERVER_ERROR, "Could not abort multipart upload"));
- }
- return Response.noContent().build();
- } catch (WrappedResponse wr) {
- return wr.getResponse();
- }
-}
+ boolean canUpdateDataset = false;
+ try {
+ canUpdateDataset = permissionSvc.requestOn(createDataverseRequest(findUserOrDie()), dataset).canIssue(UpdateDatasetVersionCommand.class);
+ } catch (WrappedResponse ex) {
+ logger.info("Exception thrown while trying to figure out permissions while getting upload URL for dataset id " + dataset.getId() + ": " + ex.getLocalizedMessage());
+ throw ex;
+ }
+ if (!canUpdateDataset) {
+ return error(Response.Status.FORBIDDEN, "You are not permitted to upload files to this dataset.");
+ }
+ S3AccessIO<DataFile> s3io = FileUtil.getS3AccessForDirectUpload(dataset);
+ if (s3io == null) {
+ return error(Response.Status.NOT_FOUND, "Direct upload not supported for files in this dataset: " + dataset.getId());
+ }
+ String url = null;
+ String storageIdentifier = null;
+ try {
+ url = s3io.generateTemporaryS3UploadUrl();
+ storageIdentifier = FileUtil.getStorageIdentifierFromLocation(s3io.getStorageLocation());
+ } catch (IOException io) {
+ logger.warning(io.getMessage());
+ throw new WrappedResponse(io, error(Response.Status.INTERNAL_SERVER_ERROR, "Could not create process direct upload request"));
+ }
-@PUT
-@Path("mpupload")
-public Response completeMPUpload(String partETagBody, @QueryParam("globalid") String idSupplied, @QueryParam("storageidentifier") String storageidentifier, @QueryParam("uploadid") String uploadId) {
- try {
- Dataset dataset = datasetSvc.findByGlobalId(idSupplied);
- //Allow the API to be used within a session (e.g. for direct upload in the UI)
- User user =session.getUser();
- if (!user.isAuthenticated()) {
- try {
- user=findAuthenticatedUserOrDie();
- } catch (WrappedResponse ex) {
- logger.info(
- "Exception thrown while trying to figure out permissions to complete mpupload for dataset id "
- + dataset.getId() + ": " + ex.getLocalizedMessage());
- throw ex;
- }
- }
- boolean allowed = false;
- if (dataset != null) {
- allowed = permissionSvc.requestOn(createDataverseRequest(user), dataset)
- .canIssue(UpdateDatasetVersionCommand.class);
- } else {
- /*
- * The only legitimate case where a global id won't correspond to a dataset is
- * for uploads during creation. Given that this call will still fail unless all
- * three parameters correspond to an active multipart upload, it should be safe
- * to allow the attempt for an authenticated user. If there are concerns about
- * permissions, one could check with the current design that the user is allowed
- * to create datasets in some dataverse that is configured to use the storage
- * provider specified in the storageidentifier, but testing for the ability to
- * create a dataset in a specific dataverse would require changing the design
- * somehow (e.g. adding the ownerId to this call).
- */
- allowed = true;
- }
- if (!allowed) {
- return error(Response.Status.FORBIDDEN,
- "You are not permitted to complete file uploads with the supplied parameters.");
- }
- List<PartETag> eTagList = new ArrayList<PartETag>();
- logger.info("Etags: " + partETagBody);
- try {
- JsonReader jsonReader = Json.createReader(new StringReader(partETagBody));
- JsonObject object = jsonReader.readObject();
- jsonReader.close();
- for(String partNo : object.keySet()) {
- eTagList.add(new PartETag(Integer.parseInt(partNo), object.getString(partNo)));
- }
- for(PartETag et: eTagList) {
- logger.info("Part: " + et.getPartNumber() + " : " + et.getETag());
- }
- } catch (JsonException je) {
- logger.info("Unable to parse eTags from: " + partETagBody);
- throw new WrappedResponse(je, error( Response.Status.INTERNAL_SERVER_ERROR, "Could not complete multipart upload"));
- }
- try {
- S3AccessIO.completeMultipartUpload(idSupplied, storageidentifier, uploadId, eTagList);
- } catch (IOException io) {
- logger.warning("Multipart upload completion failed for uploadId: " + uploadId +" storageidentifier=" + storageidentifier + " globalId: " + idSupplied);
- logger.warning(io.getMessage());
- try {
- S3AccessIO.abortMultipartUpload(idSupplied, storageidentifier, uploadId);
- } catch (IOException e) {
- logger.severe("Also unable to abort the upload (and release the space on S3 for uploadId: " + uploadId +" storageidentifier=" + storageidentifier + " globalId: " + idSupplied);
- logger.severe(io.getMessage());
- }
-
- throw new WrappedResponse(io, error( Response.Status.INTERNAL_SERVER_ERROR, "Could not complete multipart upload"));
- }
- return ok("Multipart Upload completed");
- } catch (WrappedResponse wr) {
- return wr.getResponse();
- }
-}
+ JsonObjectBuilder response = Json.createObjectBuilder()
+ .add("url", url)
+ .add("storageIdentifier", storageIdentifier);
+ return ok(response);
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
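+    // Returns pre-signed S3 upload URL(s) for direct upload (multipart when the requested size is large enough),
+    // along with the storageIdentifier to reference when the uploaded file is added to the dataset.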
+ @GET
+ @Path("{id}/uploadurls")
+ public Response getMPUploadUrls(@PathParam("id") String idSupplied, @QueryParam("size") long fileSize) {
+ try {
+ Dataset dataset = findDatasetOrDie(idSupplied);
+
+ boolean canUpdateDataset = false;
+ try {
+ canUpdateDataset = permissionSvc.requestOn(createDataverseRequest(findUserOrDie()), dataset)
+ .canIssue(UpdateDatasetVersionCommand.class);
+ } catch (WrappedResponse ex) {
+ logger.info(
+ "Exception thrown while trying to figure out permissions while getting upload URLs for dataset id "
+ + dataset.getId() + ": " + ex.getLocalizedMessage());
+ throw ex;
+ }
+ if (!canUpdateDataset) {
+ return error(Response.Status.FORBIDDEN, "You are not permitted to upload files to this dataset.");
+ }
+ S3AccessIO s3io = FileUtil.getS3AccessForDirectUpload(dataset);
+ if (s3io == null) {
+ return error(Response.Status.NOT_FOUND,
+ "Direct upload not supported for files in this dataset: " + dataset.getId());
+ }
+ JsonObjectBuilder response = null;
+ String storageIdentifier = null;
+ try {
+ storageIdentifier = FileUtil.getStorageIdentifierFromLocation(s3io.getStorageLocation());
+ response = s3io.generateTemporaryS3UploadUrls(dataset.getGlobalId().asString(), storageIdentifier, fileSize);
+
+ } catch (IOException io) {
+ logger.warning(io.getMessage());
+ throw new WrappedResponse(io,
+ error(Response.Status.INTERNAL_SERVER_ERROR, "Could not create process direct upload request"));
+ }
+
+ response.add("storageIdentifier", storageIdentifier);
+ return ok(response);
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
+ @DELETE
+ @Path("mpupload")
+ public Response abortMPUpload(@QueryParam("globalid") String idSupplied, @QueryParam("storageidentifier") String storageidentifier, @QueryParam("uploadid") String uploadId) {
+ try {
+ Dataset dataset = datasetSvc.findByGlobalId(idSupplied);
+ //Allow the API to be used within a session (e.g. for direct upload in the UI)
+ User user = session.getUser();
+ if (!user.isAuthenticated()) {
+ try {
+ user = findAuthenticatedUserOrDie();
+ } catch (WrappedResponse ex) {
+ logger.info(
+ "Exception thrown while trying to figure out permissions while getting aborting upload for dataset id "
+ + dataset.getId() + ": " + ex.getLocalizedMessage());
+ throw ex;
+ }
+ }
+ boolean allowed = false;
+ if (dataset != null) {
+ allowed = permissionSvc.requestOn(createDataverseRequest(user), dataset)
+ .canIssue(UpdateDatasetVersionCommand.class);
+ } else {
+ /*
+ * The only legitimate case where a global id won't correspond to a dataset is
+ * for uploads during creation. Given that this call will still fail unless all
+ * three parameters correspond to an active multipart upload, it should be safe
+ * to allow the attempt for an authenticated user. If there are concerns about
+ * permissions, one could check with the current design that the user is allowed
+ * to create datasets in some dataverse that is configured to use the storage
+ * provider specified in the storageidentifier, but testing for the ability to
+ * create a dataset in a specific dataverse would require changing the design
+ * somehow (e.g. adding the ownerId to this call).
+ */
+ allowed = true;
+ }
+ if (!allowed) {
+ return error(Response.Status.FORBIDDEN,
+ "You are not permitted to abort file uploads with the supplied parameters.");
+ }
+ try {
+ S3AccessIO.abortMultipartUpload(idSupplied, storageidentifier, uploadId);
+ } catch (IOException io) {
+ logger.warning("Multipart upload abort failed for uploadId: " + uploadId + " storageidentifier="
+ + storageidentifier + " dataset Id: " + dataset.getId());
+ logger.warning(io.getMessage());
+ throw new WrappedResponse(io,
+ error(Response.Status.INTERNAL_SERVER_ERROR, "Could not abort multipart upload"));
+ }
+ return Response.noContent().build();
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
+ @PUT
+ @Path("mpupload")
+ public Response completeMPUpload(String partETagBody, @QueryParam("globalid") String idSupplied, @QueryParam("storageidentifier") String storageidentifier, @QueryParam("uploadid") String uploadId) {
+ try {
+ Dataset dataset = datasetSvc.findByGlobalId(idSupplied);
+ //Allow the API to be used within a session (e.g. for direct upload in the UI)
+ User user = session.getUser();
+ if (!user.isAuthenticated()) {
+ try {
+ user = findAuthenticatedUserOrDie();
+ } catch (WrappedResponse ex) {
+ logger.info(
+ "Exception thrown while trying to figure out permissions to complete mpupload for dataset id "
+ + dataset.getId() + ": " + ex.getLocalizedMessage());
+ throw ex;
+ }
+ }
+ boolean allowed = false;
+ if (dataset != null) {
+ allowed = permissionSvc.requestOn(createDataverseRequest(user), dataset)
+ .canIssue(UpdateDatasetVersionCommand.class);
+ } else {
+ /*
+ * The only legitimate case where a global id won't correspond to a dataset is
+ * for uploads during creation. Given that this call will still fail unless all
+ * three parameters correspond to an active multipart upload, it should be safe
+ * to allow the attempt for an authenticated user. If there are concerns about
+ * permissions, one could check with the current design that the user is allowed
+ * to create datasets in some dataverse that is configured to use the storage
+ * provider specified in the storageidentifier, but testing for the ability to
+ * create a dataset in a specific dataverse would require changing the design
+ * somehow (e.g. adding the ownerId to this call).
+ */
+ allowed = true;
+ }
+ if (!allowed) {
+ return error(Response.Status.FORBIDDEN,
+ "You are not permitted to complete file uploads with the supplied parameters.");
+ }
+ List<PartETag> eTagList = new ArrayList<PartETag>();
+ logger.info("Etags: " + partETagBody);
+ try {
+ JsonReader jsonReader = Json.createReader(new StringReader(partETagBody));
+ JsonObject object = jsonReader.readObject();
+ jsonReader.close();
+ for (String partNo : object.keySet()) {
+ eTagList.add(new PartETag(Integer.parseInt(partNo), object.getString(partNo)));
+ }
+ for (PartETag et : eTagList) {
+ logger.info("Part: " + et.getPartNumber() + " : " + et.getETag());
+ }
+ } catch (JsonException je) {
+ logger.info("Unable to parse eTags from: " + partETagBody);
+ throw new WrappedResponse(je, error(Response.Status.INTERNAL_SERVER_ERROR, "Could not complete multipart upload"));
+ }
+ try {
+ S3AccessIO.completeMultipartUpload(idSupplied, storageidentifier, uploadId, eTagList);
+ } catch (IOException io) {
+ logger.warning("Multipart upload completion failed for uploadId: " + uploadId + " storageidentifier=" + storageidentifier + " globalId: " + idSupplied);
+ logger.warning(io.getMessage());
+ try {
+ S3AccessIO.abortMultipartUpload(idSupplied, storageidentifier, uploadId);
+ } catch (IOException e) {
+ logger.severe("Also unable to abort the upload (and release the space on S3 for uploadId: " + uploadId + " storageidentifier=" + storageidentifier + " globalId: " + idSupplied);
+ logger.severe(io.getMessage());
+ }
+
+ throw new WrappedResponse(io, error(Response.Status.INTERNAL_SERVER_ERROR, "Could not complete multipart upload"));
+ }
+ return ok("Multipart Upload completed");
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
/**
* Add a File to an existing Dataset
- *
+ *
* @param idSupplied
* @param jsonData
* @param fileInputStream
* @param contentDispositionHeader
* @param formDataBodyPart
- * @return
+ * @return
*/
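+    // Hypothetical example call (placeholder values), for illustration only:
+    //   curl -H "X-Dataverse-key: $API_TOKEN" -X POST -F "file=@data.tsv" \
+    //        -F 'jsonData={"description":"A data file"}' "$SERVER_URL/api/datasets/$ID/add"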
@POST
@Path("{id}/add")
@@ -2318,7 +2349,7 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
} catch (WrappedResponse ex) {
return error(Response.Status.FORBIDDEN,
BundleUtil.getStringFromBundle("file.addreplace.error.auth")
- );
+ );
}
@@ -2331,7 +2362,7 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
try {
dataset = findDatasetOrDie(idSupplied);
} catch (WrappedResponse wr) {
- return wr.getResponse();
+ return wr.getResponse();
}
//------------------------------------
@@ -2350,12 +2381,12 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
// (2a) Load up optional params via JSON
//---------------------------------------
OptionalFileParams optionalFileParams = null;
- msgt("(api) jsonData: " + jsonData);
+ msgt("(api) jsonData: " + jsonData);
try {
optionalFileParams = new OptionalFileParams(jsonData);
} catch (DataFileTagException ex) {
- return error( Response.Status.BAD_REQUEST, ex.getMessage());
+ return error(Response.Status.BAD_REQUEST, ex.getMessage());
}
catch (ClassCastException | com.google.gson.JsonParseException ex) {
return error(Response.Status.BAD_REQUEST, BundleUtil.getStringFromBundle("file.addreplace.error.parsing"));
@@ -2367,42 +2398,47 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
String newFilename = null;
String newFileContentType = null;
String newStorageIdentifier = null;
- if (null == contentDispositionHeader) {
- if (optionalFileParams.hasStorageIdentifier()) {
- newStorageIdentifier = optionalFileParams.getStorageIdentifier();
- // ToDo - check that storageIdentifier is valid
- if (optionalFileParams.hasFileName()) {
- newFilename = optionalFileParams.getFileName();
- if (optionalFileParams.hasMimetype()) {
- newFileContentType = optionalFileParams.getMimeType();
- }
- }
- } else {
- return error(BAD_REQUEST,
- "You must upload a file or provide a storageidentifier, filename, and mimetype.");
- }
- } else {
- newFilename = contentDispositionHeader.getFileName();
- // Let's see if the form data part has the mime (content) type specified.
- // Note that we don't want to rely on formDataBodyPart.getMediaType() -
- // because that defaults to "text/plain" when no "Content-Type:" header is
- // present. Instead we'll go through the headers, and see if "Content-Type:"
- // is there. If not, we'll default to "application/octet-stream" - the generic
- // unknown type. This will prompt the application to run type detection and
- // potentially find something more accurate.
- //newFileContentType = formDataBodyPart.getMediaType().toString();
-
- for (String header : formDataBodyPart.getHeaders().keySet()) {
- if (header.equalsIgnoreCase("Content-Type")) {
- newFileContentType = formDataBodyPart.getHeaders().get(header).get(0);
- }
- }
- if (newFileContentType == null) {
- newFileContentType = FileUtil.MIME_TYPE_UNDETERMINED_DEFAULT;
- }
- }
+ if (null == contentDispositionHeader) {
+ if (optionalFileParams.hasStorageIdentifier()) {
+ newStorageIdentifier = optionalFileParams.getStorageIdentifier();
+ newStorageIdentifier = DataAccess.expandStorageIdentifierIfNeeded(newStorageIdentifier);
+
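+                    // Reject storage identifiers that the dataset's store configuration does not allow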
+ if(!DataAccess.uploadToDatasetAllowed(dataset, newStorageIdentifier)) {
+ return error(BAD_REQUEST,
+ "Dataset store configuration does not allow provided storageIdentifier.");
+ }
+ if (optionalFileParams.hasFileName()) {
+ newFilename = optionalFileParams.getFileName();
+ if (optionalFileParams.hasMimetype()) {
+ newFileContentType = optionalFileParams.getMimeType();
+ }
+ }
+ } else {
+ return error(BAD_REQUEST,
+ "You must upload a file or provide a valid storageidentifier, filename, and mimetype.");
+ }
+ } else {
+ newFilename = contentDispositionHeader.getFileName();
+ // Let's see if the form data part has the mime (content) type specified.
+ // Note that we don't want to rely on formDataBodyPart.getMediaType() -
+ // because that defaults to "text/plain" when no "Content-Type:" header is
+ // present. Instead we'll go through the headers, and see if "Content-Type:"
+ // is there. If not, we'll default to "application/octet-stream" - the generic
+ // unknown type. This will prompt the application to run type detection and
+ // potentially find something more accurate.
+ // newFileContentType = formDataBodyPart.getMediaType().toString();
+
+ for (String header : formDataBodyPart.getHeaders().keySet()) {
+ if (header.equalsIgnoreCase("Content-Type")) {
+ newFileContentType = formDataBodyPart.getHeaders().get(header).get(0);
+ }
+ }
+ if (newFileContentType == null) {
+ newFileContentType = FileUtil.MIME_TYPE_UNDETERMINED_DEFAULT;
+ }
+ }
+
-
//-------------------
// (3) Create the AddReplaceFileHelper object
//-------------------
@@ -2410,11 +2446,11 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
DataverseRequest dvRequest2 = createDataverseRequest(authUser);
AddReplaceFileHelper addFileHelper = new AddReplaceFileHelper(dvRequest2,
- ingestService,
- datasetService,
- fileService,
- permissionSvc,
- commandEngine,
+ ingestService,
+ datasetService,
+ fileService,
+ permissionSvc,
+ commandEngine,
systemConfig,
licenseSvc);
@@ -2423,16 +2459,20 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
// (4) Run "runAddFileByDatasetId"
//-------------------
addFileHelper.runAddFileByDataset(dataset,
- newFilename,
- newFileContentType,
- newStorageIdentifier,
- fileInputStream,
- optionalFileParams);
+ newFilename,
+ newFileContentType,
+ newStorageIdentifier,
+ fileInputStream,
+ optionalFileParams);
if (addFileHelper.hasError()){
+ //conflict response status added for 8859
+ if (Response.Status.CONFLICT.equals(addFileHelper.getHttpErrorCode())){
+ return conflict(addFileHelper.getErrorMessagesAsString("\n"));
+ }
return error(addFileHelper.getHttpErrorCode(), addFileHelper.getErrorMessagesAsString("\n"));
- }else{
+ } else {
String successMsg = BundleUtil.getStringFromBundle("file.addreplace.success.add");
try {
//msgt("as String: " + addFileHelper.getSuccessResult());
@@ -2458,73 +2498,79 @@ public Response addFileToDataset(@PathParam("id") String idSupplied,
}
}
-
+
} // end: addFileToDataset
-
- private void msg(String m){
+ private void msg(String m) {
//System.out.println(m);
logger.fine(m);
}
- private void dashes(){
+
+ private void dashes() {
msg("----------------");
}
- private void msgt(String m){
- dashes(); msg(m); dashes();
+
+ private void msgt(String m) {
+ dashes();
+ msg(m);
+ dashes();
}
-
-
- public static <T> T handleVersion( String versionId, DsVersionHandler<T> hdl )
- throws WrappedResponse {
+
+
+ public static <T> T handleVersion(String versionId, DsVersionHandler<T> hdl)
+ throws WrappedResponse {
switch (versionId) {
- case ":latest": return hdl.handleLatest();
- case ":draft": return hdl.handleDraft();
- case ":latest-published": return hdl.handleLatestPublished();
+ case ":latest":
+ return hdl.handleLatest();
+ case ":draft":
+ return hdl.handleDraft();
+ case ":latest-published":
+ return hdl.handleLatestPublished();
default:
try {
String[] versions = versionId.split("\\.");
switch (versions.length) {
case 1:
- return hdl.handleSpecific(Long.parseLong(versions[0]), (long)0.0);
+ return hdl.handleSpecific(Long.parseLong(versions[0]), (long) 0.0);
case 2:
- return hdl.handleSpecific( Long.parseLong(versions[0]), Long.parseLong(versions[1]) );
+ return hdl.handleSpecific(Long.parseLong(versions[0]), Long.parseLong(versions[1]));
default:
- throw new WrappedResponse(error( Response.Status.BAD_REQUEST, "Illegal version identifier '" + versionId + "'"));
+ throw new WrappedResponse(error(Response.Status.BAD_REQUEST, "Illegal version identifier '" + versionId + "'"));
}
- } catch ( NumberFormatException nfe ) {
- throw new WrappedResponse( error( Response.Status.BAD_REQUEST, "Illegal version identifier '" + versionId + "'") );
+ } catch (NumberFormatException nfe) {
+ throw new WrappedResponse(error(Response.Status.BAD_REQUEST, "Illegal version identifier '" + versionId + "'"));
}
}
}
-
- private DatasetVersion getDatasetVersionOrDie( final DataverseRequest req, String versionNumber, final Dataset ds, UriInfo uriInfo, HttpHeaders headers) throws WrappedResponse {
- DatasetVersion dsv = execCommand( handleVersion(versionNumber, new DsVersionHandler<Command<DatasetVersion>>(){
- @Override
- public Command<DatasetVersion> handleLatest() {
- return new GetLatestAccessibleDatasetVersionCommand(req, ds);
- }
+ private DatasetVersion getDatasetVersionOrDie(final DataverseRequest req, String versionNumber, final Dataset ds, UriInfo uriInfo, HttpHeaders headers) throws WrappedResponse {
+ DatasetVersion dsv = execCommand(handleVersion(versionNumber, new DsVersionHandler<Command<DatasetVersion>>() {
- @Override
- public Command<DatasetVersion> handleDraft() {
- return new GetDraftDatasetVersionCommand(req, ds);
- }
-
- @Override
- public Command<DatasetVersion> handleSpecific(long major, long minor) {
- return new GetSpecificPublishedDatasetVersionCommand(req, ds, major, minor);
- }
+ @Override
+ public Command<DatasetVersion> handleLatest() {
+ return new GetLatestAccessibleDatasetVersionCommand(req, ds);
+ }
- @Override
- public Command<DatasetVersion> handleLatestPublished() {
- return new GetLatestPublishedDatasetVersionCommand(req, ds);
- }
- }));
- if ( dsv == null || dsv.getId() == null ) {
- throw new WrappedResponse( notFound("Dataset version " + versionNumber + " of dataset " + ds.getId() + " not found") );
+ @Override
+ public Command<DatasetVersion> handleDraft() {
+ return new GetDraftDatasetVersionCommand(req, ds);
+ }
+
+ @Override
+ public Command<DatasetVersion> handleSpecific(long major, long minor) {
+ return new GetSpecificPublishedDatasetVersionCommand(req, ds, major, minor);
+ }
+
+ @Override
+ public Command<DatasetVersion> handleLatestPublished() {
+ return new GetLatestPublishedDatasetVersionCommand(req, ds);
+ }
+ }));
+ if (dsv == null || dsv.getId() == null) {
+ throw new WrappedResponse(notFound("Dataset version " + versionNumber + " of dataset " + ds.getId() + " not found"));
}
- if (dsv.isReleased()) {
+ if (dsv.isReleased() && uriInfo != null) {
MakeDataCountLoggingServiceBean.MakeDataCountEntry entry = new MakeDataCountEntry(uriInfo, headers, dvRequestService, ds);
mdcLogService.logEntry(entry);
}
@@ -2538,14 +2584,14 @@ public Response getLocksForDataset(@PathParam("identifier") String id, @QueryPar
Dataset dataset = null;
try {
dataset = findDatasetOrDie(id);
- Set<DatasetLock> locks;
+ Set<DatasetLock> locks;
if (lockType == null) {
locks = dataset.getLocks();
} else {
// request for a specific type lock:
DatasetLock lock = dataset.getLockFor(lockType);
- locks = new HashSet<>();
+ locks = new HashSet<>();
if (lock != null) {
locks.add(lock);
}
@@ -2555,9 +2601,9 @@ public Response getLocksForDataset(@PathParam("identifier") String id, @QueryPar
} catch (WrappedResponse wr) {
return wr.getResponse();
- }
- }
-
+ }
+ }
+
@DELETE
@Path("{identifier}/locks")
public Response deleteLocks(@PathParam("identifier") String id, @QueryParam("type") DatasetLock.Reason lockType) {
@@ -2630,7 +2676,7 @@ public Response lockDataset(@PathParam("identifier") String id, @PathParam("type
AuthenticatedUser user = findAuthenticatedUserOrDie();
if (!user.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "This API end point can be used by superusers only.");
- }
+ }
Dataset dataset = findDatasetOrDie(id);
DatasetLock lock = dataset.getLockFor(lockType);
if (lock != null) {
@@ -2723,7 +2769,7 @@ public Response getMakeDataCountCitations(@PathParam("id") String idSupplied) {
Dataset dataset = findDatasetOrDie(idSupplied);
JsonArrayBuilder datasetsCitations = Json.createArrayBuilder();
List<DatasetExternalCitations> externalCitations = datasetExternalCitationsService.getDatasetExternalCitationsByDataset(dataset);
- for (DatasetExternalCitations citation : externalCitations ){
+ for (DatasetExternalCitations citation : externalCitations) {
JsonObjectBuilder candidateObj = Json.createObjectBuilder();
/**
* In the future we can imagine storing and presenting more
@@ -2734,9 +2780,9 @@ public Response getMakeDataCountCitations(@PathParam("id") String idSupplied) {
*/
candidateObj.add("citationUrl", citation.getCitedByUrl());
datasetsCitations.add(candidateObj);
- }
- return ok(datasetsCitations);
-
+ }
+ return ok(datasetsCitations);
+
} catch (WrappedResponse wr) {
return wr.getResponse();
}
@@ -2752,20 +2798,20 @@ public Response getMakeDataCountMetricCurrentMonth(@PathParam("id") String idSup
@GET
@Path("{identifier}/storagesize")
- public Response getStorageSize(@PathParam("identifier") String dvIdtf, @QueryParam("includeCached") boolean includeCached,
- @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+ public Response getStorageSize(@PathParam("identifier") String dvIdtf, @QueryParam("includeCached") boolean includeCached,
+ @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
+
return response(req -> ok(MessageFormat.format(BundleUtil.getStringFromBundle("datasets.api.datasize.storage"),
- execCommand(new GetDatasetStorageSizeCommand(req, findDatasetOrDie(dvIdtf), includeCached,GetDatasetStorageSizeCommand.Mode.STORAGE, null)))));
+ execCommand(new GetDatasetStorageSizeCommand(req, findDatasetOrDie(dvIdtf), includeCached, GetDatasetStorageSizeCommand.Mode.STORAGE, null)))));
}
@GET
@Path("{identifier}/versions/{versionId}/downloadsize")
- public Response getDownloadSize(@PathParam("identifier") String dvIdtf, @PathParam("versionId") String version,
- @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+ public Response getDownloadSize(@PathParam("identifier") String dvIdtf, @PathParam("versionId") String version,
+ @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
+
return response(req -> ok(MessageFormat.format(BundleUtil.getStringFromBundle("datasets.api.datasize.download"),
- execCommand(new GetDatasetStorageSizeCommand(req, findDatasetOrDie(dvIdtf), false, GetDatasetStorageSizeCommand.Mode.DOWNLOAD, getDatasetVersionOrDie(req, version , findDatasetOrDie(dvIdtf), uriInfo, headers))))));
+ execCommand(new GetDatasetStorageSizeCommand(req, findDatasetOrDie(dvIdtf), false, GetDatasetStorageSizeCommand.Mode.DOWNLOAD, getDatasetVersionOrDie(req, version, findDatasetOrDie(dvIdtf), uriInfo, headers))))));
}
@GET
@@ -2889,7 +2935,7 @@ public Response getFileStore(@PathParam("identifier") String dvIdtf,
} catch (WrappedResponse ex) {
return error(Response.Status.NOT_FOUND, "No such dataset");
}
-
+
return response(req -> ok(dataset.getEffectiveStorageDriverId()));
}
@@ -2908,10 +2954,10 @@ public Response setFileStore(@PathParam("identifier") String dvIdtf,
}
if (!user.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
- }
-
- Dataset dataset;
-
+ }
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
@@ -2926,15 +2972,15 @@ public Response setFileStore(@PathParam("identifier") String dvIdtf,
return ok("Storage driver set to: " + store.getKey() + "/" + store.getValue());
}
}
- return error(Response.Status.BAD_REQUEST,
- "No Storage Driver found for : " + storageDriverLabel);
+ return error(Response.Status.BAD_REQUEST,
+ "No Storage Driver found for : " + storageDriverLabel);
}
@DELETE
@Path("{identifier}/storageDriver")
public Response resetFileStore(@PathParam("identifier") String dvIdtf,
@Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+
// Superuser-only:
AuthenticatedUser user;
try {
@@ -2944,10 +2990,10 @@ public Response resetFileStore(@PathParam("identifier") String dvIdtf,
}
if (!user.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
- }
-
- Dataset dataset;
-
+ }
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
@@ -2956,14 +3002,14 @@ public Response resetFileStore(@PathParam("identifier") String dvIdtf,
dataset.setStorageDriverId(null);
datasetService.merge(dataset);
- return ok("Storage reset to default: " + DataAccess.DEFAULT_STORAGE_DRIVER_IDENTIFIER);
+ return ok("Storage reset to default: " + DataAccess.DEFAULT_STORAGE_DRIVER_IDENTIFIER);
}
@GET
@Path("{identifier}/curationLabelSet")
public Response getCurationLabelSet(@PathParam("identifier") String dvIdtf,
- @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+ @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
+
try {
AuthenticatedUser user = findAuthenticatedUserOrDie();
if (!user.isSuperuser()) {
@@ -2972,24 +3018,24 @@ public Response getCurationLabelSet(@PathParam("identifier") String dvIdtf,
} catch (WrappedResponse wr) {
return wr.getResponse();
}
-
- Dataset dataset;
-
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
return ex.getResponse();
}
-
+
return response(req -> ok(dataset.getEffectiveCurationLabelSetName()));
}
-
+
@PUT
@Path("{identifier}/curationLabelSet")
public Response setCurationLabelSet(@PathParam("identifier") String dvIdtf,
@QueryParam("name") String curationLabelSet,
@Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+
// Superuser-only:
AuthenticatedUser user;
try {
@@ -3000,9 +3046,9 @@ public Response setCurationLabelSet(@PathParam("identifier") String dvIdtf,
if (!user.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
}
-
- Dataset dataset;
-
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
@@ -3024,12 +3070,12 @@ public Response setCurationLabelSet(@PathParam("identifier") String dvIdtf,
return error(Response.Status.BAD_REQUEST,
"No Such Curation Label Set");
}
-
+
@DELETE
@Path("{identifier}/curationLabelSet")
public Response resetCurationLabelSet(@PathParam("identifier") String dvIdtf,
@Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
-
+
// Superuser-only:
AuthenticatedUser user;
try {
@@ -3040,15 +3086,15 @@ public Response resetCurationLabelSet(@PathParam("identifier") String dvIdtf,
if (!user.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
}
-
- Dataset dataset;
-
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
return ex.getResponse();
}
-
+
dataset.setCurationLabelSetName(SystemConfig.DEFAULTCURATIONLABELSET);
datasetService.merge(dataset);
return ok("Curation Label Set reset to default: " + SystemConfig.DEFAULTCURATIONLABELSET);
@@ -3057,16 +3103,16 @@ public Response resetCurationLabelSet(@PathParam("identifier") String dvIdtf,
@GET
@Path("{identifier}/allowedCurationLabels")
public Response getAllowedCurationLabels(@PathParam("identifier") String dvIdtf,
- @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
+ @Context UriInfo uriInfo, @Context HttpHeaders headers) throws WrappedResponse {
AuthenticatedUser user = null;
try {
user = findAuthenticatedUserOrDie();
} catch (WrappedResponse wr) {
return wr.getResponse();
}
-
- Dataset dataset;
-
+
+ Dataset dataset;
+
try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
@@ -3079,7 +3125,7 @@ public Response getAllowedCurationLabels(@PathParam("identifier") String dvIdtf,
return error(Response.Status.FORBIDDEN, "You are not permitted to view the allowed curation labels for this dataset.");
}
}
-
+
@GET
@Path("{identifier}/timestamps")
@Produces(MediaType.APPLICATION_JSON)
@@ -3109,6 +3155,7 @@ public Response getTimestamps(@PathParam("identifier") String id) {
if (dataset.getLastExportTime() != null) {
timestamps.add("lastMetadataExportTime",
formatter.format(dataset.getLastExportTime().toInstant().atZone(ZoneId.systemDefault())));
+
}
if (dataset.getMostRecentMajorVersionReleaseDate() != null) {
@@ -3120,11 +3167,11 @@ public Response getTimestamps(@PathParam("identifier") String id) {
timestamps.add("hasStaleIndex",
(dataset.getModificationTime() != null && (dataset.getIndexTime() == null
|| (dataset.getIndexTime().compareTo(dataset.getModificationTime()) <= 0))) ? true
- : false);
+ : false);
timestamps.add("hasStalePermissionIndex",
(dataset.getPermissionModificationTime() != null && (dataset.getIndexTime() == null
|| (dataset.getIndexTime().compareTo(dataset.getModificationTime()) <= 0))) ? true
- : false);
+ : false);
}
// More detail if you can see a draft
if (canSeeDraft) {
@@ -3153,6 +3200,129 @@ public Response getTimestamps(@PathParam("identifier") String id) {
}
+ @POST
+ @Path("{id}/addglobusFiles")
+ @Consumes(MediaType.MULTIPART_FORM_DATA)
+ public Response addGlobusFilesToDataset(@PathParam("id") String datasetId,
+ @FormDataParam("jsonData") String jsonData,
+ @Context UriInfo uriInfo,
+ @Context HttpHeaders headers
+ ) throws IOException, ExecutionException, InterruptedException {
+
+ logger.info(" ==== (api addGlobusFilesToDataset) jsonData ====== " + jsonData);
+
+ if (!systemConfig.isHTTPUpload()) {
+ return error(Response.Status.SERVICE_UNAVAILABLE, BundleUtil.getStringFromBundle("file.api.httpDisabled"));
+ }
+
+ // -------------------------------------
+ // (1) Get the user from the API key
+ // -------------------------------------
+ AuthenticatedUser authUser;
+ try {
+ authUser = findAuthenticatedUserOrDie();
+ } catch (WrappedResponse ex) {
+ return error(Response.Status.FORBIDDEN, BundleUtil.getStringFromBundle("file.addreplace.error.auth")
+ );
+ }
+
+ // -------------------------------------
+ // (2) Get the Dataset Id
+ // -------------------------------------
+ Dataset dataset;
+
+ try {
+ dataset = findDatasetOrDie(datasetId);
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+
+ //------------------------------------
+ // (2b) Make sure dataset does not have package file
+ // --------------------------------------
+
+ for (DatasetVersion dv : dataset.getVersions()) {
+ if (dv.isHasPackageFile()) {
+ return error(Response.Status.FORBIDDEN, BundleUtil.getStringFromBundle("file.api.alreadyHasPackageFile")
+ );
+ }
+ }
+
+
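+        // Lock the dataset while the asynchronous Globus upload is in progress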
+ String lockInfoMessage = "Globus Upload API started ";
+ DatasetLock lock = datasetService.addDatasetLock(dataset.getId(), DatasetLock.Reason.GlobusUpload,
+ (authUser).getId(), lockInfoMessage);
+ if (lock != null) {
+ dataset.addLock(lock);
+ } else {
+ logger.log(Level.WARNING, "Failed to lock the dataset (dataset id={0})", dataset.getId());
+ }
+
+
+ ApiToken token = authSvc.findApiTokenByUser(authUser);
+
+ if(uriInfo != null) {
+ logger.info(" ==== (api uriInfo.getRequestUri()) jsonData ====== " + uriInfo.getRequestUri().toString());
+ }
+
+
+ String requestUrl = headers.getRequestHeader("origin").get(0);
+
+ if(requestUrl.contains("localhost")){
+ requestUrl = "http://localhost:8080";
+ }
+
+ // Async Call
+ globusService.globusUpload(jsonData, token, dataset, requestUrl, authUser);
+
+ return ok("Async call to Globus Upload started ");
+
+ }
+
+ @POST
+ @Path("{id}/deleteglobusRule")
+ @Consumes(MediaType.MULTIPART_FORM_DATA)
+ public Response deleteglobusRule(@PathParam("id") String datasetId,@FormDataParam("jsonData") String jsonData
+ ) throws IOException, ExecutionException, InterruptedException {
+
+
+ logger.info(" ==== (api deleteglobusRule) jsonData ====== " + jsonData);
+
+
+ if (!systemConfig.isHTTPUpload()) {
+ return error(Response.Status.SERVICE_UNAVAILABLE, BundleUtil.getStringFromBundle("file.api.httpDisabled"));
+ }
+
+ // -------------------------------------
+ // (1) Get the user from the API key
+ // -------------------------------------
+ User authUser;
+ try {
+ authUser = findUserOrDie();
+ } catch (WrappedResponse ex) {
+ return error(Response.Status.FORBIDDEN, BundleUtil.getStringFromBundle("file.addreplace.error.auth")
+ );
+ }
+
+ // -------------------------------------
+ // (2) Get the Dataset Id
+ // -------------------------------------
+ Dataset dataset;
+
+ try {
+ dataset = findDatasetOrDie(datasetId);
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+
+ // Async Call
+ globusService.globusDownload(jsonData, dataset, authUser);
+
+ return ok("Async call to Globus Download started");
+
+ }
+
+
/**
* Add multiple Files to an existing Dataset
*
@@ -3192,6 +3362,9 @@ public Response addFilesToDataset(@PathParam("id") String idSupplied,
return wr.getResponse();
}
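+        // Log any existing locks on the dataset before attempting to add the files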
+ dataset.getLocks().forEach(dl -> {
+ logger.info(dl.toString());
+ });
//------------------------------------
// (2a) Make sure dataset does not have package file
@@ -3221,10 +3394,10 @@ public Response addFilesToDataset(@PathParam("id") String idSupplied,
return addFileHelper.addFiles(jsonData, dataset, authUser);
}
-
- /**
+
+ /**
* API to find curation assignments and statuses
- *
+ *
* @return
* @throws WrappedResponse
*/
@@ -3282,4 +3455,130 @@ public Response getCurationStates() throws WrappedResponse {
csvSB.append("\n");
return ok(csvSB.toString(), MediaType.valueOf(FileUtil.MIME_TYPE_CSV), "datasets.status.csv");
}
+
+ // APIs to manage archival status
+
+ @GET
+ @Produces(MediaType.APPLICATION_JSON)
+ @Path("/{id}/{version}/archivalStatus")
+ public Response getDatasetVersionArchivalStatus(@PathParam("id") String datasetId,
+ @PathParam("version") String versionNumber, @Context UriInfo uriInfo, @Context HttpHeaders headers) {
+
+ try {
+ AuthenticatedUser au = findAuthenticatedUserOrDie();
+ if (!au.isSuperuser()) {
+ return error(Response.Status.FORBIDDEN, "Superusers only.");
+ }
+ DataverseRequest req = createDataverseRequest(au);
+ DatasetVersion dsv = getDatasetVersionOrDie(req, versionNumber, findDatasetOrDie(datasetId), uriInfo,
+ headers);
+
+ if (dsv.getArchivalCopyLocation() == null) {
+ return error(Status.NO_CONTENT, "This dataset version has not been archived");
+ } else {
+ JsonObject status = JsonUtil.getJsonObject(dsv.getArchivalCopyLocation());
+ return ok(status);
+ }
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
+ @PUT
+ @Consumes(MediaType.APPLICATION_JSON)
+ @Path("/{id}/{version}/archivalStatus")
+ public Response setDatasetVersionArchivalStatus(@PathParam("id") String datasetId,
+ @PathParam("version") String versionNumber, String newStatus, @Context UriInfo uriInfo,
+ @Context HttpHeaders headers) {
+
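+        // The body must be a JSON object giving the archival status (pending, failure, or success) and a status message.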
+ logger.fine(newStatus);
+ try {
+ AuthenticatedUser au = findAuthenticatedUserOrDie();
+
+ if (!au.isSuperuser()) {
+ return error(Response.Status.FORBIDDEN, "Superusers only.");
+ }
+
+ //Verify we have valid json after removing any HTML tags (the status gets displayed in the UI, so we want plain text).
+ JsonObject update = JsonUtil.getJsonObject(MarkupChecker.stripAllTags(newStatus));
+
+ if (update.containsKey(DatasetVersion.ARCHIVAL_STATUS) && update.containsKey(DatasetVersion.ARCHIVAL_STATUS_MESSAGE)) {
+ String status = update.getString(DatasetVersion.ARCHIVAL_STATUS);
+ if (status.equals(DatasetVersion.ARCHIVAL_STATUS_PENDING) || status.equals(DatasetVersion.ARCHIVAL_STATUS_FAILURE)
+ || status.equals(DatasetVersion.ARCHIVAL_STATUS_SUCCESS)) {
+
+ DataverseRequest req = createDataverseRequest(au);
+ DatasetVersion dsv = getDatasetVersionOrDie(req, versionNumber, findDatasetOrDie(datasetId),
+ uriInfo, headers);
+
+ if (dsv == null) {
+ return error(Status.NOT_FOUND, "Dataset version not found");
+ }
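+                    // Some archivers support only one archived version per dataset; refuse to record a second one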
+ if (isSingleVersionArchiving()) {
+ for (DatasetVersion version : dsv.getDataset().getVersions()) {
+ if ((!dsv.equals(version)) && (version.getArchivalCopyLocation() != null)) {
+ return error(Status.CONFLICT, "Dataset already archived.");
+ }
+ }
+ }
+
+ dsv.setArchivalCopyLocation(JsonUtil.prettyPrint(update));
+ dsv = datasetversionService.merge(dsv);
+ logger.fine("status now: " + dsv.getArchivalCopyLocationStatus());
+ logger.fine("message now: " + dsv.getArchivalCopyLocationMessage());
+
+ return ok("Status updated");
+ }
+ }
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ } catch (JsonException | IllegalStateException ex) {
+ return error(Status.BAD_REQUEST, "Unable to parse provided JSON");
+ }
+ return error(Status.BAD_REQUEST, "Unacceptable status format");
+ }
+
+ @DELETE
+ @Produces(MediaType.APPLICATION_JSON)
+ @Path("/{id}/{version}/archivalStatus")
+ public Response deleteDatasetVersionArchivalStatus(@PathParam("id") String datasetId,
+ @PathParam("version") String versionNumber, @Context UriInfo uriInfo, @Context HttpHeaders headers) {
+
+ try {
+ AuthenticatedUser au = findAuthenticatedUserOrDie();
+ if (!au.isSuperuser()) {
+ return error(Response.Status.FORBIDDEN, "Superusers only.");
+ }
+
+ DataverseRequest req = createDataverseRequest(au);
+ DatasetVersion dsv = getDatasetVersionOrDie(req, versionNumber, findDatasetOrDie(datasetId), uriInfo,
+ headers);
+ if (dsv == null) {
+ return error(Status.NOT_FOUND, "Dataset version not found");
+ }
+ dsv.setArchivalCopyLocation(null);
+ dsv = datasetversionService.merge(dsv);
+
+ return ok("Status deleted");
+
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
+ private boolean isSingleVersionArchiving() {
+ String className = settingsService.getValueForKey(SettingsServiceBean.Key.ArchiverClassName, null);
+ if (className != null) {
+ Class<? extends AbstractSubmitToArchiveCommand> clazz;
+ try {
+ clazz = Class.forName(className).asSubclass(AbstractSubmitToArchiveCommand.class);
+ return ArchiverUtil.onlySingleVersionArchiving(clazz, settingsService);
+ } catch (ClassNotFoundException e) {
+ logger.warning(":ArchiverClassName does not refer to a known Archiver");
+ } catch (ClassCastException cce) {
+ logger.warning(":ArchiverClassName does not refer to an Archiver class");
+ }
+ }
+ return false;
+ }
}
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java b/src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java
index d15b0f1c48f..90130cb3944 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java
@@ -7,17 +7,17 @@
import edu.harvard.iq.dataverse.Dataverse;
import edu.harvard.iq.dataverse.DataverseFacet;
import edu.harvard.iq.dataverse.DataverseContact;
+import edu.harvard.iq.dataverse.DataverseMetadataBlockFacet;
import edu.harvard.iq.dataverse.DataverseServiceBean;
import edu.harvard.iq.dataverse.api.datadeposit.SwordServiceBean;
+import edu.harvard.iq.dataverse.api.dto.DataverseMetadataBlockFacetDTO;
import edu.harvard.iq.dataverse.authorization.DataverseRole;
import edu.harvard.iq.dataverse.DvObject;
-import edu.harvard.iq.dataverse.DvObjectContainer;
import edu.harvard.iq.dataverse.GlobalId;
import edu.harvard.iq.dataverse.GuestbookResponseServiceBean;
import edu.harvard.iq.dataverse.GuestbookServiceBean;
import edu.harvard.iq.dataverse.MetadataBlock;
import edu.harvard.iq.dataverse.RoleAssignment;
-import static edu.harvard.iq.dataverse.api.AbstractApiBean.error;
import edu.harvard.iq.dataverse.api.dto.ExplicitGroupDTO;
import edu.harvard.iq.dataverse.api.dto.RoleAssignmentDTO;
import edu.harvard.iq.dataverse.api.dto.RoleDTO;
@@ -41,6 +41,7 @@
import edu.harvard.iq.dataverse.engine.command.impl.DeleteDataverseCommand;
import edu.harvard.iq.dataverse.engine.command.impl.DeleteDataverseLinkingDataverseCommand;
import edu.harvard.iq.dataverse.engine.command.impl.DeleteExplicitGroupCommand;
+import edu.harvard.iq.dataverse.engine.command.impl.UpdateMetadataBlockFacetRootCommand;
import edu.harvard.iq.dataverse.engine.command.impl.GetDataverseCommand;
import edu.harvard.iq.dataverse.engine.command.impl.GetDataverseStorageSizeCommand;
import edu.harvard.iq.dataverse.engine.command.impl.GetExplicitGroupCommand;
@@ -49,6 +50,7 @@
import edu.harvard.iq.dataverse.engine.command.impl.ListDataverseContentCommand;
import edu.harvard.iq.dataverse.engine.command.impl.ListExplicitGroupsCommand;
import edu.harvard.iq.dataverse.engine.command.impl.ListFacetsCommand;
+import edu.harvard.iq.dataverse.engine.command.impl.ListMetadataBlockFacetsCommand;
import edu.harvard.iq.dataverse.engine.command.impl.ListMetadataBlocksCommand;
import edu.harvard.iq.dataverse.engine.command.impl.ListRoleAssignments;
import edu.harvard.iq.dataverse.engine.command.impl.ListRolesCommand;
@@ -62,6 +64,7 @@
import edu.harvard.iq.dataverse.engine.command.impl.UpdateDataverseDefaultContributorRoleCommand;
import edu.harvard.iq.dataverse.engine.command.impl.UpdateDataverseMetadataBlocksCommand;
import edu.harvard.iq.dataverse.engine.command.impl.UpdateExplicitGroupCommand;
+import edu.harvard.iq.dataverse.engine.command.impl.UpdateMetadataBlockFacetsCommand;
import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.ConstraintViolationUtil;
@@ -69,7 +72,6 @@
import static edu.harvard.iq.dataverse.util.StringUtil.nonEmpty;
import edu.harvard.iq.dataverse.util.json.JSONLDUtil;
-import edu.harvard.iq.dataverse.util.json.JsonLDTerm;
import edu.harvard.iq.dataverse.util.json.JsonParseException;
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.brief;
import java.io.StringReader;
@@ -91,7 +93,6 @@
import javax.json.JsonValue;
import javax.json.JsonValue.ValueType;
import javax.json.stream.JsonParsingException;
-import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.ws.rs.BadRequestException;
import javax.ws.rs.Consumes;
@@ -114,9 +115,9 @@
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
-import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
+import java.util.stream.Collectors;
import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
@@ -713,6 +714,78 @@ public Response setFacets(@PathParam("identifier") String dvIdtf, String facetId
}
}
+ @GET
+ @Path("{identifier}/metadatablockfacets")
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response listMetadataBlockFacets(@PathParam("identifier") String dvIdtf) {
+ try {
+ User u = findUserOrDie();
+ DataverseRequest request = createDataverseRequest(u);
+ Dataverse dataverse = findDataverseOrDie(dvIdtf);
+ List<DataverseMetadataBlockFacet> metadataBlockFacets = Optional.ofNullable(execCommand(new ListMetadataBlockFacetsCommand(request, dataverse))).orElse(Collections.emptyList());
+ List<DataverseMetadataBlockFacetDTO.MetadataBlockDTO> metadataBlocksDTOs = metadataBlockFacets.stream()
+ .map(item -> new DataverseMetadataBlockFacetDTO.MetadataBlockDTO(item.getMetadataBlock().getName(), item.getMetadataBlock().getLocaleDisplayFacet()))
+ .collect(Collectors.toList());
+ DataverseMetadataBlockFacetDTO response = new DataverseMetadataBlockFacetDTO(dataverse.getId(), dataverse.getAlias(), dataverse.isMetadataBlockFacetRoot(), metadataBlocksDTOs);
+ return Response.ok(response).build();
+ } catch (WrappedResponse e) {
+ return e.getResponse();
+ }
+ }
+
+ @POST
+ @Path("{identifier}/metadatablockfacets")
+ @Consumes(MediaType.APPLICATION_JSON)
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response setMetadataBlockFacets(@PathParam("identifier") String dvIdtf, List<String> metadataBlockNames) {
+ try {
+ Dataverse dataverse = findDataverseOrDie(dvIdtf);
+
+ if(!dataverse.isMetadataBlockFacetRoot()) {
+ return badRequest(String.format("Dataverse: %s must have metadata block facet root set to true", dvIdtf));
+ }
+
+ List<DataverseMetadataBlockFacet> metadataBlockFacets = new LinkedList<>();
+ for(String metadataBlockName: metadataBlockNames) {
+ MetadataBlock metadataBlock = findMetadataBlock(metadataBlockName);
+ if (metadataBlock == null) {
+ return badRequest(String.format("Invalid metadata block name: %s", metadataBlockName));
+ }
+
+ DataverseMetadataBlockFacet metadataBlockFacet = new DataverseMetadataBlockFacet();
+ metadataBlockFacet.setDataverse(dataverse);
+ metadataBlockFacet.setMetadataBlock(metadataBlock);
+ metadataBlockFacets.add(metadataBlockFacet);
+ }
+
+ execCommand(new UpdateMetadataBlockFacetsCommand(createDataverseRequest(findUserOrDie()), dataverse, metadataBlockFacets));
+ return ok(String.format("Metadata block facets updated. DataverseId: %s blocks: %s", dvIdtf, metadataBlockNames));
+
+ } catch (WrappedResponse ex) {
+ return ex.getResponse();
+ }
+ }
+
+ @POST
+ @Path("{identifier}/metadatablockfacets/isRoot")
+ @Consumes(MediaType.APPLICATION_JSON)
+ @Produces(MediaType.APPLICATION_JSON)
+ public Response updateMetadataBlockFacetsRoot(@PathParam("identifier") String dvIdtf, String body) {
+ try {
+ final boolean blockFacetsRoot = parseBooleanOrDie(body);
+ Dataverse dataverse = findDataverseOrDie(dvIdtf);
+ if(dataverse.isMetadataBlockFacetRoot() == blockFacetsRoot) {
+ return ok(String.format("No update needed, dataverse already consistent with new value. DataverseId: %s blockFacetsRoot: %s", dvIdtf, blockFacetsRoot));
+ }
+
+ execCommand(new UpdateMetadataBlockFacetRootCommand(createDataverseRequest(findUserOrDie()), dataverse, blockFacetsRoot));
+ return ok(String.format("Metadata block facets root updated. DataverseId: %s blockFacetsRoot: %s", dvIdtf, blockFacetsRoot));
+
+ } catch (WrappedResponse ex) {
+ return ex.getResponse();
+ }
+ }
+
// FIXME: This listContent method is way too optimistic, always returning "ok" and never "error".
// TODO: Investigate why there was a change in the timeframe of when pull request #4350 was merged
// (2438-4295-dois-for-files branch) such that a contributor API token no longer allows this method
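
The three metadata block facet endpoints added above can be exercised over plain HTTP. A minimal sketch, assuming the resource stays mounted under /api/dataverses and using a placeholder API token and the "root" collection alias (neither is part of the diff): ::

    # List the metadata block facets configured on a collection
    curl -H "X-Dataverse-key: $API_TOKEN" "$SERVER_URL/api/dataverses/root/metadatablockfacets"

    # Make the collection a facet root, then choose which blocks to facet on
    curl -X POST -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type: application/json" \
         "$SERVER_URL/api/dataverses/root/metadatablockfacets/isRoot" -d 'true'

    curl -X POST -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type: application/json" \
         "$SERVER_URL/api/dataverses/root/metadatablockfacets" -d '["socialscience", "geospatial"]'

The block names in the last call are only examples; any metadata block known to the installation should be accepted, while unknown names trigger the "Invalid metadata block name" error shown above.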
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstance.java b/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstance.java
index 07215cb919e..c9eb3638b90 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstance.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstance.java
@@ -11,6 +11,8 @@
import edu.harvard.iq.dataverse.EjbDataverseEngine;
import edu.harvard.iq.dataverse.GuestbookResponse;
import java.util.List;
+import java.util.logging.Logger;
+
import edu.harvard.iq.dataverse.dataaccess.OptionalAccessService;
import javax.faces.context.FacesContext;
import javax.ws.rs.core.HttpHeaders;
@@ -22,6 +24,7 @@
*/
public class DownloadInstance {
+ private static final Logger logger = Logger.getLogger(DownloadInstance.class.getCanonicalName());
/*
private ByteArrayOutputStream outStream = null;
@@ -122,6 +125,7 @@ public Boolean checkIfServiceSupportedAndSetConverter(String serviceArg, String
for (OptionalAccessService dataService : servicesAvailable) {
if (dataService != null) {
+ logger.fine("Checking service: " + dataService.getServiceName());
if (serviceArg.equals("variables")) {
// Special case for the subsetting parameter (variables=):
if ("subset".equals(dataService.getServiceName())) {
@@ -149,6 +153,7 @@ public Boolean checkIfServiceSupportedAndSetConverter(String serviceArg, String
return true;
}
String argValuePair = serviceArg + "=" + serviceArgValue;
+ logger.fine("Comparing: " + argValuePair + " and " + dataService.getServiceArguments());
if (argValuePair.startsWith(dataService.getServiceArguments())) {
conversionParam = serviceArg;
conversionParamValue = serviceArgValue;
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstanceWriter.java b/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstanceWriter.java
index 84a31959286..01f627ea23b 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstanceWriter.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/DownloadInstanceWriter.java
@@ -27,9 +27,12 @@
import edu.harvard.iq.dataverse.engine.command.Command;
import edu.harvard.iq.dataverse.engine.command.exception.CommandException;
import edu.harvard.iq.dataverse.engine.command.impl.CreateGuestbookResponseCommand;
+import edu.harvard.iq.dataverse.globus.GlobusServiceBean;
import edu.harvard.iq.dataverse.makedatacount.MakeDataCountLoggingServiceBean;
import edu.harvard.iq.dataverse.makedatacount.MakeDataCountLoggingServiceBean.MakeDataCountEntry;
import edu.harvard.iq.dataverse.util.FileUtil;
+import edu.harvard.iq.dataverse.util.SystemConfig;
+
import java.io.File;
import java.io.FileInputStream;
import java.net.URI;
@@ -59,6 +62,10 @@ public class DownloadInstanceWriter implements MessageBodyWriter<DownloadInstance> {
throw new NotFoundException("Datafile " + dataFile.getId() + ": Failed to locate and/or open physical file.");
}
+
+ boolean redirectSupported = false;
+ String auxiliaryTag = null;
+ String auxiliaryType = null;
+ String auxiliaryFileName = null;
// Before we do anything else, check if this download can be handled
// by a redirect to remote storage (only supported on S3, as of 5.4):
- if (storageIO instanceof S3AccessIO && ((S3AccessIO) storageIO).downloadRedirectEnabled()) {
+ if (storageIO.downloadRedirectEnabled()) {
// Even if the above is true, there are a few cases where a
// redirect is not applicable.
@@ -101,10 +113,8 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
// for a saved original; but CANNOT if it is a column subsetting
// request (must be streamed in real time locally); or a format
// conversion that hasn't been cached and saved on S3 yet.
- boolean redirectSupported = true;
- String auxiliaryTag = null;
- String auxiliaryType = null;
- String auxiliaryFileName = null;
+ redirectSupported = true;
+
if ("imageThumb".equals(di.getConversionParam())) {
@@ -112,7 +122,7 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
int requestedSize = 0;
if (!"".equals(di.getConversionParamValue())) {
try {
- requestedSize = new Integer(di.getConversionParamValue());
+ requestedSize = Integer.parseInt(di.getConversionParamValue());
} catch (java.lang.NumberFormatException ex) {
// it's ok, the default size will be used.
}
@@ -120,7 +130,7 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
auxiliaryTag = ImageThumbConverter.THUMBNAIL_SUFFIX + (requestedSize > 0 ? requestedSize : ImageThumbConverter.DEFAULT_THUMBNAIL_SIZE);
- if (isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
+ if (storageIO.downloadRedirectEnabled(auxiliaryTag) && isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
auxiliaryType = ImageThumbConverter.THUMBNAIL_MIME_TYPE;
String fileName = storageIO.getFileName();
if (fileName != null) {
@@ -139,7 +149,7 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
auxiliaryTag = auxiliaryTag + "_" + auxVersion;
}
- if (isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
+ if (storageIO.downloadRedirectEnabled(auxiliaryTag) && isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
String fileExtension = getFileExtension(di.getAuxiliaryFile());
auxiliaryFileName = storageIO.getFileName() + "." + auxiliaryTag + fileExtension;
auxiliaryType = di.getAuxiliaryFile().getContentType();
@@ -162,7 +172,7 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
// it has been cached already.
auxiliaryTag = di.getConversionParamValue();
- if (isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
+ if (storageIO.downloadRedirectEnabled(auxiliaryTag) && isAuxiliaryObjectCached(storageIO, auxiliaryTag)) {
auxiliaryType = di.getServiceFormatType(di.getConversionParam(), auxiliaryTag);
auxiliaryFileName = FileUtil.replaceExtension(storageIO.getFileName(), auxiliaryTag);
} else {
@@ -177,40 +187,52 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
redirectSupported = false;
}
}
-
- if (redirectSupported) {
- // definitely close the (potentially still open) input stream,
- // since we are not going to use it. The S3 documentation in particular
- // emphasizes that it is very important not to leave these
- // lying around un-closed, since they are going to fill
- // up the S3 connection pool!
- storageIO.closeInputStream();
- // [attempt to] redirect:
- String redirect_url_str;
- try {
- redirect_url_str = ((S3AccessIO) storageIO).generateTemporaryS3Url(auxiliaryTag, auxiliaryType, auxiliaryFileName);
- } catch (IOException ioex) {
- redirect_url_str = null;
- }
-
- if (redirect_url_str == null) {
- throw new ServiceUnavailableException();
+ }
+ String redirect_url_str=null;
+
+ if (redirectSupported) {
+ // definitely close the (potentially still open) input stream,
+ // since we are not going to use it. The S3 documentation in particular
+ // emphasizes that it is very important not to leave these
+ // lying around un-closed, since they are going to fill
+ // up the S3 connection pool!
+ storageIO.closeInputStream();
+ // [attempt to] redirect:
+ try {
+ redirect_url_str = storageIO.generateTemporaryDownloadUrl(auxiliaryTag, auxiliaryType, auxiliaryFileName);
+ } catch (IOException ioex) {
+ logger.warning("Unable to generate downloadURL for " + dataFile.getId() + ": " + auxiliaryTag);
+ //Setting null will let us try to get the file/aux file w/o redirecting
+ redirect_url_str = null;
+ }
+ }
+
+ if (systemConfig.isGlobusFileDownload() && systemConfig.getGlobusStoresList()
+ .contains(DataAccess.getStorageDriverFromIdentifier(dataFile.getStorageIdentifier()))) {
+ if (di.getConversionParam() != null) {
+ if (di.getConversionParam().equals("format")) {
+
+ if ("GlobusTransfer".equals(di.getConversionParamValue())) {
+ redirect_url_str = globusService.getGlobusAppUrlForDataset(dataFile.getOwner(), false, dataFile);
+ }
}
+ }
+ if (redirect_url_str!=null) {
- logger.fine("Data Access API: direct S3 url: " + redirect_url_str);
+ logger.fine("Data Access API: redirect url: " + redirect_url_str);
URI redirect_uri;
try {
redirect_uri = new URI(redirect_url_str);
} catch (URISyntaxException ex) {
- logger.info("Data Access API: failed to create S3 redirect url (" + redirect_url_str + ")");
+ logger.info("Data Access API: failed to create redirect url (" + redirect_url_str + ")");
redirect_uri = null;
}
if (redirect_uri != null) {
// increment the download count, if necessary:
if (di.getGbr() != null && !(isThumbnailDownload(di) || isPreprocessedMetadataDownload(di))) {
try {
- logger.fine("writing guestbook response, for an S3 download redirect.");
+ logger.fine("writing guestbook response, for a download redirect.");
Command<?> cmd = new CreateGuestbookResponseCommand(di.getDataverseRequestService().getDataverseRequest(), di.getGbr(), di.getGbr().getDataFile().getOwner());
di.getCommand().submit(cmd);
MakeDataCountEntry entry = new MakeDataCountEntry(di.getRequestUriInfo(), di.getRequestHttpHeaders(), di.getDataverseRequestService(), di.getGbr().getDataFile());
@@ -221,7 +243,7 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
// finally, issue the redirect:
Response response = Response.seeOther(redirect_uri).build();
- logger.fine("Issuing redirect to the file location on S3.");
+ logger.fine("Issuing redirect to the file location.");
throw new RedirectionException(response);
}
throw new ServiceUnavailableException();
@@ -434,6 +456,9 @@ public void writeTo(DownloadInstance di, Class<?> clazz, Type type, Annotation[]
offset = ranges.get(0).getStart();
leftToRead = rangeContentSize;
+ httpHeaders.add("Accept-Ranges", "bytes");
+ httpHeaders.add("Content-Range", "bytes "+offset+"-"+(offset+rangeContentSize-1)+"/"+contentSize);
+
}
} else {
// Content size unknown, must be a dynamically
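
With the Accept-Ranges and Content-Range response headers added above, byte-range requests against the Data Access API can be verified from the command line. A rough sketch, assuming a public datafile with database id 42 (a placeholder, not taken from the diff): ::

    # Ask for the first 100 bytes; the response headers should include
    # "Content-Range: bytes 0-99/<total size>" and the body should be 100 bytes long
    curl -s -D - -H "Range: bytes=0-99" "$SERVER_URL/api/access/datafile/42" -o first100.bin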
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/EditDDI.java b/src/main/java/edu/harvard/iq/dataverse/api/EditDDI.java
index d58622f9874..82938fd3687 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/EditDDI.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/EditDDI.java
@@ -35,7 +35,6 @@
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
-import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Context;
import javax.ws.rs.Path;
@@ -92,9 +91,6 @@ public class EditDDI extends AbstractApiBean {
private List<FileMetadata> filesToBeDeleted = new ArrayList<>();
- @Context
- protected HttpServletRequest httpRequest;
-
private VariableMetadataUtil variableMetadataUtil;
@@ -193,7 +189,7 @@ private boolean createNewDraftVersion(ArrayList neededToUpdate
Command cmd;
try {
- DataverseRequest dr = new DataverseRequest(apiTokenUser, httpRequest);
+ DataverseRequest dr = createDataverseRequest(apiTokenUser);
cmd = new UpdateDatasetVersionCommand(dataset, dr, fm);
((UpdateDatasetVersionCommand) cmd).setValidateLenient(true);
dataset = commandEngine.submit(cmd);
@@ -335,7 +331,7 @@ private boolean updateDraftVersion(ArrayList neededToUpdateVM,
}
Command cmd;
try {
- DataverseRequest dr = new DataverseRequest(apiTokenUser, httpRequest);
+ DataverseRequest dr = createDataverseRequest(apiTokenUser);
cmd = new UpdateDatasetVersionCommand(dataset, dr);
((UpdateDatasetVersionCommand) cmd).setValidateLenient(true);
commandEngine.submit(cmd);
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Files.java b/src/main/java/edu/harvard/iq/dataverse/api/Files.java
index 78847119ce4..9dc0c3be524 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Files.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Files.java
@@ -12,6 +12,7 @@
import edu.harvard.iq.dataverse.DataverseServiceBean;
import edu.harvard.iq.dataverse.EjbDataverseEngine;
import edu.harvard.iq.dataverse.FileMetadata;
+import edu.harvard.iq.dataverse.TermsOfUseAndAccessValidator;
import edu.harvard.iq.dataverse.UserNotificationServiceBean;
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.authorization.users.User;
@@ -21,6 +22,7 @@
import edu.harvard.iq.dataverse.datasetutility.OptionalFileParams;
import edu.harvard.iq.dataverse.engine.command.DataverseRequest;
import edu.harvard.iq.dataverse.engine.command.exception.CommandException;
+import edu.harvard.iq.dataverse.engine.command.exception.IllegalCommandException;
import edu.harvard.iq.dataverse.engine.command.impl.GetDataFileCommand;
import edu.harvard.iq.dataverse.engine.command.impl.GetDraftFileMetadataIfAvailableCommand;
import edu.harvard.iq.dataverse.engine.command.impl.RedetectFileTypeCommand;
@@ -146,6 +148,12 @@ public Response restrictFileInDataset(@PathParam("id") String fileToRestrictId,
// update the dataset
try {
engineSvc.submit(new UpdateDatasetVersionCommand(dataFile.getOwner(), dataverseRequest));
+ } catch (IllegalCommandException ex) {
+ //special case where terms of use are out of compliance
+ if (!TermsOfUseAndAccessValidator.isTOUAValid(dataFile.getOwner().getLatestVersion().getTermsOfUseAndAccess(), null)) {
+ return conflict(BundleUtil.getStringFromBundle("dataset.message.toua.invalid"));
+ }
+ return error(BAD_REQUEST, "Problem saving datafile " + dataFile.getDisplayName() + ": " + ex.getLocalizedMessage());
} catch (CommandException ex) {
return error(BAD_REQUEST, "Problem saving datafile " + dataFile.getDisplayName() + ": " + ex.getLocalizedMessage());
}
@@ -232,7 +240,7 @@ public Response replaceFileInDataset(
}
} else {
return error(BAD_REQUEST,
- "You must upload a file or provide a storageidentifier, filename, and mimetype.");
+ "You must upload a file or provide a valid storageidentifier, filename, and mimetype.");
}
} else {
newFilename = contentDispositionHeader.getFileName();
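
The new IllegalCommandException branch above distinguishes invalid Terms of Use and Access from other save failures when a file is restricted. A sketch of the call that exercises it, using a placeholder file id and API token: ::

    # Restrict a file; if the dataset's Terms of Use and Access fail validation,
    # the API now answers 409 Conflict instead of a generic 400 Bad Request
    curl -X PUT -H "X-Dataverse-key: $API_TOKEN" "$SERVER_URL/api/files/42/restrict" -d 'true'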
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/LDNInbox.java b/src/main/java/edu/harvard/iq/dataverse/api/LDNInbox.java
new file mode 100644
index 00000000000..3912b9102e2
--- /dev/null
+++ b/src/main/java/edu/harvard/iq/dataverse/api/LDNInbox.java
@@ -0,0 +1,195 @@
+package edu.harvard.iq.dataverse.api;
+
+import edu.harvard.iq.dataverse.Dataset;
+import edu.harvard.iq.dataverse.DatasetServiceBean;
+import edu.harvard.iq.dataverse.DataverseRoleServiceBean;
+import edu.harvard.iq.dataverse.GlobalId;
+import edu.harvard.iq.dataverse.MailServiceBean;
+import edu.harvard.iq.dataverse.RoleAssigneeServiceBean;
+import edu.harvard.iq.dataverse.RoleAssignment;
+import edu.harvard.iq.dataverse.UserNotification;
+import edu.harvard.iq.dataverse.UserNotificationServiceBean;
+import edu.harvard.iq.dataverse.authorization.Permission;
+import edu.harvard.iq.dataverse.authorization.groups.impl.ipaddress.ip.IpAddress;
+import edu.harvard.iq.dataverse.engine.command.DataverseRequest;
+import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
+import edu.harvard.iq.dataverse.util.json.JSONLDUtil;
+import edu.harvard.iq.dataverse.util.json.JsonLDNamespace;
+import edu.harvard.iq.dataverse.util.json.JsonLDTerm;
+
+import java.util.Date;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.io.StringWriter;
+import java.sql.Timestamp;
+import java.util.logging.Logger;
+
+import javax.ejb.EJB;
+import javax.json.Json;
+import javax.json.JsonObject;
+import javax.json.JsonValue;
+import javax.json.JsonWriter;
+import javax.servlet.http.HttpServletRequest;
+import javax.ws.rs.BadRequestException;
+import javax.ws.rs.ServiceUnavailableException;
+import javax.ws.rs.Consumes;
+import javax.ws.rs.ForbiddenException;
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+
+@Path("inbox")
+public class LDNInbox extends AbstractApiBean {
+
+ private static final Logger logger = Logger.getLogger(LDNInbox.class.getName());
+
+ @EJB
+ SettingsServiceBean settingsService;
+
+ @EJB
+ DatasetServiceBean datasetService;
+
+ @EJB
+ MailServiceBean mailService;
+
+ @EJB
+ UserNotificationServiceBean userNotificationService;
+
+ @EJB
+ DataverseRoleServiceBean roleService;
+
+ @EJB
+ RoleAssigneeServiceBean roleAssigneeService;
+ @Context
+ protected HttpServletRequest httpRequest;
+
+ @POST
+ @Path("/")
+ @Consumes("application/ld+json, application/json-ld")
+ public Response acceptMessage(String body) {
+ IpAddress origin = new DataverseRequest(null, httpRequest).getSourceAddress();
+ String whitelist = settingsService.get(SettingsServiceBean.Key.LDNMessageHosts.toString(), "");
+ // Only do something if we listen to this host
+ if (whitelist.equals("*") || whitelist.contains(origin.toString())) {
+ String citingPID = null;
+ String citingType = null;
+ boolean sent = false;
+
+ JsonObject jsonld = null;
+ jsonld = JSONLDUtil.decontextualizeJsonLD(body);
+ if (jsonld == null) {
+ // Kludge - something about the coar notify URL causes a
+ // LOADING_REMOTE_CONTEXT_FAILED error in the titanium library - so replace it
+ // and try with a local copy
+ body = body.replace("\"https://purl.org/coar/notify\"",
+ "{\n" + " \"@vocab\": \"http://purl.org/coar/notify_vocabulary/\",\n"
+ + " \"ietf\": \"http://www.iana.org/assignments/relation/\",\n"
+ + " \"coar-notify\": \"http://purl.org/coar/notify_vocabulary/\",\n"
+ + " \"sorg\": \"http://schema.org/\",\n"
+ + " \"ReviewAction\": \"coar-notify:ReviewAction\",\n"
+ + " \"EndorsementAction\": \"coar-notify:EndorsementAction\",\n"
+ + " \"IngestAction\": \"coar-notify:IngestAction\",\n"
+ + " \"ietf:cite-as\": {\n" + " \"@type\": \"@id\"\n"
+ + " }}");
+ jsonld = JSONLDUtil.decontextualizeJsonLD(body);
+ }
+ if (jsonld == null) {
+ throw new BadRequestException("Could not parse message to find acceptable citation link to a dataset.");
+ }
+ String relationship = "isRelatedTo";
+ String name = null;
+ JsonLDNamespace activityStreams = JsonLDNamespace.defineNamespace("as",
+ "https://www.w3.org/ns/activitystreams#");
+ JsonLDNamespace ietf = JsonLDNamespace.defineNamespace("ietf", "http://www.iana.org/assignments/relation/");
+ String objectKey = new JsonLDTerm(activityStreams, "object").getUrl();
+ if (jsonld.containsKey(objectKey)) {
+ JsonObject msgObject = jsonld.getJsonObject(objectKey);
+
+ citingPID = msgObject.getJsonObject(new JsonLDTerm(ietf, "cite-as").getUrl()).getString("@id");
+ logger.fine("Citing PID: " + citingPID);
+ if (msgObject.containsKey("@type")) {
+ citingType = msgObject.getString("@type");
+ if (citingType.startsWith(JsonLDNamespace.schema.getUrl())) {
+ citingType = citingType.replace(JsonLDNamespace.schema.getUrl(), "");
+ }
+ if (msgObject.containsKey(JsonLDTerm.schemaOrg("name").getUrl())) {
+ name = msgObject.getString(JsonLDTerm.schemaOrg("name").getUrl());
+ }
+ logger.fine("Citing Type: " + citingType);
+ String contextKey = new JsonLDTerm(activityStreams, "context").getUrl();
+
+ if (jsonld.containsKey(contextKey)) {
+ JsonObject context = jsonld.getJsonObject(contextKey);
+ for (Map.Entry<String, JsonValue> entry : context.entrySet()) {
+
+ relationship = entry.getKey().replace("_:", "");
+ // Assuming only one for now - should check for array and loop
+ JsonObject citedResource = (JsonObject) entry.getValue();
+ String pid = citedResource.getJsonObject(new JsonLDTerm(ietf, "cite-as").getUrl())
+ .getString("@id");
+ if (citedResource.getString("@type").equals(JsonLDTerm.schemaOrg("Dataset").getUrl())) {
+ logger.fine("Raw PID: " + pid);
+ if (pid.startsWith(GlobalId.DOI_RESOLVER_URL)) {
+ pid = pid.replace(GlobalId.DOI_RESOLVER_URL, GlobalId.DOI_PROTOCOL + ":");
+ } else if (pid.startsWith(GlobalId.HDL_RESOLVER_URL)) {
+ pid = pid.replace(GlobalId.HDL_RESOLVER_URL, GlobalId.HDL_PROTOCOL + ":");
+ }
+ logger.fine("Protocol PID: " + pid);
+ Optional<GlobalId> id = GlobalId.parse(pid);
+ Dataset dataset = datasetSvc.findByGlobalId(pid);
+ if (dataset != null) {
+ JsonObject citingResource = Json.createObjectBuilder().add("@id", citingPID)
+ .add("@type", citingType).add("relationship", relationship)
+ .add("name", name).build();
+ StringWriter sw = new StringWriter(128);
+ try (JsonWriter jw = Json.createWriter(sw)) {
+ jw.write(citingResource);
+ }
+ String jsonstring = sw.toString();
+ Set<RoleAssignment> ras = roleService.rolesAssignments(dataset);
+
+ roleService.rolesAssignments(dataset).stream()
+ .filter(ra -> ra.getRole().permissions()
+ .contains(Permission.PublishDataset))
+ .flatMap(
+ ra -> roleAssigneeService
+ .getExplicitUsers(roleAssigneeService
+ .getRoleAssignee(ra.getAssigneeIdentifier()))
+ .stream())
+ .distinct() // prevent double-send
+ .forEach(au -> {
+
+ if (au.isSuperuser()) {
+ userNotificationService.sendNotification(au,
+ new Timestamp(new Date().getTime()),
+ UserNotification.Type.DATASETMENTIONED, dataset.getId(),
+ null, null, true, jsonstring);
+
+ }
+ });
+ sent = true;
+ }
+ }
+ }
+ }
+ }
+ }
+
+ if (!sent) {
+ if (citingPID == null || citingType == null) {
+ throw new BadRequestException(
+ "Could not parse message to find acceptable citation link to a dataset.");
+ } else {
+ throw new ServiceUnavailableException(
+ "Unable to process message. Please contact the administrators.");
+ }
+ }
+ } else {
+ logger.info("Ignoring message from IP address: " + origin.toString());
+ throw new ForbiddenException("Inbox does not accept messages from this address");
+ }
+ return ok("Message Received");
+ }
+}
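
The new LDNInbox resource only accepts notifications from hosts named in the :LDNMessageHosts setting and expects a JSON-LD body. A hedged sketch of configuring and posting to it (the payload file name is illustrative, not taken from the diff): ::

    # Allow any host (or list specific IP addresses) to deliver LDN messages
    curl -X PUT -d '*' "$SERVER_URL/api/admin/settings/:LDNMessageHosts"

    # Deliver an announcement; the JSON-LD must resolve to a citation of a
    # dataset hosted in this installation, otherwise a 400 is returned
    curl -X POST -H "Content-Type: application/ld+json" \
         "$SERVER_URL/api/inbox" --data-binary @announcement.jsonld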
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Metadata.java b/src/main/java/edu/harvard/iq/dataverse/api/Metadata.java
index 5084b5267a4..b0d82b69d1b 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Metadata.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Metadata.java
@@ -5,19 +5,25 @@
*/
package edu.harvard.iq.dataverse.api;
+import edu.harvard.iq.dataverse.Dataset;
import edu.harvard.iq.dataverse.DatasetServiceBean;
+
+import java.io.IOException;
+import java.util.concurrent.Future;
import java.util.logging.Logger;
import javax.ejb.EJB;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
+import javax.json.Json;
+import javax.json.JsonArrayBuilder;
+import javax.json.JsonObjectBuilder;
+import javax.ws.rs.*;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.PUT;
+
+import edu.harvard.iq.dataverse.DatasetVersion;
import edu.harvard.iq.dataverse.harvest.server.OAISetServiceBean;
import edu.harvard.iq.dataverse.harvest.server.OAISet;
+import org.apache.solr.client.solrj.SolrServerException;
/**
*
@@ -59,7 +65,27 @@ public Response exportAll() {
public Response reExportAll() {
datasetService.reExportAllAsync();
return this.accepted();
- }
+ }
+
+ @GET
+ @Path("{id}/reExportDataset")
+ public Response indexDatasetByPersistentId(@PathParam("id") String id) {
+ try {
+ Dataset dataset = findDatasetOrDie(id);
+ datasetService.reExportDatasetAsync(dataset);
+ return ok("export started");
+ } catch (WrappedResponse wr) {
+ return wr.getResponse();
+ }
+ }
+
+ @GET
+ @Path("clearExportTimestamps")
+ public Response clearExportTimestamps() {
+ // only clear the timestamp in the database, cached metadata export files are not deleted
+ int numItemsCleared = datasetService.clearAllExportTimes();
+ return ok("cleared: " + numItemsCleared);
+ }
/**
* initial attempt at triggering indexing/creation/population of a OAI set without going throught
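
The two new export endpoints above follow the same pattern as the existing reExportAll call. A sketch, assuming the resource remains mounted under /api/admin/metadata and using a placeholder dataset id: ::

    # Re-export all cached metadata formats for one dataset
    curl "$SERVER_URL/api/admin/metadata/42/reExportDataset"

    # Clear every dataset's export timestamp so the next export run regenerates them;
    # cached export files themselves are left in place
    curl "$SERVER_URL/api/admin/metadata/clearExportTimestamps"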
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/Users.java b/src/main/java/edu/harvard/iq/dataverse/api/Users.java
index b1177531874..d3b938af960 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/Users.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/Users.java
@@ -83,7 +83,7 @@ public Response mergeInAuthenticatedUser(@PathParam("consumedIdentifier") String
return error(Response.Status.BAD_REQUEST, "Error calling ChangeUserIdentifierCommand: " + e.getLocalizedMessage());
}
- return ok("All account data for " + consumedIdentifier + " has been merged into " + baseIdentifier + " .");
+ return ok(String.format("All account data for %s has been merged into %s.", consumedIdentifier, baseIdentifier));
}
@POST
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/datadeposit/SwordServiceBean.java b/src/main/java/edu/harvard/iq/dataverse/api/datadeposit/SwordServiceBean.java
index 96df3ab400a..2e093dbcf36 100644
--- a/src/main/java/edu/harvard/iq/dataverse/api/datadeposit/SwordServiceBean.java
+++ b/src/main/java/edu/harvard/iq/dataverse/api/datadeposit/SwordServiceBean.java
@@ -9,6 +9,7 @@
import edu.harvard.iq.dataverse.TermsOfUseAndAccess;
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.authorization.users.User;
+import edu.harvard.iq.dataverse.dataset.DatasetUtil;
import edu.harvard.iq.dataverse.license.License;
import edu.harvard.iq.dataverse.license.LicenseServiceBean;
import edu.harvard.iq.dataverse.util.BundleUtil;
@@ -163,7 +164,7 @@ public void setDatasetLicenseAndTermsOfUse(DatasetVersion datasetVersionToMutate
terms.setDatasetVersion(datasetVersionToMutate);
if (listOfLicensesProvided == null) {
- License existingLicense = datasetVersionToMutate.getTermsOfUseAndAccess().getLicense();
+ License existingLicense = DatasetUtil.getLicense(datasetVersionToMutate);
if (existingLicense != null) {
// leave the license alone but set terms of use
setTermsOfUse(datasetVersionToMutate, dcterms, existingLicense);
diff --git a/src/main/java/edu/harvard/iq/dataverse/api/dto/DataverseMetadataBlockFacetDTO.java b/src/main/java/edu/harvard/iq/dataverse/api/dto/DataverseMetadataBlockFacetDTO.java
new file mode 100644
index 00000000000..65b6f0ff58f
--- /dev/null
+++ b/src/main/java/edu/harvard/iq/dataverse/api/dto/DataverseMetadataBlockFacetDTO.java
@@ -0,0 +1,56 @@
+package edu.harvard.iq.dataverse.api.dto;
+
+import java.util.List;
+
+/**
+ *
+ * @author adaybujeda
+ */
+public class DataverseMetadataBlockFacetDTO {
+
+ private Long dataverseId;
+ private String dataverseAlias;
+ private boolean isMetadataBlockFacetRoot;
+ private List<MetadataBlockDTO> metadataBlocks;
+
+ public DataverseMetadataBlockFacetDTO(Long dataverseId, String dataverseAlias, boolean isMetadataBlockFacetRoot, List