Developing backend
($INSTANCE is a single digit. On dev-5 it can be 5 or 6, see https://github.com/dataquest-dev/DSpace/wiki/Developing-dev-5/. When deployed at a customer, it is usually 0.)
In order to connect to the database, follow these steps:
- If it runs in docker, first exec into the container like so:
sudo docker exec -it dspacedb$INSTANCE /bin/bash
- Then log in to postgres with the user dspace like so:
psql -U dspace -p 543$INSTANCE
(You might need to locate psql first. It can be found in the postgres installation folder or in its bin subfolder.)
- As the shell suggests, type help for help.
- Misc useful commands:
- \dt
to list all tables
- \d tablename
to describe a table, e.g. \d eperson
- select * from eperson;
to see all entries in eperson. Note the ; here, which is missing in the \d commands.
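For illustration, a typical session might look like this (the instance number 5 below is only an example):

# exec into the database container; the instance number 5 is an example
sudo docker exec -it dspacedb5 /bin/bash
# inside the container, connect to the dspace database
psql -U dspace -p 5435
# then, inside psql:
#   \dt                      -- list all tables
#   \d eperson               -- describe the eperson table
#   select * from eperson;   -- show all rows of eperson
#   \q                       -- quit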
Update Handle table - add a url column there:
- Add the migration script (see the sketch after this list) to
dspace-api/src/main/resources/org/dspace/storage/rdbms/sqlmigration/postgres
- Rebuild [dspace-source] to generate a fresh dspace folder
- Run the
dspace database migrate
command in dspace/bin
Don't forget to change the database for tests: see Testing.
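A minimal sketch of such a migration script. The file name below is a made-up example - follow the Flyway versioning scheme of the scripts already present in that folder:

# create the migration script; the file name is only an example
cat > dspace-api/src/main/resources/org/dspace/storage/rdbms/sqlmigration/postgres/V7.1_2022.01.01__handle_add_url.sql <<'EOF'
-- add a url column to the handle table
ALTER TABLE handle ADD COLUMN url VARCHAR(2048);
EOF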
As described at https://www.postgresql.org/docs/current/backup-dump.html, it is possible to create dump files and load them back.
Creating a dump is quite simple, just use pg_dump -U dspace dspace > path/to/file.sql. (NOTE: all programs/scripts are located in [postgres-installation]/bin, it is best to execute them from there.)
In the command above, -U dspace means user dspace and the second dspace is the name of the database. I found it easiest to use the owner of the database.
To import a database, it is important to drop all the tables first, or alternatively drop the whole database and recreate it.
In order to drop and recreate it, log in as root: psql -U postgres. You should then be prompted for a password (for the user postgres, which should be admin). After login, check the connection info by typing \conninfo. You must not be connected to the database you want to drop. Also end any other programs that might be connected, in this scenario namely the dspace backend. Drop the database by issuing the command drop database dspace;
To create a new database, log out by typing \q. Then execute createdb --username=postgres --owner=dspace --encoding=UNICODE dspace.
Log in again as postgres (psql -U postgres), switch to the dspace database with \connect dspace and execute the following command: create extension pgcrypto;
Log out again (\q) and execute psql -U postgres dspace < path/to/db_file.sql. Then go to [dspace-installation]/bin, migrate the database with dspace database migrate force and rebuild the discovery indexes with dspace index-discovery.
Done.
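Put together, the whole drop-and-restore sequence might look like this (a sketch; paths depend on your installation and you will be prompted for the postgres password):

# dump the current database (run from [postgres-installation]/bin)
pg_dump -U dspace dspace > /tmp/dspace_dump.sql

# stop anything connected to the database (e.g. the dspace backend), then:
psql -U postgres -c 'drop database dspace;'
createdb --username=postgres --owner=dspace --encoding=UNICODE dspace
psql -U postgres -d dspace -c 'create extension pgcrypto;'

# load the dump and finish up
psql -U postgres dspace < /tmp/dspace_dump.sql
cd [dspace-installation]/bin
./dspace database migrate force
./dspace index-discovery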
Database installation instructions, in case of a first install.
In dspace/config/submission-forms.xml, set the input type to <input-type>autocomplete</input-type>
and the values will be loaded from the appropriate metadata field.
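A minimal sketch of such a field definition, assuming a hypothetical dc.subject field (printed via a heredoc for reference; paste the fragment into the relevant form in submission-forms.xml):

# illustration only - the dc.subject field below is an assumed example
cat <<'EOF'
<field>
  <dc-schema>dc</dc-schema>
  <dc-element>subject</dc-element>
  <repeatable>true</repeatable>
  <label>Subject</label>
  <input-type>autocomplete</input-type>
  <hint>Values are suggested from the corresponding metadata field.</hint>
  <required></required>
</field>
EOF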
How to harvest in Docker
Related issue: https://github.com/dataquest-dev/dspace-angular/issues/48
- go to [dspace-angular] - in our case /opt/actions-runner-dq-1/_work/dspace-angular/dspace-angular (Dataquest)
- remove the entrypoint in docker/cli.yml
- connect to the docker-cli:
docker-compose --env-file build-scripts/run/envs/.default -p dq-d7 -f docker/cli.yml run -v $(pwd)/build-scripts/import/assets:/assets --rm dspace-cli /bin/bash
- create collection:
./dspace structure-builder -f /assets/test_community_collection.xml -o /assets/test_import_output.xml -e [email protected]
- prepare collection for harvesting:
./dspace harvest -s -c 123456789/2 -a http://lindat.mff.cuni.cz/repository/oai/request -i hdl_11234_3430 -m dc -t 1 -e [email protected]
- harvest to the collection:
./dspace harvest -r -c 123456789/2 -a http://lindat.mff.cuni.cz/repository/oai/request -i hdl_11234_3430 -m dc -t 1 -e [email protected]
Unresolved problems:
- The script harvest.sh harvests the items into the collection 123456789/2, but it works only on the first run, because the database remembers all historical collection ids.
- Add the custom metadata to the
local-types.xml
- Run
mvn package
in the [dspace-source]/dspace
- Run
ant fresh_install
in the [dspace-source]/dspace/target/dspace-installer
- Run
dspace database migrate force
in the [dspace-source]/bin
- Log in as an Administrator
- In the side menu that appears, open the "Registries" menu. Click on "Metadata". This is where all your metadata fields are defined
- Now you'll see a list of all the metadata schemas that DSpace is initialized with. Click on the "local" schema to see its fields
- The "local" schema is empty by default, so no fields should appear. But you can add a new field by minimally adding an Element and clicking Save. For example, to create "local.myfield", place the text "myfield" in the Element and click Save. To add a field named "local.myfield.other", add the text "myfield" in the Element and "other" in the Qualifier and click Save.
- Here's the endpoint which is used: https://github.com/DSpace/RestContract/blob/main/metadatafields.md#creating-a-metadata-field
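As a sketch, creating such a field over REST could look like the following (the server URL, auth token and schema id are placeholders; see the contract linked above for the exact request details):

# SERVER, TOKEN and the schema id (42) are placeholders, not real values
curl -X POST "https://$SERVER/server/api/core/metadatafields?schemaId=42" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"element": "myfield", "qualifier": "other", "scopeNote": "example field"}'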
Some changes are not pushed to the testEnvironment.zip after the mvn package command because:
- the '/target/' folder needs to be updated - run
mvn clean package
, not just mvn package
- after running the test, the 'testEnvironment.zip' is loaded from the .m2 and not created - remove it from the .m2 (see the sketch below).
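A quick way to find and remove the cached archive (a sketch; its exact path inside .m2 depends on the DSpace version):

# locate any cached testEnvironment.zip in the local maven repository
find ~/.m2/repository -name 'testEnvironment.zip'
# and remove it so the next build recreates it
find ~/.m2/repository -name 'testEnvironment.zip' -delete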
To add the license header to files which are missing it, run mvn license:format.
Use Matchers.is
instead of Matchers.hasItem
Necessary classes for creating a new endpoint for a new table in the DB:
- Object class (Handle.java, MetadataValue.java, ...) -> add it to hibernate.cfg
- DAO/Impl -> add it to core-dao-services.xml
- Service/Impl -> add it to core-services.xml. Note: if the service wants to call the protected constructor of the object, it must be in the package org.dspace.content!
- RestRepository/IT
- Rest
- Converter
- Resource
- Builder
- ServiceFactory/Impl -> add it to core-factory-services.xml
- Create a collection - you need to have the handle of that collection
- Configure a new submission process in the
item-submission.xml
file, e.g., https://github.com/dataquest-dev/DSpace/pull/231/files#diff-cbecd668e9534b5ef05a6d8838b62d5f2c3f2ea46fdb1e2f980a2cffeee51d4dR368
- Rebuild the DSpace project.
- Update the
spring.servlet.multipart.max-file-size
and spring.servlet.multipart.max-request-size
props to your maximum upload file size limit in the clarin-dspace.cfg (see the sketch after this list)
- Update
submission-forms.xml
- the big file input field must be set up:
<row>
<field>
<dc-schema>local</dc-schema>
<dc-element>bitstream</dc-element>
<dc-qualifier>redirectToURL</dc-qualifier>
<repeatable>false</repeatable>
<label>Big file URL</label>
<input-type>onebox</input-type>
<hint>
The actual maximum upload size of a file is 4GB. To upload a file bigger than the maximum
upload size, type the URL of that big file. The admin must know the URL of that bitstream file.
Then click on the 'Save' button and the file will start to upload. The file will be loaded
from the '/temp' folder of the server. Example: /tomcat/temp/bitstream.png
</hint>
<required></required>
<acl>
policy=deny,action=read,grantee-type=user,grantee-id=*
</acl>
</field>
</row>
- Do not forget about the nginx and tomcat upload size restrictions
- Try to upload a big file following these steps: https://github.com/dataquest-dev/DSpace/wiki/For-users#uploading--files-which-are-bigger-than-maximum-upload-size
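A sketch of the property update mentioned in the list above (the 4GB value and the config path are assumptions; adjust them to your installation and limit):

# append the multipart limits to clarin-dspace.cfg; 4GB is an example value
cat >> [dspace-installation]/config/clarin-dspace.cfg <<'EOF'
spring.servlet.multipart.max-file-size = 4GB
spring.servlet.multipart.max-request-size = 4GB
EOF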