diff --git a/submissions/431/_quarto.yml b/submissions/431/_quarto.yml new file mode 100644 index 0000000..ae6ece2 --- /dev/null +++ b/submissions/431/_quarto.yml @@ -0,0 +1,8 @@ +project: + type: manuscript + +manuscript: + article: index.qmd + +format: + html: default diff --git a/submissions/431/images/Network analysis settings in nodegoat.png b/submissions/431/images/Network analysis settings in nodegoat.png new file mode 100644 index 0000000..da99c2f Binary files /dev/null and b/submissions/431/images/Network analysis settings in nodegoat.png differ diff --git a/submissions/431/images/Network of Basel jurists 1460-1550.png b/submissions/431/images/Network of Basel jurists 1460-1550.png new file mode 100644 index 0000000..9dab109 Binary files /dev/null and b/submissions/431/images/Network of Basel jurists 1460-1550.png differ diff --git a/submissions/431/images/Places of activity of Basel jurists 1460-1550.png b/submissions/431/images/Places of activity of Basel jurists 1460-1550.png new file mode 100644 index 0000000..c79e393 Binary files /dev/null and b/submissions/431/images/Places of activity of Basel jurists 1460-1550.png differ diff --git a/submissions/431/images/Places of origin of students at the university of Basel 1460-1550.png b/submissions/431/images/Places of origin of students at the university of Basel 1460-1550.png new file mode 100644 index 0000000..17d802b Binary files /dev/null and b/submissions/431/images/Places of origin of students at the university of Basel 1460-1550.png differ diff --git a/submissions/431/images/RAG Eingabemaske MS Access.JPG b/submissions/431/images/RAG Eingabemaske MS Access.JPG new file mode 100644 index 0000000..ff779cb Binary files /dev/null and b/submissions/431/images/RAG Eingabemaske MS Access.JPG differ diff --git a/submissions/431/images/RAG nodegoat frontend for data collection.png b/submissions/431/images/RAG nodegoat frontend for data collection.png new file mode 100644 index 0000000..915be3e Binary files 
/dev/null and b/submissions/431/images/RAG nodegoat frontend for data collection.png differ diff --git a/submissions/431/images/nodegoat text reconciliation settings.png b/submissions/431/images/nodegoat text reconciliation settings.png new file mode 100644 index 0000000..399e0af Binary files /dev/null and b/submissions/431/images/nodegoat text reconciliation settings.png differ diff --git a/submissions/431/index.qmd b/submissions/431/index.qmd new file mode 100644 index 0000000..51ff9e4 --- /dev/null +++ b/submissions/431/index.qmd @@ -0,0 +1,145 @@ +--- +submission_id: 431 +categories: 'Session 2A' +title: "From manual work to artificial intelligence: developments in data literacy using the example of the Repertorium Academicum Germanicum (2001-2024)" +author: + - name: Kaspar Gubler + orcid: 0000-0002-6627-5045 + email: kaspar.gubler@unibe.ch + affiliations: + - University of Bern + - University of Krakow (Hector) + +keywords: + - Digital Prosopography + - Data Biographies + - Data visualisations + - Network analysis + - History of knowledge and science + - History of universities + +abstract: | + The Repertorium Academicum Germanicum (RAG) is a prosopographical research project dedicated to studying medieval scholars and their impact on society in Europe from 1250 to 1550. The RAG database contains approximately 62,000 scholars and 400,000 biographical entries across 26,000 locations, derived from university registers, academic sources, and general biographical records. As a pioneering project in digital prosopography, the RAG is exemplary for the development of data competences in the last 20 years. The presentation will therefore highlight the methods, procedures, best practices and future approaches used to date. What is special about the RAG is that the project not only collects data, but also analyses it in a targeted manner with a focus on data visualisations (maps, networks, time series). 
RAG presents the results in its own series of publications, [RAG Forschungen](https://vdf.ch/index.php?route=product/collection&language=de-DE&collection_id=35). + +key-points: + - The Repertorium Academicum Germanicum (RAG) focuses on the knowledge influence of medieval scholars in pre-modern Europe, creating a comprehensive research database. + - The RAG database, with data on 62,000 scholars, has advanced from manual to computer-aided and AI-assisted data collection and analysis. + - Technological advancements, including the use of nodegoat, have enhanced data management, collaboration, and accessibility, integrating AI for improved historical data analysis. +date: 07-07-2024 +bibliography: references.bib +--- + +## Introduction + +The core data of the RAG is based on the university registers. The registers usually contain the names and places of origin of the students as well as the date of enrolment. This data is enriched in the research database with biographical data on subjects studied, professional activities and written works. Since 2020, the RAG has been a sub-project of the umbrella project Repertorium Academicum (REPAC), which is being carried out at the Historical Institute of the University of Bern. On the project and its developments, see [@gubler_hesse_schwinges2022]. +Data skills in the RAG can be divided into data collection, data entry and data analysis. Different data skills are required in these three areas, and they have of course also changed over time as a result of digitalisation. While compiling and analysing data has been simplified by computer-aided processes, the precise recording of data in the database still requires in-depth historical knowledge and human intelligence. + +## Project history + +The RAG started with a Microsoft Access database as a multi-user installation.
In 2007, the switch was made to a client-server architecture, with MS Access continuing to serve as the front end and a Microsoft SQL Server being added as the back end. This configuration had to be replaced in 2017, as regular software updates for the client and server had been neglected. As a result, it was no longer possible to update the MS Access client to the new architecture in good time, and the server, which was still running the outdated MS SQL Server 2005, increasingly posed a security risk. In addition, publishing the data on the internet was only possible to a limited extent, as a fragmented export from the MS SQL server to a MySQL database with a PHP front end was required. +In 2017, it was therefore decided to switch to a new system [@gubler2020]. + + + +![Fig. 1: Former frontend of the RAG project for data collection in MS Access 2003.](images/RAG Eingabemaske MS Access.JPG) + + +Over one million data records on people, events, observations, locations, institutions, sources and literature were to be integrated in a database migration - a project that had previously been considered for years without success. After an evaluation of possible research environments, nodegoat was chosen [@vanBree_Kessels2013]; it had been recommended by a colleague who had attended a nodegoat workshop [@gubler2021]. With nodegoat, the RAG was able to implement the desired functions immediately: + +- Location-independent data collection thanks to a web-based front end. + +- Data visualisations (maps, networks, time series) are integrated directly into nodegoat, which means that exporting to other software is not necessary, but remains possible. + +- Research data can be published directly from nodegoat without the need to export it to other software. + + +From then on, the RAG research team worked with nodegoat in a live environment in which collected data could be made available on the Internet immediately after a brief review.
This facilitated the exchange with the research community and the interested public and significantly increased the visibility of the research project. The database migration to nodegoat meant that the biographical details of around 10,000 people could be published for the first time, which had previously not been possible due to difficulties in exporting data from the MS SQL server. On 1 January 2018, the research teams at the universities in Bern and Giessen then began collecting data in nodegoat, starting with extensive standardisation of the data. Thanks to a multi-change function in nodegoat, these standardisations could now be carried out efficiently by all users. Institutions where biographical events took place (e.g. universities, schools, cities, courts, churches, monasteries) were newly introduced. + +![Fig. 2: Frontend of the RAG project for data collection in nodegoat.](images/RAG nodegoat frontend for data collection.png) + + +## Methodology + +These institutions were assigned to the events accordingly, which forms the basis for the project's method of analysis: analysing the data according to the criteria 'incoming' and 'outgoing' [@gubler2022]. The key questions here are: Which people, ideas or knowledge entered an institution or space? + +![Fig. 3: Incoming: Places of origin of students at the University of Basel 1460-1550; the large dot in the centre is the city of Basel. Data: repac.ch, 07/2024.](images/Places of origin of students at the university of Basel 1460-1550.png) + +How was this knowledge shared and passed on there? Spaces are considered both as geographical locations and as knowledge spaces within networks of scholars. In addition, the written works of scholars are taken into account in order to document their knowledge. The people themselves are seen as knowledge carriers who acquire knowledge and pass it on.
Consequently, the people are linked to their knowledge in the database using approaches from the history of knowledge [@steckel2015]. The methodology described can therefore not only be used to research the circulation of knowledge between individuals and institutions, but also to digitally reconstruct spheres of influence and knowledge, for example by discipline: spaces that were shaped by jurists, physicians or theologians. The map shows places or regions where a particularly large number of Basel jurists were active. The second graphic shows the network of the same group, with the famous Bonifacius Amerbach as a strong link in the centre. The network layout is generated using a force-directed graph. + +![Fig. 4: Outgoing: Spheres of activity of jurists with a doctorate from the University of Basel +1460-1550. Data: repac.ch, 07/2024.](images/Places of activity of Basel jurists 1460-1550.png) + +![Fig. 5: Network: Jurists with a doctorate from the University of Basel +1460-1550. Data: repac.ch, 07/2024.](images/Network of Basel jurists 1460-1550.png) + + +## Data literacy + +Students and researchers working on the RAG project can acquire important data skills. As noted, we can distinguish between the skills required to collect, enter and analyse the biographical data. Key learning content for students working in the RAG project related to data entry includes: + +- Basics of data modelling + +Basic knowledge of the use of digital research tools and platforms. Students learn how to design and adapt data structures in order to systematically enter, manage and analyse historical information. They understand how to define entities (such as people, places, events) and their relationships. + +- Basics of data collection + +The collection of data in a historical project involves several steps and methods to ensure data consistency. In the project, students learn how to search and evaluate sources based on research questions and extract the relevant information.
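The entity-and-relation structure students learn to define can be sketched in a few lines. This is a hypothetical, heavily simplified illustration, not the actual RAG/nodegoat data model, which distinguishes roughly 900 biographical categories:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Simplified, illustrative entity types: persons, places and dated
# biographical events linked to institutions. All names are assumptions.

@dataclass
class Place:
    name: str

@dataclass
class Event:
    category: str                     # e.g. 'enrolment', 'doctorate'
    year: int
    place: Place
    institution: Optional[str] = None

@dataclass
class Person:
    name: str
    origin: Place
    events: List[Event] = field(default_factory=list)

# A register entry such as "Johannes, from Basel, enrolled 1462" becomes:
basel = Place("Basel")
student = Person("Johannes", origin=basel)
student.events.append(
    Event("enrolment", 1462, basel, institution="University of Basel"))
```

Because every event carries a place and an institution, the same structure supports both the 'incoming' and the 'outgoing' perspective described above.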
Both quantitative and qualitative approaches are considered in the methods of data collection. An SNSF Spark project provides an example of a quantitative approach to the dynamic ingestion of linked open data into a nodegoat environment [@gubler2021_1]. + +- Data entry and management + +Students acquire practical experience in entering and maintaining data within a digital research environment. Additionally, they learn to document workflows and data sources to ensure transparency and traceability. For effective data entry, both students and researchers must develop essential skills related to the extraction and evaluation of historical information. + +- Source criticism and information extraction + +The project's most challenging task is extracting relevant biographical information from sources and literature and systematically recording and documenting it in the database according to project-specific guidelines. The goal is to achieve the highest possible standardisation to ensure data quality and consistency. Specifically, students must select life events from approximately 900 biographical categories to accurately record an event. These categories are divided into three major blocks: 1) personal data (birth, death, social and geographical origin, etc.), 2) academic data (specialisations, degrees), and 3) professional activities. These encompass all potential fields of activity in both ecclesiastical and secular administration in the late Middle Ages. Collecting data and accurately evaluating information from sources and research literature is a demanding task that requires a solid knowledge of history and Latin. + +Key learning content related to data analysis includes: + +- Learning how to query a database. The use of filters and search functions for targeted data analysis requires a solid understanding of the data model, the data collection methodology, and the available content.
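As a toy illustration of such a filter query, run over a hypothetical flat export of biographical events rather than the actual nodegoat query interface, one might select all 'incoming' enrolment events for one institution and period:

```python
# Hypothetical flat export of biographical events; field names are
# illustrative assumptions, not the real RAG/nodegoat schema.
events = [
    {"person": "A", "category": "enrolment", "institution": "University of Basel",  "year": 1471},
    {"person": "B", "category": "doctorate", "institution": "University of Basel",  "year": 1502},
    {"person": "C", "category": "enrolment", "institution": "University of Erfurt", "year": 1488},
]

def incoming(events, institution, start, end, category="enrolment"):
    """Filter events for persons entering an institution in a given period."""
    return [e for e in events
            if e["institution"] == institution
            and e["category"] == category
            and start <= e["year"] <= end]

# Incoming students at Basel, 1460-1550: only person "A" matches.
basel_students = incoming(events, "University of Basel", 1460, 1550)
```

The same kind of query, with the filter inverted to places of activity after graduation, yields the 'outgoing' perspective.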
For an initial overview of the data and, if necessary, for in-depth analysis, AI tools will also be used in the project in the future. Such tools can help with data retrieval, as the data can be queried using natural language prompts. + +- Geographical and temporal visualisations + +Use of GIS functionalities to create and analyse geographical maps. Visualisation of historical data on time axes to show chronological processes and changes. + +- Network analysis + +Knowing and applying methods for linking different data sets and for analysing networks and interactions between historical actors such as people, institutions, objects and others. The data can also be exported from nodegoat in order to evaluate it with other visualisation software, such as Gephi for network analysis. Figure 6 shows the general settings in nodegoat for network analyses. + +![Fig. 6: General settings for network analyses in nodegoat.](images/Network analysis settings in nodegoat.png) + +- Interpretation of the digital findings (patterns, developments) + +The most important skill in the entire research process is, of course, the ability to interpret the results. The data is always interpreted against the background of the historical context. Without well-founded historical expertise, the data cannot provide in-depth insights for historical research, but at best enables superficial observations. It follows that when working with research data, a double source criticism must always take place: when obtaining the information from the sources (data collection) and when analysing the digital results obtained from the information (data interpretation). + +## Digitisation + +How have the described data competences changed since the start of the project in 2001?
This question is linked to changes in the research infrastructure, to the availability of digitised material (sources and literature), and to the question of how computer-aided automation, and artificial intelligence in particular, has influenced and will continue to influence the practices of data collection, entry and analysis in the project, thereby expanding the epistemological framework. The most important factors in connection with digitalisation in general are: + +- Resources: The increasing availability of digitized texts, particularly through Google Books, has significantly transformed prosopographical and biographical research. Not only is a wealth of information more accessible today, but it can also be entered into databases more efficiently. Consequently, skills for digital research and information processing have had to be continuously adapted throughout the course of the project. + +- Tools: Since the start of the project, new software tools have significantly transformed the processes of collecting, extracting, entering, and analyzing information. The most substantial development has been in data analysis, which, thanks to advanced tools and user-friendly graphical interfaces, has become accessible to a wide range of researchers, no longer being limited to computer scientists. AI tools also open up huge potential for data analysis: large amounts of data can be analyzed in a short time using simple query languages. However, when using AI, the results must be examined even more critically than with conventional data analysis. + +- Data analysis: The visualization of research data in historical studies has seen significant advancements. For instance, data can now be displayed on historical maps, within networks, or in time series, and dynamically over time using a time slider in a research environment like nodegoat.
This has accelerated data analysis: tasks like creating a map, which took weeks in the early years of the RAG project, now take only a few minutes. + +- Interpretation of the data: The core method of historical scholarship, source criticism, has also evolved significantly. While it traditionally involved evaluating information from sources and literature, today it also requires the ability to analyze data visualizations and network representations derived from these sources. To adequately assess these digital findings, a thorough understanding of the data model, data context, and historical background is essential. Consequently, data analysis presents new challenges for historical research, necessitating advanced data competencies at multiple levels. + +- Collaboration: Web-based research environments have made collaboration much easier and more transparent. Teams are now able to follow each other's progress in real time, making the location of the work less important and communication smoother. + + +## Human and artificial intelligence + +Regarding data collection, entry, and analysis, artificial intelligence significantly impacts several, though not all, tasks within the RAG project. + +- Data collection: AI supports the rapid processing and pre-sorting of digital information used for data collection. For example, Transkribus is utilized to create OCR texts, which are then directly imported into nodegoat and matched with specific vocabularies using algorithms [@gubler2023]. This technology aids the RAG project by efficiently detecting references to students and scholars within large text corpora, significantly speeding up the identification and extraction process. + +![Fig. 7: Example settings for the algorithm for reconciling textual data in nodegoat.](images/nodegoat text reconciliation settings.png) + + +- Data entry: In this area, human intelligence remains crucial. 
In-depth specialist knowledge of the historical field under investigation is essential, particularly concerning the history of universities and knowledge in the European Middle Ages and the Renaissance. Due to the heterogeneous and often fragmented nature of the sources, AI cannot yet replicate this expertise. The nuanced understanding required to interpret historical events and their semantic levels still necessitates human insight. + +- Data analysis: While AI support for data entry is still limited, it is much greater for data analysis. The epistemological framework has expanded considerably, not only in digital prosopography and digital biographical research, but in history in general. Exploratory data analysis in particular will become a key methodology in history through the application of AI. + +## Conclusion + +Since the 1990s, digital resources and tools have become increasingly prevalent in historical research. However, skills related to handling data remain underdeveloped in this field. This gap is not due to a lack of interest from students, but rather stems from a chronic lack of available training opportunities. This situation has gradually improved in recent years, with a growing number of courses and significant initiatives promoting digital history. +Nevertheless, the responsibility now lies with academic chairs to take a more proactive role in integrating a sustainable range of digital courses into the general history curriculum. It is crucial that data literacy becomes a fundamental component of the training of history students, particularly considering their future career prospects and the increasingly complex task of evaluating information, including the critical use of artificial intelligence methods, tools and results. This applies especially to the methodology of source criticism, which is now more important than ever in the evaluation of AI-generated results.
In addition to formal teaching, more project-based learning should be offered to support students in acquiring digital skills. diff --git a/submissions/431/references.bib b/submissions/431/references.bib new file mode 100644 index 0000000..6646775 --- /dev/null +++ b/submissions/431/references.bib @@ -0,0 +1,93 @@ +@book{vanBree_Kessels2013, + title = {nodegoat: a web-based data management, network analysis & visualisation environment, http://nodegoat.net from LAB1100, http://lab1100.com}, + shorttitle = {nodegoat data management}, + author = {van Bree, Pim and Kessels, Geert}, + date = {2013}, + howpublished = "\url{https://nodegoat.net}", + langid = {english}, + keywords = {data management, data visualisation, network analysis, research environment}, +} +@book{gubler_hesse_schwinges2022, + title = {Person und Wissen. Bilanz und Perspektiven (RAG Forschungen 4)}, + shorttitle = {Person und Wissen}, + author = {Gubler, Kaspar and Hesse, Christian and Schwinges, Rainer Christoph}, + date = {2022}, + publisher = {vdf,Zürich}, + doi = {10.3218/4114-9}, + langid = {german}, + keywords = {digital prosopography, data biographies}, +} +@book{gubler2020, + title = {Database Migration Case Study: Repertorium Academicum Germanicum (RAG)}, + shorttitle = {Database Migration}, + author = {Gubler, Kaspar}, + date = {2020}, + publisher = {histdata.hypotheses.org}, + doi = {10.58079/pldk}, + langid = {english}, + keywords = {database migration,case study, methodology}, +} +@book{gubler2021, + title = {The coffee break as a driver of science: Nodegoat @ Uni Bern (2017-2021)}, + shorttitle = {nodegoat @ Uni Bern}, + author = {Gubler, Kaspar}, + date = {2021}, + publisher = {histdata.hypotheses.org}, + doi = {10.58079/ple4}, + langid = {english}, + keywords = {nodegoat,database migration,data management, data visualisation,network analysis}, +} +@book{gubler2021_1, + title = {Data Ingestion Episode III – May the linked open data be with you}, + shorttitle = {Data ingestion}, + 
author = {Gubler, Kaspar}, + date = {2021}, + publisher = {histdata.hypotheses.org}, + doi = {10.58079/pldv}, + langid = {english}, + keywords = {nodegoat,data ingestion, data visualisation,network analysis}, +} +@book{gubler2023, + title = {Transkribus kombiniert mit nodegoat: Ein vielseitiges Werkzeug für Datenanalysen}, + shorttitle = {Transkribus + nodegoat}, + author = {Gubler, Kaspar}, + date = {2023}, + publisher = {histdata.hypotheses.org}, + doi = {10.58079/plex}, + langid = {german}, + keywords = {nodegoat,transkribus,data reconciliation,ocr, text mining}, +} +@book{gubler2022, + title = {Von Daten zu Informationen und Wissen. Zum Stand der Datenbank des Repertorium Academicum Germanicum, in: Kaspar Gubler, Christian Hesse, Rainer C. Schwinges (Hrsg.): Person und Wissen. Bilanz und Perspektiven (RAG Forschungen 4)}, + shorttitle = {Informationen und Wissen}, + author = {Gubler, Kaspar}, + date = {2022}, + publisher = {vdf,Zürich}, + URL = "https://boris.unibe.ch/174773/2/Gubler__Von_Daten_zu_Informationen_und_Wissen.pdf", + langid = {german}, + keywords = {methodology,data model,data analysis}, +} +@book{gubler2022_1, + title = {Forschungsdaten vernetzen, harmonisieren und auswerten: Methodik und Umsetzung am Beispiel einer prosopographischen Datenbank mit rund 200.000 Studenten europäischer Universitäten (1200–1800), in: Oberdorf, Andreas (Hrsg.): Digital Turn und Historische Bildungsforschung. Bestandesaufnahme und Forschungsperspektiven}, + shorttitle = {Forschungsdaten vernetzen}, + author = {Gubler, Kaspar}, + date = {2022}, + publisher = {Bad Heilbrunn}, + doi = {10.35468/5952}, + langid = {german}, + keywords = {methodology,data harmonisation,data reconciliation, big data}, +} +@book{steckel2015, + title = {Wissensgeschichten. Zugänge, Probleme und Potentiale in der Erforschung mittelalterlicher Wissenskulturen, in: Akademische Wissenskulturen. Praktiken des Lehrens und Forschens vom Mittelalter bis zur Moderne, hg. v. 
Martin Kintzinger / Sita Steckel}, + shorttitle = {Wissensgeschichte}, + author = {Steckel, Sita}, + date = {2015}, + publisher = {Bern}, + URL = "https://repositorium.uni-muenster.de/document/miami/6532a89c-da39-4d14-9a28-f550471da4e7/steckel_2015_wissensgeschichte.pdf", + langid = {german}, + keywords = {history of knowledge,methodology}, +} + + + + diff --git a/submissions/459/_quarto.yml b/submissions/459/_quarto.yml new file mode 100644 index 0000000..ae6ece2 --- /dev/null +++ b/submissions/459/_quarto.yml @@ -0,0 +1,8 @@ +project: + type: manuscript + +manuscript: + article: index.qmd + +format: + html: default diff --git a/submissions/459/index.qmd b/submissions/459/index.qmd new file mode 100644 index 0000000..4934638 --- /dev/null +++ b/submissions/459/index.qmd @@ -0,0 +1,66 @@ +--- +submission_id: 459 +categories: 'Session 2A' +title: Data Literacy and the Role of Libraries +author: + - name: Catrina Langenegger + orcid: 0000-0001-8875-2730 + email: c.langenegger@unibas.ch + affiliations: + - University of Basel + - name: Johanna Schüpbach + orcid: 0000-0002-0905-2056 + email: johanna.schuepbach@unibas.ch + affiliations: + - University of Basel + +keywords: + - Data Literacy + - Academic Libraries + - Digital Humanities + - Experience Report + +abstract: | + Libraries are finding their place in the field of data literacy, with both the opportunities and the challenges of supporting students and researchers in the Digital Humanities. Key aspects of this development are research data management, repositories, libraries as suppliers of data sets, digitisation and more. Over the past few years, the library has undertaken steps to actively bring itself into teaching and facilitate the basics of working with digital sources. The talk shares three experience reports of such endeavours undertaken by subject librarians of the Digital Humanities Work Group (AG DH) at the University Library Basel (UB).
+ +date: 07-26-2024 + +--- + +## Introduction + +More and more, libraries are becoming important institutions when it comes to teaching data literacy and the basics of Digital Humanities (DH) tools and methods, especially to undergraduates or other people new to the subject matter. The Digital Humanities Work Group (AG DH), consisting of a selection of subject librarians from the University Library Basel (UB), has developed various formats to introduce students to these topics and continues to build and expand upon the available teaching elements in order to assemble customised lesson or workshop packages as needed. The aim of this talk is to share our experiences with the planning and teaching of three different course formats. These classes and workshops play, on the one hand, an important part in making the library's (historical) holdings and datasets visible and available for digital research; on the other hand, they are a means to engage with students and (early-stage) researchers and to impart skills in working with data at an easily accessible level. +As of today, there have been three distinct formats in which the AG DH has introduced students to data literacy and working with digitised historical sources: a full semester course (research seminar) that the AG DH developed in collaboration with a professor of Jewish and General History; a 90-minute session on data literacy and working with subject-specific datasets within the larger frame of an existing semester course on information, data, and media literacy; and, last but not least, another 90-minute session within a research seminar in literary studies to provide a brief introduction to DH and how it can be incorporated in further research on the seminar topic. + +## Research Seminar/Semester Course + +For the first format, the AG DH organised a semester course in close collaboration with Prof. Dr. phil.
Erik Petry, with whom they created and then co-taught a curriculum introducing various DH tools and methods, to be tried out using the UB's holdings on the topic of the first Zionist Congresses in Basel. The course was attended by MA students from the subjects of History, Jewish Studies and Digital Humanities. This research seminar was designed to provide an introduction to digital methods. +We divided the course into different phases: an initial introduction to work organisation, data management and data literacy was followed by sessions that combined the basics of the topic with introductions to digital methods. We focussed on different forms of sources: images, maps and text, with one session being dedicated to each type. This meant we could offer introductions to a broad spectrum of DH tools and methods such as digital storytelling and IIIF, geomapping and working with GIS, and transcription, text analysis and topic modelling. As a transition to the third phase of the project, we organised a session in which we presented various sources either from the University Library or from other institutions – the Basel-Stadt State Archives and the Jewish Museum Switzerland. The overall aim of the course was to enable students to apply their knowledge directly. To this end, they developed small projects in which they researched source material using digital methods and were able to visualise the results of their work. In the third phase of the course, students were given time to work on their own projects. In a block event at the end of the semester, the groups presented their projects and the status of their work. We were impressed by the students’ exciting approaches and well-realised projects. +The course was also a good experience for us subject librarians. Above all, we benefited from the broad knowledge in our team as well as the opportunity to gain new insights and experiences in select areas of DH.
We particularly appreciated the good collaboration with Prof. Dr. Petry, who treated us as equal partners and experts. Despite the positive experience, this format is not sustainable: the effort involved in creating an entire semester course exceeds the resources available to offer similar semester courses on a regular basis. Nevertheless, for this pilot project of the AG DH, the effort was justified, because the course made our holdings visible and brought them into active research use. + +## Data Literacy – a Session Within an Existing IDM Semester Course + +For the second format, the AG DH was approached by the organisers of the regular IDM (“Informations-, Daten- & Medienkompetenz”) semester courses at the University Library Basel. These semester courses are offered for select subject areas to teach students basic information, data and media literacy skills tailored to their subject. The AG DH was asked to come up with two 90-minute sessions to introduce the students to the basics of data literacy. After talking through the requirements with the course lead, the AG DH decided to collaborate with colleagues from the Open Science Team, who cover the first session, dedicated to Research Data Management and a more general introduction to the subject matter. Building on that, the AG DH covers the second session, tailoring it to the requirements of the subject area in question (e.g. art history, sociology, cultural anthropology, economics etc.). Rather than by the whole group, these sessions are mainly prepared and taught by the member of the AG DH whose own subject specialty is closest to (or even the same as) the course’s audience. This means that not all AG DH members are involved every time, which makes the format more time- and work-efficient. Slides are, of course, liberally copied, pasted and reused.
This ensures that not everyone has to do all the work, while at the same time guaranteeing that everyone in the group has access to all the information (which can then be adapted to the subject area). Of course, these slides are continually edited and brought up to date to reflect changes in the field.

The session on subject-specific data literacy aims to enable the students to:

* know the relevant sources for obtaining (research) data and/or corpora for their projects;
* understand the specifics of working with data in the subject in question;
* assemble subject-specific (reused or newly collected) data sets and work with them (i.e. analyse and visualise them);
* know the people and contacts at the University Library who can help them with their further studies and research.

A big challenge for these sessions is, of course, the sheer breadth of working with data. It is impossible to teach every method or tool the students might need for their projects. Particularly in subjects like social anthropology, where almost anything can be seen and collected as data, this session works mainly as a very broad overview of what is possible. The students are given an entry point, links, examples and an understanding of the different kinds of data they might encounter – e.g. texts and linguistic data, statistical data, geodata, image and audio(visual) data – but are then required to work their own way into what they will need for their own projects.

Because this 90-minute session is only just enough to give a brief introduction and overview of what data is and how one could work with subject-specific data, it is important to provide the students with enough links and contact addresses where they can find further assistance, such as the subject librarian or the AG DH. However, because the target audience is always students of one specific subject area, it is also easier to tailor the session to that particular subject.
(All subject areas may request a semester course from the IDM team/organisers.)

This format has been a very positive experience in terms of collaboration – not only with the department of the subject in question but also with the colleagues organising the IDM semester course and the Open Science Team.

## Introduction to DH for a Research Seminar in English Literary Studies

Lastly, we are also able to prepare bespoke inputs within the framework of a regular class. In this example, the idea for a collaboration came about through an informal talk with Prof. Dr. Ina Habermann and her assistant MA Stefanie Heeg from the University of Basel’s English Department while they were planning a research seminar on early modern travel writing. Since the UB holds some of the texts discussed in its collections, I suggested teaching a session at the library where the students could examine the original printed books and then discuss introductory aspects of DH by juxtaposing them with the digitised versions of the same texts. Using these examples, the aim of this 90-minute session was to give the students an introduction to DH, metadata and authority files (in particular the GND) and – drawing on material used for the IDM session on data literacy – to show them what they can do with these digitised texts and how to work with them. Even though this took place within the frame of a class in literary studies, the subject matter is closely related to historical research.

While this session was also very dense in terms of content, hosting it at the UB, with the books from the historical holdings ready to be examined in the classroom, added a nice touch of interactivity to the class.
At the same time, preparing and teaching this session fulfils two intentions of the AG DH: first, to strengthen ties with the departments and to let researchers and teaching staff know that the UB has the competence and the people to help and support with basic DH needs; second, to highlight and showcase our (digitised) collections and holdings and to familiarise students and researchers with the possibilities of working with them. In addition, the UB could present itself as a location that combines the historical dimension of the original texts with a centre of competence in digital methods.

## Conclusion

These three formats highlight some of the opportunities, but also the challenges, the AG DH faces in its work with and for students and researchers, and the experiences and feedback from these formats shed important light on the role of the UB in teaching skills in this field.

Generally, active involvement by the AG DH is needed to get into teaching spaces – either by talking directly with professors and teaching staff and offering to contribute to their planned classes, or by getting involved in existing course formats such as the IDM semester courses.

Libraries, in their function as reliable and long-lasting institutions, thus play a key role both in imparting knowledge and skills and as guardians of cultural property. We also want to highlight aspects that can still be improved. Above all, this concerns the awareness and attractiveness of such services as well as the cooperation with researchers and teachers from all subject areas that work digitally, and from history in particular.

The questions that drive the AG DH are many and varied: What are the needs of researchers and students? What do they need from their university library?
Where is there potential for the library to provide support and raise awareness in working with historical documents?