
Preparation for Zenodo Publishing #86

Open · wants to merge 24 commits into main
4 changes: 4 additions & 0 deletions styles.css
@@ -107,6 +107,10 @@
text-decoration: underline #ff3cc7 3px;
}

.callout {
border-left-color: #a5d7d2 !important;
}

/* Print styles */
@media print {
body {
12 changes: 11 additions & 1 deletion submissions/405/index.qmd
@@ -19,10 +19,20 @@ key-points:
- Data-driven research offers significant opportunities for analyzing large volumes of web archived data to reveal trends in the complexity of the structure of the preserved websites and the development of content.
- Implementation of data-driven research in web history is challenging due to issues like data incompleteness, biases, and the diversity of file formats, which require the development of innovative solutions and digital research infrastructures.
- The research highlights challenges in obtaining comprehensive datasets from web archives, underscores the importance of assessing data quality, and indicates the need to address the heterogeneous nature of data preserved in web archives.
date: 07-24-2024
date: 09-12-2024
doi: 10.5281/zenodo.13904210
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.13904210
bibliography: references.bib
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/13904210/files/405_DigiHistCH24_HistoryOfMuseums_Slides.pdf).

:::

## Introduction

Data-driven approaches bring extensive opportunities for research to analyze large volumes of data, and gain new knowledge and insights. This is considered especially beneficial for implementation in the humanities and social sciences [@weichselbraun2021]. Application of data-driven research methodologies in the field of history requires a sufficient source base, which should be accurate, transparently shaped and large enough for robust analysis [@braake2016].
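The additions above recur, with project-specific values, across the submissions touched by this PR: a `doi` and `other-links` entry in the YAML frontmatter, plus a callout pointing to the slides. A minimal sketch of the recurring pattern — the DOI, record number, and file name below are illustrative placeholders, not values from any specific submission:

```markdown
---
# Illustrative placeholders; each submission uses its own Zenodo DOI and file.
doi: 10.5281/zenodo.0000000
other-links:
  - text: Presentation Slides (PDF)
    href: https://doi.org/10.5281/zenodo.0000000
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/0000000/files/Example_Slides.pdf).

:::
```

The `.callout` rule added to `styles.css` restyles exactly these note boxes via their shared class.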
13 changes: 12 additions & 1 deletion submissions/427/index.qmd
@@ -25,10 +25,21 @@ abstract: |
key-points:
- Software development is increasingly important in digital humanities research projects, yet many struggle to implement modern engineering practices that enhance sustainability and speed up development.
- Developing an XML schema for a scholarly edition project is challenging but can provide a solid foundation for the project when executed effectively.
date: 07-25-2024
date: 09-13-2024
date-modified: 11-15-2024
doi: 10.5281/zenodo.14171339
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.14171339
bibliography: references.bib
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/14171339/files/427_DigiHistCH24_SolidGround_Slides.pdf).

:::

## Introduction

### General Problem Description
33 changes: 27 additions & 6 deletions submissions/428/index.qmd

Large diffs are not rendered by default.

13 changes: 12 additions & 1 deletion submissions/429/index.qmd
@@ -23,10 +23,21 @@ key-points:
- The Techn’hom Time Machine project aims to offer a virtual reality reconstruction of a former spinning mill in the city of Belfort (France), with its machines and activities.
- Students from the Belfort-Montbéliard University of Technology participate directly in the project by modeling buildings, machines, or by working on knowledge engineering.
- Their reports make it possible to identify points that most marked them, namely the discovery of human sciences and their difficulties, as well as new technical and organizational skills learning.
date: 07-26-2024
date: 09-13-2024
date-modified: 11-15-2024
doi: 10.5281/zenodo.14171328
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.14171328
bibliography: references.bib
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/14171328/files/429_DigiHistCH24_Technhom_Slides.pdf).

:::

## Introduction

Part of the national Lab In Virtuo project (2021-2024), the Techn'hom Time Machine project, initiated in 2019 by the Belfort-Montbéliard University of Technology, aims to study and digitally restore the history of an industrial neighborhood, with teacher-researchers but also students as co-constructors [@Gasnier2014; @Gasnier2020, p. 293]. The project is thus located at the interface of pedagogy and research. The Techn'hom district was created after the Franco-Prussian War of 1870 with two companies from Alsace: the Société Alsacienne de Constructions Mécaniques, nowadays Alstom; and the Dollfus-Mieg et Compagnie (DMC) spinning mill, in operation from 1879 to 1959. The project aims to create a “Time Machine” of these industrial areas, beginning with the spinning mill. We seek to restore, in four dimensions (including time), buildings and machines with their operation, but also to document and model sociability and know-how, down to gestures and feelings. The resulting “Sensory Realistic Intelligent Virtual Environment” should allow both researchers and the general public to virtually discover places and “facts” taking place in the industry, but also to interact with them or even make modifications.
7 changes: 6 additions & 1 deletion submissions/431/index.qmd
@@ -25,7 +25,12 @@ key-points:
- Key point 1 The Repertorium Academicum Germanicum (RAG) focuses on the knowledge influence of medieval scholars in pre-modern Europe, creating a comprehensive research database.
- Key point 2 The RAG database, with data on 62,000 scholars, has advanced from manual to computer-aided and AI-assisted data collection and analysis.
- Key point 3 Technological advancements, including the use of nodegoat, have enhanced data management, collaboration, and accessibility, integrating AI for improved historical data analysis.
date: 07-07-2024
date: 09-12-2024
date-modified: 11-15-2024
doi: 10.5281/zenodo.14171301
other-links:
- text: Post on Personal Blog
href: https://doi.org/10.58079/126xr
bibliography: references.bib
---

16 changes: 14 additions & 2 deletions submissions/438/index.qmd
@@ -22,11 +22,23 @@ key-points:
- The study of video game graphics integrates narrative and aesthetic aspects with interactive and functional elements, differing significantly from classical visual media.
- The Framework for the Analysis of Visual Representation in Video Games (FAVR) provides a structured approach to analyze video game images through annotation, focusing on their formal, material, and functional aspects.
- The initial implementation of the FAVR framework as a linked open ontology for tools like Tropy has proven valuable in formally analyzing video game images and comparing aspects such as dynamic versus static image space, facilitating further digital and computational research.
date: 07-19-2024
date-modified: last-modified
date: 09-12-2024
date-modified: 10-10-2024
doi: 10.5281/zenodo.13904453
bibliography: references.bib
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.13904453
- text: Transcript
href: transcript.html
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/13904453/files/438_DigiHistCH24_VideoGameGraphics_Slides.pdf).

:::

The 1980s marked the arrival of the home computer. Computing systems became affordable and were marketed to private consumers through state-supported programs and new economic opportunities [@haddonHomeComputerMaking1988; @williamsEarlyComputersEurope1976]. Early models, such as the ZX Spectrum[^1], Texas Instruments TI-99/4A[^2], or the Atari[^3], quickly became popular in Europe and opened the door for digital technology to enter the home. This period also marks the advent of homebrew video game culture and newly emerging creative programming practices [@swalwellHomebrewGamingBeginnings2021; @albertsHackingEuropeComputer2014]. As part of this process, these early programmers not only had to figure out how to develop video games but also were among the first to incorporate graphics into video games. This created fertile ground for a new array of video game genres and helped popularize video games as a mainstream medium.

I’m researching graphics programming for video games from the 1980s and 1990s. The difference from other visual media lies in the amalgamation of computing and the expression of productive or creative intent by video game designers and developers. The specifics of video game graphics are deeply rooted in how human ideas must be translated into instructions that a computer understands. This necessitates a mediation between the computer's pure logic and a playing person's phenomenological experience. In other words, the video game image is a specific type of interface that must maintain a semiotic layer and offer functional affordances. I am interested in how early video game programmers worked with these interfaces, incorporating their own visual inspirations and attempting to work with the limited resources at hand. Besides critical source code analysis, I also extensively analyze formal aspects of video game images. For the latter, I depend on FAVR to properly describe and annotate images in datasets relevant to my inquiries. The framework explicitly deals with the problems of analyzing video game graphics. It guides the annotation of images by their functional, material, and formal aspects and aids in analyzing narrativity and the rhetoric of aesthetic aspects [@arsenaultGameFAVRFramework2015].
Binary file not shown.
Binary file not shown.
@@ -1,10 +1,19 @@
---
created: 2024-09-06T11:02
updated: 2024-09-09T13:04
submission_id: 438_Transcript
title: A handful of pixels of blood – Decoding early video game graphics with the FAVR ontology
subtitle: Transcript
author:
- name: Adrian Demleitner
orcid: 0000-0001-9918-7300
email: [email protected]
affiliations:
- University of the Arts Bern
- University of Bern
date: 2024-09-06T11:02
date-modified: 2024-09-09T13:04
doi: 10.5281/zenodo.13904453
---

# A handful of pixels of blood

## A Historical and Technological Perspective on Understanding Video Game Graphics

Good afternoon, colleagues. Today, I'd like to share with you parts of my research on video game programming practices of the 1980s and 1990s, with a particular focus on graphics. This work is an integral part of my dissertation, where I'm exploring the technological foundations of video games as a popular medium.
4 changes: 2 additions & 2 deletions submissions/443/index.qmd
@@ -32,8 +32,8 @@ keywords:
- historical scholarship
abstract: |
The Impresso project pioneers the exploration of historical media content across temporal, linguistic, and geographical boundaries. In its initial phase (2017-2020), the project developed a scalable infrastructure for Swiss and Luxembourgish newspapers, featuring a powerful search interface. The second phase, beginning in 2023, expands the focus to connect media archives across languages and modalities, creating a Western European corpus of newspaper and broadcast collections for transnational research on historical media. In this presentation, we introduce Impresso 2 and discuss some of the specific challenges to connecting newspaper and radio.

date: 04-09-2024
date: 09-12-2024
doi: 10.5281/zenodo.13907298
bibliography: references.bib
---

13 changes: 12 additions & 1 deletion submissions/444/index.qmd
@@ -24,10 +24,21 @@ key-points:
- Key point 1 Hybrid thinking or multidisciplinary collaboration always takes much more time than one estimates, and it is useful to develop several complementary ways of working with data together in order to understand their local specificities.
- Key point 2 Archival metadata is an untapped research resource for digital humanities, but its use requires close collaboration with cultural heritage organisations and practical knowledge of archival practices.
- Key point 3 The user test survey of the portal with 19th-century letter metadata showed that building a committed test group is challenging and that 'traditional' humanists have difficulties in studying mass data.
date: 07-04-2024
date: 09-12-2024
date-modified: 11-15-2024
doi: 10.5281/zenodo.14171306
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.14171306
bibliography: references.bib
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/14171306/files/444_DigiHistCH24_LetterMetadata_Slides.pdf).

:::

## Introduction

This paper discusses data and practices related to an ongoing digital humanities consortium project *Constellations of Correspondence – Large and Small Networks of Epistolary Exchange in the Grand Duchy of Finland* (CoCo; Research Council of Finland, 2021–2025). The project aggregates, analyses and publishes 19th-century epistolary metadata from letter collections of Finnish cultural heritage (CH) organisations on a Linked Open Data service and as a semantic web portal (the ‘CoCo Portal’), and it consists of three research teams, bringing together computational and humanities expertise. We focus exclusively on metadata, considering it part of the cultural heritage and a fruitful starting point for research, providing access, for example, to 19th-century epistolary culture and archival biases. The project started with a Webropol survey addressed to over 100 CH organisations to get an overview of the preserved 19th-century letters and of the Finnish public organisations willing to share their letter metadata with us. Currently the CoCo Portal includes seven CH organisations and four online letter publications, with metadata for over 997,000 letters and 95,000 actors (senders and recipients of letters).
14 changes: 13 additions & 1 deletion submissions/445/index.qmd
@@ -22,9 +22,21 @@ key-points:
- Online teaching modules on ATR are a desideratum currently, interested persons must familiarise themselves with the subject themselves at considerable time and effort.
- ATR tools are in a constant state of flux, which is why teaching modules should explain the wider context and not specific buttons.
- Working with historical documents today often takes place at the intersection between tried and tested analog methods and new digital approaches, which is why our teaching module takes these intersections into account.
date: 07-26-2024
date: 09-12-2024
date-modified: 11-15-2024
doi: 10.5281/zenodo.14171285
other-links:
- text: Presentation Slides (PDF)
href: https://doi.org/10.5281/zenodo.14171285
bibliography: references.bib
---

::: {.callout-note appearance="simple" icon=false}

For this paper, slides are available [on Zenodo (PDF)](https://zenodo.org/records/14171285/files/445_DigiHistCH24_AdFontes_Slides.pdf).

:::

## Introduction

Scholars and interested laypeople who want to adequately deal with historical topics or generally extract information from differently structured historical documents need both knowledge of old scripts and methods for analysing complex layouts. Studies of written artefacts are only possible if they can be read at all – written in unfamiliar scripts such as Gothic Cursive, Humanist Minuscule or German Kurrent, and sometimes with rather unconventional layouts. Until now, the relevant skills have been developed, for example, by the highly specialised field of palaeography. In the last few years, a shift in practice has taken place. With digital transcription tools on the rise, based on deep learning models trained to read these old scripts and the accompanying layouts, working with old documents or unusual layouts is becoming easier and quicker. However, using the corresponding software and platforms can still be intimidating. Users need a particular understanding of how to approach working with Automated Text Recognition (ATR) depending on their project's aims. This is why the Ad fontes platform [@noauthor_ad_2018] is currently developing an e-learning module that introduces students, researchers, and other interested users (e.g. citizen scientists) to ATR, its use cases, and best practices in general and, more specifically, to how exactly they can use ATR for their papers and projects.
6 changes: 4 additions & 2 deletions submissions/447/index.qmd
@@ -61,13 +61,15 @@ keywords:
- Ontology
- FAIR Data
abstract: This article explores the significance of the Geovistory platform in the context of the growing Open Science movement within the Humanities, particularly its role in facilitating the production and reuse of FAIR data. As funding agencies increasingly mandate the publication of research data in compliance with FAIR principles, researchers face the dual challenge of mastering new methodologies in data management and adapting to a digital research landscape. Geovistory provides a comprehensive research environment specifically designed to meet the needs of historians and humanists, offering intuitive tools for managing research data, establishing a collaborative Knowledge Graph, and enhancing scholarly communication. By integrating semantic methodologies in the development of a modular ontology, Geovistory fosters interoperability among research projects, enabling scholars to draw on a rich pool of shared information while maintaining control over their data. Additionally, the platform addresses the inherent complexities of historical information, allowing for the coexistence of diverse interpretations and facilitating nuanced digital analyses. Despite its promising developments, the Digital Humanities ecosystem faces challenges related to funding and collaboration. The article concludes that sustained investment and strengthened partnerships among institutions are essential for ensuring the longevity and effectiveness of initiatives like Geovistory, ultimately enriching the field of Humanities research.
date: 07-26-2024
date: 09-12-2024
date-modified: 10-13-2024
bibliography: references.bib
doi: 10.5281/zenodo.13907394
---

## Introduction

The movement of Open Science has grown in importance in the Humanities, advocating for better accessibility of scientific research, especially in the form of the publication of research data [@unesco2023]. This has led funding agencies like SNSF, ANR, and Horizon Europe to ask research projects to publish their research data and metadata along the FAIR principles in public repositories (see for instance [@anr2023; @ec2023; @snsf2024]. Such requirements are putting pressure on researchers, who need to learn and understand the principles and standards of FAIR data and its impact on research data, but also require them to acquire new methodologies and know-how, such as in data management and data science.
The movement of Open Science has grown in importance in the Humanities, advocating for better accessibility of scientific research, especially in the form of the publication of research data [@unesco2023]. This has led funding agencies like SNSF, ANR, and Horizon Europe to ask research projects to publish their research data and metadata along the FAIR principles in public repositories [see for instance @anr2023; @ec2023; @snsf2024]. Such requirements are putting pressure on researchers, who need to learn and understand the principles and standards of FAIR data and its impact on research data, but also require them to acquire new methodologies and know-how, such as in data management and data science.
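This one-line change follows Pandoc's citation syntax, which Quarto uses: prefix text and multiple citation keys belong inside a single bracketed group, separated by semicolons, rather than wrapping a bracketed citation in extra parentheses. A minimal sketch (the citation keys are taken from this diff; any others would be illustrative):

```markdown
<!-- before: parentheses around the bracketed citation duplicate the delimiters
     and leave the outer parenthesis unclosed -->
(see for instance [@anr2023; @ec2023; @snsf2024]

<!-- after: one bracketed group; free text before the first key acts as a prefix -->
[see for instance @anr2023; @ec2023; @snsf2024]
```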

At the same time, this accessibility of an increasing volume of interoperable quality data and the new semantic methodologies might bring a change of paradigm in the Humanities in the way knowledge is produced [@beretta2023; @feugere2015]. The utilization of Linked Open Data (LOD) grants scholars access to large volumes of interoperable and high-quality datasets, at a scale analogue methods cannot reach, fundamentally altering their approach to information. This enables scholars to pose novel research questions, marking a departure from traditional modes of inquiry and facilitating a broader range of analytical perspectives within academic discourse. Moreover, drawing upon semantic methodologies rooted in ontology engineering, scholars can effectively document the intricate complexities inherent in social and historical phenomena, enabling a nuanced representation essential to the Social Sciences and Humanities domains within their databases. This meticulous documentation not only reflects a sophisticated understanding of multifaceted realities but also empowers researchers to deepen the digital analysis of rich corpora.
