diff --git a/docs/how-tos/use-multi-format-reader.md b/docs/how-tos/use-multi-format-reader.md
index ebb795c9b..69fc5accf 100644
--- a/docs/how-tos/use-multi-format-reader.md
+++ b/docs/how-tos/use-multi-format-reader.md
@@ -1,4 +1,5 @@
 # How to use the built-in MultiFormatReader
+
 While building on the `BaseReader` allows for the most flexibility, in most cases it is desirable to implement a reader that can read in multiple file formats and then populate the template based on the read data. For this purpose, `pynxtools` has the [**`MultiFormatReader`**](https://github.com/FAIRmat-NFDI/pynxtools/blob/master/src/pynxtools/dataconverter/readers/multi/reader.py), which can be readily extended for your own data. In this how-to guide, we will focus on an implementation using a concrete example. If you are also interested in the general structure of the `MultiFormatReader`, you can find more information [here](../learn/multi-format-reader.md).
 
 ## Getting started
@@ -69,8 +70,8 @@ The NXDL requires a user, some sample information, some instrument metadata, and
 
 Note that in order to be recognized as a valid application definition, this file should be copied to the `definitions` submodule at `pynxtools.definitions`.
 
-
 We first start by implementing the class and its ``__init__`` call:
+
 ```python title="reader.py"
 """MyDataReader implementation for the DataConverter to convert mydata to NeXus."""
 from typing import Tuple, Any
@@ -97,19 +98,23 @@ class MyDataReader(MultiFormatReader):
 
 READER = MyDataReader
 ```
+
 Note that here we are adding handlers for three types of data file extensions:
+
 1. `".hdf5"`, `".h5"`: This will be used to parse in the (meta)data from the instrument's HDF5 file.
 2. `".yml"`, `".yaml"`: This will be used to parse in the (meta)data from the ELN file.
 3. `".json"`: This will be used to read in the **config file**, which is used to map from the (meta)data concepts from the instrument and ELN data to the concepts in the NXDL file.
 
 ## Reading in the instrument's data and metadata
+
 First, we will have a look at the HDF5 file. This mock HDF5 file was generated with `h5py` using a [simple script](https://github.com/FAIRmat-NFDI/pynxtools/tree/master/examples/mock-data-reader/create_mock_data.py).
 
-<img src="media/mock_data.png" style="width: 50vw; min-width: 330px;" />
+<img src="media/mock_data.png" alt="Mock Data" style="width: 50%; min-width: 330px;" />
 
 Here, we see that we have a `data` group with x and y values, as well as some additional metadata for the instrument.
 
 Here is one way to implement the method to read in the data:
+
 ```python title="reader.py"
 import h5py
 
@@ -132,9 +137,11 @@ def handle_hdf5_file(filepath):
 
     return {}
 ```
+
 Here we return an empty dictionary because we don't want to fill the template just yet, but only read in the HDF5 data for now; we will use the config file later to fill the template with the read-in data. Note that it is also possible to return a non-empty dictionary here to update the template directly.
 
 `self.hdf5_data` will look like this:
+
 ```python
 {
     "data/x_values": array([-10.        ,  -9.7979798 ,  -9.5959596 , ...,  10.        ]),
@@ -147,9 +154,12 @@ Note that here we are returning an empty dictionary because we don't want to fil
     "metadata/instrument/detector/count_time_units": s",
 }
 ```
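
The flat, slash-separated keys shown above come from recursively walking the HDF5 hierarchy. The same flattening idea can be sketched on a plain nested dictionary (the `flatten` helper below is ours for illustration, not part of `pynxtools`):

```python
def flatten(nested, parent=""):
    """Recursively flatten a nested dict into slash-separated keys."""
    flat = {}
    for key, value in nested.items():
        path = f"{parent}/{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

mock = {
    "data": {"x_values": [-10.0, 10.0]},
    "metadata": {"instrument": {"version": "1.0"}},
}
print(flatten(mock))
# {'data/x_values': [-10.0, 10.0], 'metadata/instrument/version': '1.0'}
```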
+
 ## Reading in ELN data
+
 As we can see in the application definition `NXsimple` above, there are some concepts defined for which there is no equivalent metadata in the HDF5 file. We are therefore using a YAML ELN file to add additional metadata.
 The ELN file `eln_data.yaml` looks like this:
+
 ```yaml  title="eln_data.yaml"
 title: My experiment
 user:
@@ -188,7 +198,9 @@ def handle_eln_file(self, file_path: str) -> Dict[str, Any]:
             
     return {}
 ```
+
 When this method is called, `self.eln_data` will look like this:
+
 ```python
 {
     "/ENTRY[entry]/title": "My experiment",
@@ -200,9 +212,11 @@ When this method is called, `self.eln_data` will look like this:
     "/ENTRY[entry]/SAMPLE[sample]/temperature/@units": "K"
 }
 ```
+
 Note that here we are using `parent_key="/ENTRY[entry]"` as well as a `CONVERT_DICT`, meaning that each key in `self.eln_data` will start with `"/ENTRY[entry]"` and some of the paths will be converted to match the template notation. This will be important later.
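
The effect of such a conversion can be pictured as follows; the mapping entries here are assumptions for illustration, not the actual `CONVERT_DICT` used by `parse_yml`:

```python
# Hypothetical conversion table from plain YAML keys to template notation
CONVERT_DICT = {"user": "USER[user]", "sample": "SAMPLE[sample]"}

def to_template_key(yaml_path, parent_key="/ENTRY[entry]"):
    """Prepend the parent key and convert each path segment to template notation."""
    parts = [CONVERT_DICT.get(part, part) for part in yaml_path.split("/")]
    return parent_key + "/" + "/".join(parts)

print(to_template_key("user/name"))           # /ENTRY[entry]/USER[user]/name
print(to_template_key("sample/temperature"))  # /ENTRY[entry]/SAMPLE[sample]/temperature
```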
 
 ## Parsing the config file
+
 Next up, we can make use of the config file, which is a JSON file that tells the reader how to map the concepts from the HDF5 and ELN files in order to populate the template designed to match `NXsimple`. The choices made in the config file define how semantics from the source (data file) and target (NeXus application definition) sides are mapped. Essentially, the config file should contain all keys that are present in the NXDL. In our case, the config file looks like this:
 
 ```json title="config_file.json"
@@ -210,7 +224,7 @@ Next up, we can make use of the config file, which is a JSON file that tells the
   "/ENTRY/title": "@eln", 
   "/ENTRY/USER[user]": {
     "name":"@eln",
-    "address":@eln:"/ENTRY/USER[user]/address",
+    "address":"@eln:/ENTRY/USER[user]/address",
   }, 
   "/ENTRY/INSTRUMENT[instrument]": {
     "@version":"@attrs:metadata/instrument/version",
@@ -235,9 +249,11 @@ Next up, we can make use of the config file, which is a JSON file that tells the
   }
 }
 ```
+
 Note that here we are using `@`-prefixes, which tell the reader from which data source each template value should be filled. We discuss this below in more detail.
 
 We also implement a method for setting the config file in the reader:
+
 ```python title="reader.py"
 def set_config_file(self, file_path: str) -> Dict[str, Any]:
     if self.config_file is not None:
@@ -247,12 +263,14 @@ def set_config_file(self, file_path: str) -> Dict[str, Any]:
     self.config_file = file_path
   
     return {}
-```        
+```
 
 ## Filling the template from the read-in data
+
 Finally, after reading in all of the data and metadata as well as designing the config file, we can start filling the template. For this, we must implement functions that are called using the reader's **callbacks**.
 
 We will start with the `@attrs` prefix, associated with the `attrs_callback`. We must implement the `get_attr` method:
+
 ```python title="reader.py"
 def get_attr(self, key: str, path: str) -> Any:
     """
@@ -263,13 +281,16 @@ def get_attr(self, key: str, path: str) -> Any:
     
     return self.hdf5_data.get(path)
 ```
+
 This method (and all similar callback methods) has two inputs:
+
 1. **`key`**, which is a key in the config file. Note that here, the generic `"/ENTRY/"` gets replaced by `f"/ENTRY[{entry_name}]/"`, where `entry_name` is one of the entries returned by the `self.get_entry_names` method.
 2. **`path`**, which is the part of the config value that comes after the `@attrs:` prefix. For example, for the config value `"@attrs:my-metadata"`, the extracted path is `my-metadata`.
 
 For the `get_attr` method, we are making use of the `path`. For example, for the config value `"@attrs:metadata/instrument/version"`, the extracted path is `metadata/instrument/version`, which is also one of the keys of the `self.hdf5_data` dictionary.
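
The split between prefix and path can be sketched as follows (a simplification for illustration, not the actual `pynxtools` parsing code):

```python
def split_config_value(value):
    """Split a config value like '@attrs:metadata/title' into (prefix, path)."""
    prefix, sep, path = value.partition(":")
    return (prefix, path) if sep else (value, "")

print(split_config_value("@attrs:metadata/instrument/version"))
# ('@attrs', 'metadata/instrument/version')
print(split_config_value("@eln"))
# ('@eln', '')
```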
 
 For the ELN data, we must implement the `get_eln_data` function that gets called from the `eln_callback` when using the `@eln` prefix:
+
 ```python title="reader.py"
 def get_eln_data(self, key: str, path: str) -> Any:
         """Returns data from the given eln path."""
@@ -278,9 +299,11 @@ def get_eln_data(self, key: str, path: str) -> Any:
         
     return self.eln_data.get(key)
 ```
+
 Here, we are making use of the fact that we have used `CONVERT_DICT` in the `parse_yml` function above. Thus, the keys of the `self.eln_data` dictionary are exactly the same as those in the config file (for example, the config key `"/ENTRY[entry]/USER[user]/address"` also exists in `self.eln_data`). Therefore, we can just get this data using the `key` coming from the config file. 
 
 Finally, we also need to address the `@data` prefix, which gets used in the `data_callback` to populate the NXdata group in the template. Note that here we use the same `@data` prefix to fill the `x_values` as well as the `data` (from `y_values`) fields. We achieve this by using the path that follows `@data:` in the config file:
+
 ```python title="reader.py"
 def get_data(self, key: str, path: str) -> Any:
     """Returns measurement data from the given hdf5 path."""
@@ -291,6 +314,7 @@ def get_data(self, key: str, path: str) -> Any:
 ```
 
 ## Bringing it all together
+
 Et voilĂ ! That's all we need to read in our data and populate the `NXsimple` template. Our final reader looks like this:
 
 ```python title="reader.py"
@@ -394,6 +418,7 @@ READER = MyDataReader
 ```
 
 ## Using the reader
+
 We can call our reader using the following command:
 
 ```console
@@ -402,4 +427,4 @@ user@box:~$ dataconverter mock_data.h5 eln_data.yaml -c config_file --reader myd
 
 The final `output.nxs` file gets automatically validated against `NXsimple`, so we can be sure that it is compliant with that application definition. Here is a look at our final NeXus file:
 
-<img src="media/resulting_file.png" style="width: 50vw; min-width: 330px;" />
\ No newline at end of file
+<img src="media/resulting_file.png" alt="Resulting File" style="width: 50vw; min-width: 330px;" />
\ No newline at end of file
diff --git a/docs/learn/multi-format-reader.md b/docs/learn/multi-format-reader.md
index 40efc33ef..f4863cddf 100644
--- a/docs/learn/multi-format-reader.md
+++ b/docs/learn/multi-format-reader.md
@@ -15,6 +15,7 @@ Here, we will explain the inner workings of the `MultiFormatReader`. Note that t
 ## The basic structure
 
 For extending the `MultiFormatReader`, the following basic structure must be implemented:
+
 ```python title="multi/reader.py"
 """MyDataReader implementation for the DataConverter to convert mydata to NeXus."""
 from typing import Tuple, Any
@@ -38,6 +39,7 @@ READER = MyDataReader
 ```
 
 In order to understand the capabilities of the `MultiFormatReader` and which methods need to be implemented when extending it, we will have a look at its ```read``` method:
+
 ```python title="multi/reader.py"
 def read(
     self,
@@ -50,8 +52,11 @@ def read(
     self.config_file = self.kwargs.get("config_file", self.config_file)
     self.overwrite_keys = self.kwargs.get("overwrite_keys", self.overwrite_keys)   
 ```
+
 ## Template initialization and processing order
+
 An empty `Template` object is initialized, which later gets filled from the data files.
+
 ```python title="multi/reader.py"
     template = Template(overwrite_keys=self.overwrite_keys)
 
@@ -70,6 +75,7 @@ If the reader has a `self.processing_order`, the input files get sorted in this
 If `self.overwrite_keys` is True, later files take precedence. For example, if `self.processing_order = [".yaml", ".hdf5"]`, any values coming from HDF5 files would overwrite values from the YAML files.
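
This sorting behavior can be sketched as follows (the helper name is ours):

```python
import os

def sort_by_processing_order(paths, processing_order):
    """Sort file paths so that extensions listed in processing_order come in
    that order; files with unlisted extensions are sorted to the end."""
    def rank(path):
        ext = os.path.splitext(path)[1].lower()
        return processing_order.index(ext) if ext in processing_order else len(processing_order)
    return sorted(paths, key=rank)

print(sort_by_processing_order(["scan.hdf5", "eln_data.yaml"], [".yaml", ".hdf5"]))
# ['eln_data.yaml', 'scan.hdf5']
```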
 
 ## Reading of input files
+
 ```python title="multi/reader.py"
     for file_path in sorted_paths:
         extension = os.path.splitext(file_path)[1].lower()
@@ -84,22 +90,28 @@ If `self.overwrite_keys` is True, later files get precedent. For example, if `se
 
         template.update(self.extensions.get(extension, lambda _: {})(file_path))
 ```
+
 This part reads in the data from all data files. The `MultiFormatReader` has an `extensions` property, which is a dictionary that maps each file extension to a function that reads in data from files with that extension. If the reader is to handle, e.g., HDF5 files, a method for handling this file type should be added, i.e., `self.extensions[".hdf5"] = self.handle_hdf5`.
 Note that these methods should also implement any logic depending on the provided data, i.e., it may not be sufficient to rely on the filename suffix, but the reader may also need to check for different file versions, binary signature, mimetype, etc.
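
For example, an HDF5 handler might verify the file's binary signature before parsing, rather than trusting the suffix alone. A sketch (the eight-byte signature is defined by the HDF5 file format specification; the helper name is ours):

```python
HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"  # first 8 bytes of every HDF5 file

def looks_like_hdf5(file_path):
    """Check the binary signature instead of trusting the file extension."""
    try:
        with open(file_path, "rb") as handle:
            return handle.read(8) == HDF5_SIGNATURE
    except OSError:
        return False
```

A handler could then fall back to another parser, or log a warning, when the check fails.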
 
 Any of these methods should take as input only the file path, e.g.
+
 ```python title="multi/reader.py"
 def handle_eln_file(self, file_path: str) -> Dict[str, Any]
 ```
+
 These methods must return a dictionary. One possibility is to return a dictionary that directly fills the template (see the `template.update` call above) with the data from the file. Another option is to return an empty dictionary (i.e., not fill the template at this stage) and only later fill the template from a config file (see below).
 
 Note that for several input formats, standardized parser functions already exist within the `MultiFormatReader`. For example, YAML files can be parsed using the `pynxtools.dataconverter.readers.utils.parse_yml` function.
 
 ## Setting default values in the template
+
 ```python title="multi/reader.py"
     template.update(self.setup_template())
 ```
+
 Next, the `setup_template` method can be implemented, which is used to populate the template with initial data that does not come from the files themselves. This may be used to set fixed information, e.g., about the reader. As an example, `NXentry/program_name` (which is defined as the name of the program used to generate the NeXus file) can be set to `pynxtools-plugin` by making `setup_template` return a dictionary of the form
+
 ```json
 {
   "/ENTRY[my_entry]/program_name": "pynxtools-plugin",
@@ -108,20 +120,25 @@ Next, the `setup_template` method can be implemented, which is used to populate
 ```
 
 ## Handling objects
+
 ```python title="multi/reader.py"
     if objects is not None:
         template.update(self.handle_objects(objects))
 ```
+
 Aside from data files, it is also possible to directly pass any Python objects to the `read` function (e.g., a numpy array with measurement data). To make use of this, the `handle_objects` method must be implemented, which should return a dictionary that populates the template.
 
 ## Parsing the config file
+
 ```python title="multi/reader.py"
     if self.config_file is not None:
         self.config_dict = parse_flatten_json(
             self.config_file, create_link_dict=False
         )
 ```
+
 Next up, we can make use of the config file, which is a JSON file that tells the reader which input data to use to populate the template. In other words, the config.json is used for ontology mapping between the input file paths and the NeXus application definition. Essentially, the config file should contain all keys that are present in the NXDL. A subset of a typical config file may look like this:
+
 ```json
 {
   "/ENTRY/title": "@attrs:metadata/title", 
@@ -148,6 +165,7 @@ Next up, we can make use of the config file, which is a JSON file that tells the
   }
 }
 ```
+
 Here, the `parse_flatten_json` function is used, which allows us to write the config dict in the structured manner above; internally, it is flattened (so that it has a structure similar to that of the Template).
 
 In the config file, one can
@@ -158,12 +176,15 @@ In the config file, one can
 Note that in order to use a `link_callback` (see below), `create_link_dict` must be set to `False`, which means that at this stage, config values of the form `"@link:/path/to/source/data"` are NOT yet converted to `{"link": "/path/to/source/data"}`.
 
 ## Data post processing
+
 ```python title="multi/reader.py"
    self.post_process()
 ```
+
 In case there is the need for any post-processing on the data and/or config dictionary _after_ they have been read, the `post_process` method can be implemented. For example, this can be helpful if there are multiple entities of a given NX_CLASS (for example, multiple detectors) on the same level and the config dict shall be set up to fill the template with all of these entities.
 
 ## Filling the template from the read-in data
+
 ```python title="multi/reader.py"
     if self.config_dict:
         suppress_warning = kwargs.pop("suppress_warning", False)
@@ -178,9 +199,11 @@ In case there is the need for any post-processing on the data and/or config dict
 
     return template
 ```
+
 As a last step, the template is filled from the config dict using the data. If there is more than one entry, the `get_entry_names` method must be implemented, which should return a list of all entry names. The `fill_from_config` method iterates through all of them and replaces the generic `/ENTRY/` in the config file keys by `/ENTRY[my-entry]/` to fill the template.
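
The per-entry key replacement can be sketched like this (the helper name is ours):

```python
def expand_config_for_entries(config_dict, entry_names):
    """Replace the generic /ENTRY/ with /ENTRY[name]/ for every entry name."""
    expanded = {}
    for name in entry_names:
        for key, value in config_dict.items():
            expanded[key.replace("/ENTRY/", f"/ENTRY[{name}]/")] = value
    return expanded

config = {"/ENTRY/title": "@attrs:metadata/title"}
print(expand_config_for_entries(config, ["entry1", "entry2"]))
# {'/ENTRY[entry1]/title': '@attrs:metadata/title', '/ENTRY[entry2]/title': '@attrs:metadata/title'}
```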
 
 Here, we are using **callbacks**, which are used to bring in data based on `@`-prefixes in the config file. These are defined in the reader's ``__init__`` call using the `pynxtools.dataconverter.readers.multi.ParseJsonCallbacks` class:
+
 ```python title="multi/reader.py"
 self.callbacks = ParseJsonCallbacks(
     attrs_callback=self.get_attr,
@@ -189,7 +212,9 @@ self.callbacks = ParseJsonCallbacks(
     dims=self.get_data_dims,
 )
 ```
+
 The `ParseJsonCallbacks` class has an attribute called `special_key_map` that makes use of these callbacks to populate the template based on the starting prefix of the config dict value:
+
 ```python title="multi/reader.py"
 self.special_key_map = {
     "@attrs": attrs_callback if attrs_callback is not None else self.identity,
@@ -198,6 +223,7 @@ self.special_key_map = {
     "@eln": eln_callback if eln_callback is not None else self.identity,
 }
 ```
+
 That means, if the config file has an entry `{"/ENTRY/title": "@attrs:metadata/title"}`, the `get_attr` method of the reader gets called and should return an attribute from the given path, i.e., in this case from `metadata/title`.
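
A stripped-down sketch of this dispatch mechanism (the callback here just echoes its input; this is an illustration, not the `ParseJsonCallbacks` implementation):

```python
def make_dispatcher(special_key_map):
    """Build a function that routes config values to prefix-specific callbacks."""
    def apply(config_value):
        prefix, sep, path = config_value.partition(":")
        if sep and prefix in special_key_map:
            return special_key_map[prefix](path)
        return config_value  # no known prefix: use the value as-is
    return apply

dispatch = make_dispatcher({"@attrs": lambda path: f"attr from {path}"})
print(dispatch("@attrs:metadata/title"))  # attr from metadata/title
print(dispatch("plain value"))            # plain value
```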
 
 By default, the MultiFormatReader supports the following special prefixes:
@@ -206,45 +232,57 @@ By default, the MultiFormatReader supports the following special prefixes:
 - `@data`: To get measurement data from the read-in experiment file(s). You need to implement the `get_data` method in the reader.
 - `@eln`: To get metadata from additional ELN files. You need to implement the `get_eln_data` method in the reader.
 - `@link`: To implement a link between two entities in the NeXus file. By default, the link callback returns a dict of the form `{"link": value.replace("/entry/", f"/{self.entry_name}/")}`, i.e., a generic `/entry/` gets replaced by the actual `entry_name`.
+- `@formula`: To calculate values based on the presence of other (meta)data in the NeXus file. By default, the formula callback returns a number (int or float).
 
 The distinction between data and metadata is somewhat arbitrary here. The reason to have both of these prefixes is to have different methods to access different parts of the read-in data. For example, `@attrs` may just access key-value pairs of a read-in dictionary, whereas `@data` can handle different object types, e.g., xarrays. The implementation in the reader decides how to distinguish data and metadata and what each of the callbacks shall do.
 
 In addition, the reader can also implement the `get_data_dims` method, which is used to return a list of the data dimensions (see below for more details).
 
 All of `get_attr`, `get_data`, and `get_eln_data` (as well as any similar method that might be implemented) should have the same call signature:
+
 ```python
 def get_data(self, key: str, path: str) -> Any:
 ```
+
 Here, `key` is the config dict key (e.g., `"/ENTRY[my-entry]/data/data"`) and `path` is the part of the config value that comes _after_ the prefix. In the example config file above, `path` would be `mydata`. With these two inputs, the reader should be able to return the correct data for this template key.
 
 ### Special rules
+
 - **Lists as config value**: It is possible to write a list of possible configurations of the sort
+
   ```json
   "/ENTRY/title":"['@attrs:my_title', '@eln', 'no title']"
   ```
+
   The value must be a string that can be parsed as a list, with each item being a string itself. This allows providing different options depending on whether the data exists for a given callback. For each list item, it is checked whether a value can be returned and, if so, the value is written. In this example, the converter would check (in order) the `@attrs` (with path `"my_title"`) and `@eln` (with path `""`) tokens and write the respective value if it exists. If not, it defaults to "no title".
   This concept can be particularly useful if the same config file is used for multiple measurement configurations, where for some setups the same metadata may or may not be available.
 
     Note that if this notation is used, it may be helpful to pass the `suppress_warning` keyword as `True` to the read function. Otherwise, there will be a warning for every non-existent value.
 
 - **Wildcard notation**: There exists a wildcard notation (using `*`)
+
   ```json
   "/ENTRY/data/AXISNAME[*]": "@data:*.data",
   ```
+
   that allows filling multiple fields of the same type from a list of dimensions. This can be particularly helpful for writing `DATA` and `AXISNAME` fields that are all stored under similar paths in the read-in data.
   For this, the `get_data_dims` method needs to be implemented. For a given path, it should return a list of all data axes available to replace the wildcard.
-    
+
     The same wildcard notation can also be used within a name to repeat entries with different names (e.g., `field_*{my, name, etc}` is converted into three keys, with `*` replaced by `my`, `name`, and `etc`, respectively). As an example, for multiple lenses and their voltage readouts, one could write:
+
   ```json
   "LENS_EM[lens_*{A,B,Foc}]": {
     "name": "*",
     "voltage": "@attrs:metadata/file/Lens:*:V",
     "voltage/@units": "V"
   },
   ```
+
   which would write `NXlens_em` instances named `lens_A`, `lens_B`, and `lens_Foc`.
 
 - **Required fields in optional groups**: There will sometimes be the situation that there is an optional NeXus group in an application definition, that (if implemented) requires some sub-element. As an example, for the instrument's energy resolution, the only value expected to come from a data source is the `resolution`, whereas other fields are hardcoded.
+
   ```json
   "ENTRY/INSTRUMENT[instrument]/energy_resolution": {
     "resolution": "@attrs:metadata/instrument/electronanalyser/energy_resolution",
@@ -252,13 +290,21 @@ Here, `key` is the config dict key (e.g., `"/ENTRY[my-entry]/data/data"`) and pa
     "physical_quantity": "energy"
   }
   ```
+
   Now, if there is no data for `@attrs:metadata/instrument/electronanalyser/energy_resolution` available in a dataset, this will be skipped by the reader, and not available, yet the other entries are present. During validation, this means that the required field `resolution` of the optional group `energy_resolution` is not present, and thus a warning or error would be raised:
+
   ```console
   LookupError: The data entry, /ENTRY[entry]/INSTRUMENT[instrument]/ELECTRONANALYSER[electronanalyser]/energy_resolution/physical_quantity, has an optional parent, /ENTRY[entry]/INSTRUMENT[instrument]/ELECTRONANALYSER[electronanalyser]/energy_resolution, with required children set. Either provide no children for /ENTRY[entry]/INSTRUMENT[instrument]/ELECTRONANALYSER[electronanalyser]/energy_resolution or provide all required ones.
   ```
 
     To circumvent this problem, there exists a notation using the `"!"` prefix. If you write
+
     ```json
     "ENTRY/INSTRUMENT[instrument]/energy_resolution/resolution": "!@attrs:metadata/instrument/electronanalyser/energy_resolution"
     ```
-    the whole parent group `/ENTRY/INSTRUMENT[instrument]/energy_resolution` will _not_ be written in case that there is no value for `@attrs:metadata/instrument/electronanalyser/energy_resolution"`, thus preventing the aforementioned error.
\ No newline at end of file
+
+    the whole parent group `/ENTRY/INSTRUMENT[instrument]/energy_resolution` will _not_ be written in case that there is no value for `@attrs:metadata/instrument/electronanalyser/energy_resolution"`, thus preventing the aforementioned error.
+
+- **Formulas**: There exists a notation using the `@formula` prefix that allows calculating values based on the presence of other (meta)data in the template. By default, values prefixed with `@formula` are evaluated last, after all other template key-value pairs have been filled. The evaluation uses Python's built-in [`eval()`](https://docs.python.org/3/library/functions.html#eval) function on formulas written as string statements. To prevent malicious use of this feature, the formulas that can be written are limited: standard operators such as `+`, `-`, `*`, and `/` are supported, and all callable functions from [NumPy's API](https://numpy.org/doc/2.1/reference/index.html) can be used by writing the function's name. For example, to calculate the mean of the energy field in an NXdata group, you would write `"!@formula:mean(/ENTRY/INSTRUMENT[instrument]/DATA[data]/energy)"`, which then calls NumPy's `np.mean` function.
\ No newline at end of file
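
A minimal sketch of such a restricted evaluation, using Python's `statistics` module in place of NumPy to keep the example dependency-free (the real implementation whitelists NumPy callables; the helper name and the whitelist are ours):

```python
import statistics

# Hypothetical whitelist of callables the formula is allowed to use
SAFE_NAMES = {
    "mean": statistics.mean,
    "max": max,
    "min": min,
}

def eval_formula(formula):
    """Evaluate a formula string in a restricted namespace with builtins disabled."""
    namespace = {"__builtins__": {}}
    namespace.update(SAFE_NAMES)
    return eval(formula, namespace, {})

print(eval_formula("mean([1.0, 2.0, 3.0]) + 1"))  # 3.0
```

Disabling `__builtins__` blocks access to functions like `open` or `__import__` inside the formula, while the whitelist controls exactly which names may be called.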
diff --git a/src/pynxtools/dataconverter/readers/multi/reader.py b/src/pynxtools/dataconverter/readers/multi/reader.py
index 4a11f5db7..94a06012e 100644
--- a/src/pynxtools/dataconverter/readers/multi/reader.py
+++ b/src/pynxtools/dataconverter/readers/multi/reader.py
@@ -23,6 +23,8 @@
 import re
 from typing import Any, Callable, Dict, List, Optional, Tuple, Union
 
+import numpy as np
+
 from pynxtools.dataconverter.readers.base.reader import BaseReader
 from pynxtools.dataconverter.readers.utils import (
     is_boolean,
@@ -36,6 +38,55 @@
 logger = logging.getLogger("pynxtools")
 
 
+def evaluate_expression(expression: str, data: Dict[str, Any]) -> Any:
+    """
+    Evaluates a string expression where keys are accessed from a dictionary and transformations are applied.
+
+    Args:
+        expression (str): The string expression to evaluate, e.g., '/data/value + /someothervalue'.
+        data (Dict[str, Any]): A dictionary where keys are matched to parts of the expression.
+
+    Returns:
+        Any: The result of the evaluated expression.
+    """
+    if not expression:
+        logger.warning("Empty formula provided.")
+        return None
+
+    # Prepare the safe environment for evaluation
+    # Dynamically allow all basic NumPy functions
+    safe_conversions = {name: func for name, func in vars(np).items() if callable(func)}
+
+    # Disable built-ins for safety
+    safe_conversions.update({"__builtins__": {}})
+
+    def resolve_key(key: str) -> Any:
+        """Resolve a key by accessing the dictionary."""
+        if key not in data:
+            raise KeyError(f"Key '{key}' not found in data.")
+        return data[key]
+
+    # Use regex to replace only keys in the expression
+    def replace_keys(match: re.Match) -> str:
+        key = match.group(0)
+        return f"resolve_key('{key}')"
+
+    # Match only valid dictionary keys (not operators or function calls)
+    # NOTE: this pattern does not yet cover every possible key shape
+    pattern = r"(\/[\w\[\]\_\-/]+)"
+
+    resolved_expression = re.sub(pattern, replace_keys, expression)
+
+    logger.debug("Resolved formula expression: %s", resolved_expression)
+
+    # Evaluate the resolved expression
+    try:
+        return eval(resolved_expression, safe_conversions, {"resolve_key": resolve_key})
+    except Exception as exc:
+        logger.warning(f"Formula '{expression}' could not be evaluated due to: {exc}")
+        return None
+
+
 def fill_wildcard_data_indices(config_file_dict, key, value, dims):
     """
     Replaces the wildcard data indices (*) with the respective dimension entries.
@@ -75,6 +126,7 @@ class ParseJsonCallbacks:
         "@link": used for linking (meta)data
         "@data": measurement data
         "@eln": ELN data not provided within the experiment file
+        "@formula": To calculate values based on the presence of other (meta)data
 
     Args:
         attrs_callback (Callable[[str], Any]):
@@ -85,6 +137,8 @@ class ParseJsonCallbacks:
             The callback to retrieve links under the specified key.
         eln_callback (Callable[[str], Any]):
             The callback to retrieve eln values under the specified key.
+        formula_callback (Callable[[str], Any]):
+            The callback to control formula calculations.
         dims (List[str]):
             The dimension labels of the data. Defaults to None.
         entry_name (str):
@@ -101,6 +155,7 @@ def __init__(
         data_callback: Optional[Callable[[str, str], Any]] = None,
         link_callback: Optional[Callable[[str, str], Any]] = None,
         eln_callback: Optional[Callable[[str, str], Any]] = None,
+        formula_callback: Optional[Callable[[str, str], Any]] = None,
         dims: Optional[Callable[[str, str], List[str]]] = None,
         entry_name: str = "entry",
     ):
@@ -109,17 +164,28 @@ def __init__(
             "@link": link_callback if link_callback is not None else self.link_callback,
             "@data": data_callback if data_callback is not None else self.identity,
             "@eln": eln_callback if eln_callback is not None else self.identity,
+            "@formula": formula_callback
+            if formula_callback is not None
+            else self.formula_callback,
         }
 
         self.dims = dims if dims is not None else lambda *_, **__: []
         self.entry_name = entry_name
 
-    def link_callback(self, key: str, value: str) -> Dict[str, Any]:
+    def link_callback(self, _: str, value: str) -> Dict[str, Any]:
         """
         Modify links to dictionaries with the correct entry name.
         """
         return {"link": value.replace("/entry/", f"/{self.entry_name}/")}
 
+    def formula_callback(self, _: str, value: str) -> Dict[str, Any]:
+        """
+        Wrap formulas in a dict, replacing the generic entry name with the actual one.
+        """
+        return {
+            "formula": value.replace("/ENTRY[entry]/", f"/ENTRY[{self.entry_name}]/")
+        }
+
     def identity(self, _: str, value: str) -> str:
         """
         Returns the input value unchanged.
@@ -230,10 +296,13 @@ def parse_config_value(value: str) -> Tuple[str, Any]:
             )
 
     # after filling, resolve links again:
-    if isinstance(new_entry_dict.get(key), str) and new_entry_dict[key].startswith(
-        "@link:"
-    ):
-        new_entry_dict[key] = {"link": new_entry_dict[key][6:]}
+    if isinstance(new_entry_dict.get(key), str):
+        if new_entry_dict[key].startswith("@link:"):
+            new_entry_dict[key] = {"link": new_entry_dict[key][6:]}
+
+    if isinstance(new_entry_dict.get(key), dict) and "formula" in new_entry_dict[key]:
+        if formula := new_entry_dict[key]["formula"]:
+            new_entry_dict[key] = evaluate_expression(formula, new_entry_dict)
 
 
 def fill_from_config(
@@ -263,9 +332,13 @@ def dict_sort_key(keyval: Tuple[str, Any]) -> bool:
         Besides, pythons sorted is stable, so this will keep the order of the keys
         which have the same sort key.
         """
-        if isinstance(keyval[1], str):
-            return not keyval[1].startswith("!")
-        return True
+        value = keyval[1]
+        if isinstance(value, str):
+            if value.startswith(("!@formula:", "@formula:")):
+                return (2, keyval[0])  # Last
+            if value.startswith("!"):
+                return (0, keyval[0])  # First
+        return (1, keyval[0])  # Middle
 
     if callbacks is None:
         # Use default callbacks if none are explicitly provided
@@ -278,6 +351,7 @@ def dict_sort_key(keyval: Tuple[str, Any]) -> bool:
 
         # Process '!...' keys first
         sorted_keys = dict(sorted(config_dict.items(), key=dict_sort_key))
+
         for key in sorted_keys:
             value = config_dict[key]
             key = key.replace("/ENTRY/", f"/ENTRY[{entry_name}]/")