Merge pull request #33 from SoftVarE-Group/refactor-io
Unify input/output files
uulm-janbaudisch authored Sep 20, 2024
2 parents c4ba0ba + 9b11332 commit ac115b7
Showing 15 changed files with 340 additions and 402 deletions.
82 changes: 82 additions & 0 deletions Cargo.lock


2 changes: 2 additions & 0 deletions Cargo.toml
@@ -10,7 +10,9 @@ resolver = "2"

[workspace.dependencies]
clap = { version = "4.5", features = ["derive"] }
csv = "1.3"
ddnnife = { path = "ddnnife" }
log = "0.4"
mimalloc = "0.1"
uniffi = { version = "0.28" }

26 changes: 13 additions & 13 deletions README.md
@@ -181,19 +181,19 @@ This could be achieved by modifying `$PATH` on Unix systems, for example.
Preprocess the d-DNNF `berkeleydb_dsharp.nnf`, which may need preprocessing because it was created with dsharp (in this case it is necessary), and save the resulting d-DNNF as `berkeleydb_prepo.nnf`.

```
ddnnife example_input/berkeleydb_dsharp.nnf -s example_input/berkeleydb_prepo.nnf
ddnnife -i example_input/berkeleydb_dsharp.nnf --save-ddnnf example_input/berkeleydb_prepo.nnf
```

Compute the cardinality of a feature model for `auto1`.

```
ddnnife example_input/auto1_c2d.nnf
ddnnife -i example_input/auto1_c2d.nnf count
```

Compute the cardinality of features for `busybox-1.18.0_c2d.nnf` and save the result as `busybox-features.csv` in the current working directory.

```
ddnnife example_input/busybox-1.18.0_c2d.nnf -c busybox
ddnnife -i example_input/busybox-1.18.0_c2d.nnf -o busybox-features.csv count-features
```

Compute the cardinality of features for `auto1` when compiled with d4.
@@ -204,40 +204,40 @@ The results will be saved as `auto1_d4_2513-features.csv`.
(Note that for the example input the number of features is part of the file name for d4 models.)

```
ddnnife example_input/auto1_d4_2513.nnf -t 2513 -c
ddnnife -i example_input/auto1_d4_2513.nnf -t 2513 count-features
```

Compute the cardinality of features for `auto1` starting from a CNF file.
Compute the cardinality of `auto1` starting from a CNF file.
Currently, the CNF file must be indicated by either the file ending `.cnf` or `.dimacs`.
We use the d4 compiler to generate a d-DNNF which we can use in the following steps.
The `-t` option is not necessary, because the needed information is part of the CNF.

```
ddnnife example_input/auto1.cnf -c
ddnnife -i example_input/auto1.cnf count
```

An alternative to the above, loading the model via stdin.

```
cat example_input/auto1_d4_2513.nnf | ddnnife -p -t 2513 -c
cat example_input/auto1_d4_2513.nnf | ddnnife -t 2513 count
```

Compute the cardinality of partial configurations for `X264_c2d.nnf` with 4 threads (default) and save the result as `X264_c2d-queries.csv` (default) in the current working directory (default).
Compute the cardinality of partial configurations for `X264_c2d.nnf`.

```
ddnnife example_input/X264_c2d.nnf -q example_input/X264.config
ddnnife -i example_input/X264_c2d.nnf count-queries example_input/X264.config
```

Compute 100 uniform random samples for the auto1 model with seed 42.

```
ddnnife example_input/auto1_d4.nnf -t 2513 urs -n 100 -s 42
ddnnife -i example_input/auto1_d4.nnf -t 2513 urs -n 100 -s 42
```

Compute the atomic sets for auto1.

```
ddnnife example_input/auto1_d4.nnf -t 2513 atomic-sets
ddnnife -i example_input/auto1_d4.nnf -t 2513 atomic-sets
```

Display the help information for the `sat` command.
@@ -250,7 +250,7 @@ Create the mermaid visualization of the small example d-DNNF under assumptions.
The model count is 4 and the count for the partial configuration (2,4) is 1.

```
ddnnife example_input/small_example_c2d.nnf mermaid -a 2 4
ddnnife -i example_input/small_example_c2d.nnf mermaid -a 2 4
```

```mermaid
@@ -290,7 +290,7 @@ The idea behind the stream API is to interact with `ddnnife` from another program.
We start ddnnife in stream mode for the `automotive01` model via

```
ddnnife example_input/auto1_d4.nnf -t 2513 stream
ddnnife -i example_input/auto1_d4.nnf -t 2513 stream
```

From here on, we can use the following types of queries:
3 changes: 2 additions & 1 deletion ddnnife/Cargo.toml
@@ -17,9 +17,10 @@ crate-type = ["lib", "cdylib"]

[dependencies]
bitvec = "1.0"
csv = "1.3"
csv = { workspace = true }
file_diff = "1.0.0"
itertools = "0.13"
log = { workspace = true }
nom = "7.1"
num = "0.4"
once_cell = "1.19"
15 changes: 4 additions & 11 deletions ddnnife/src/ddnnf/anomalies.rs
@@ -5,30 +5,23 @@ pub mod false_optional;
pub mod sat;
pub mod t_wise_sampling;

use std::{
fs::File,
io::{LineWriter, Write},
};

use crate::Ddnnf;
use std::io::Write;

impl Ddnnf {
/// Takes a d-DNNF and writes the string representation into a file with the provided name
pub fn write_anomalies(&mut self, path_out: &str) -> std::io::Result<()> {
let file = File::create(path_out)?;
let mut file = LineWriter::with_capacity(1000, file);

pub fn write_anomalies(&mut self, mut output: impl Write) -> std::io::Result<()> {
// core/dead features
let mut core = self.core.clone().into_iter().collect::<Vec<i32>>();
core.sort();
file.write_all(format!("core: {core:?}\n").as_bytes())?;
output.write_all(format!("core: {core:?}\n").as_bytes())?;

// false-optionals

// atomic sets
let mut atomic_sets = self.get_atomic_sets(None, &[], false);
atomic_sets.sort_unstable();
file.write_all(format!("atomic sets: {atomic_sets:?}\n").as_bytes())?;
output.write_all(format!("atomic sets: {atomic_sets:?}\n").as_bytes())?;

Ok(())
}
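
The refactor replaces the hard-coded `File`/`LineWriter` with a generic `impl Write` sink, so callers decide where the output goes. A minimal self-contained sketch of this pattern, using a hypothetical `write_report` stand-in rather than the actual `Ddnnf` type:

```rust
use std::io::Write;

// Hypothetical stand-in for a method like `write_anomalies`: it only
// requires some `impl Write`, so the caller chooses the destination.
fn write_report(mut output: impl Write) -> std::io::Result<()> {
    let core = [1, 3, 7];
    output.write_all(format!("core: {core:?}\n").as_bytes())?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Write into an in-memory buffer (handy for tests)...
    let mut buffer: Vec<u8> = Vec::new();
    write_report(&mut buffer)?;
    assert_eq!(String::from_utf8_lossy(&buffer), "core: [1, 3, 7]\n");

    // ...or to stdout; a `File` would work the same way.
    write_report(std::io::stdout())?;
    Ok(())
}
```

The same function now serves files, pipes, and test buffers without duplication, which is what lets the CLI route everything through one `-o` output option.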
32 changes: 1 addition & 31 deletions ddnnife/src/ddnnf/anomalies/t_wise_sampling.rs
@@ -7,16 +7,13 @@ mod sat_wrapper;
mod t_iterator;
mod t_wise_sampler;

use crate::util::format_vec;
use crate::Ddnnf;
pub use config::Config;
pub use sample::Sample;
use sample_merger::similarity_merger::SimilarityMerger;
use sample_merger::zipping_merger::ZippingMerger;
use sampling_result::SamplingResult;
pub use sampling_result::SamplingResult;
use sat_wrapper::SatWrapper;
use std::path::Path;
use std::{fs, io, iter};
use t_wise_sampler::TWiseSampler;

#[cfg_attr(feature = "uniffi", uniffi::export)]
@@ -37,33 +37,6 @@ impl Ddnnf {
}
}

pub fn save_sample_to_file(sampling_result: &SamplingResult, file_path: &str) -> io::Result<()> {
let file_path = Path::new(file_path);
if let Some(dir) = file_path.parent() {
fs::create_dir_all(dir)?;
}
let mut wtr = csv::Writer::from_path(file_path)?;

match sampling_result {
/*
Writing "true" and "false" to the file does not really fit the format of the file but we
want to somehow distinguish between true and false sampling results.
True means that the feature model contains no variables and therefore an empty sample
covers all t-wise interactions.
False means that the feature model is void.
*/
SamplingResult::Empty => wtr.write_record(iter::once("true"))?,
SamplingResult::Void => wtr.write_record(iter::once("false"))?,
SamplingResult::ResultWithSample(sample) => {
for (index, config) in sample.iter().enumerate() {
wtr.write_record([index.to_string(), format_vec(config.get_literals().iter())])?;
}
}
}

wtr.flush()
}
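
The deleted `save_sample_to_file` helper encoded three outcomes: `true` for an empty sample that still covers all interactions, `false` for a void model, and one indexed row per configuration otherwise. A self-contained sketch of that encoding, with simplified hypothetical types and plain line output in place of the `csv` crate:

```rust
use std::io::Write;

// Simplified stand-ins for the crate's sampling types.
enum SamplingResult {
    Empty,                           // no variables: empty sample covers everything
    Void,                            // feature model is unsatisfiable
    ResultWithSample(Vec<Vec<i32>>), // one literal vector per configuration
}

fn save_sample(result: &SamplingResult, mut out: impl Write) -> std::io::Result<()> {
    match result {
        SamplingResult::Empty => writeln!(out, "true")?,
        SamplingResult::Void => writeln!(out, "false")?,
        SamplingResult::ResultWithSample(sample) => {
            // One record per configuration: index, then its literals.
            for (index, config) in sample.iter().enumerate() {
                let literals: Vec<String> = config.iter().map(|l| l.to_string()).collect();
                writeln!(out, "{},{}", index, literals.join(" "))?;
            }
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let result = SamplingResult::ResultWithSample(vec![vec![1, -2], vec![-1, 2]]);
    let mut buf = Vec::new();
    save_sample(&result, &mut buf)?;
    assert_eq!(String::from_utf8_lossy(&buf), "0,1 -2\n1,-1 2\n");
    Ok(())
}
```

Taking `impl Write` here, like the `write_anomalies` change in this commit, keeps serialization separate from file handling.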

#[cfg(test)]
mod test {
use itertools::Itertools;